Is ChatGPT a ‘virus that has been released into the wild’?

More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco soon after he’d left his role as the president of Y Combinator to become CEO of OpenAI, the AI company he co-founded in 2015 with Elon Musk and others.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so incomprehensibly enormous that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Elon Musk, a cofounder of the outfit, has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points of the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as smart as people, the tech that OpenAI has since released into the world comes close enough that some critics fear it could be our undoing (and more sophisticated tech is reportedly coming).

Indeed, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are struggling to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”

We talked with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past and this obviously is a huge leap beyond. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation to whatever was used for training it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It does feel like it could eat up the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s exactly what we can not only expect but should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech and it’s not clear who steps in to address it.

I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’

I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.

And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What’s the thinking at MIT?

Andy McAfee and his group over there are more sanguine and have a more orthodox view out there that any time we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework assignments because we no longer know whether they’re cheating or not, that means that everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller and at the exact time that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service anymore.

What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m much less strong a proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.

Is ChatGPT a ‘virus that has been released into the wild’? by Connie Loizos originally published on TechCrunch

Daily Crunch: Grocery delivery app Getir bags rival Gorillas in a $1.2B acquisition

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

We’ve made it to Friday, folks. If you’re anything like me, that means finishing the workday with a well-deserved nap and reruns of “The Office.” Tweet, toot or Post at me about your favorite way to end the week.

Mark your calendar for a Twitter Space event on Tuesday, December 13 at 1 p.m. PST/4 p.m. EST featuring Builders VC investor Andrew Chen, who will speak with Walter about the role tech reporting plays in shaping ecosystems.

See you Monday! — Christine

The TechCrunch Top 3

Knock, knock, there’s a competitor at your door: Significant M&A in the food delivery space was only a matter of time, and Romain has details about a big one — Getir acquiring its competitor Gorillas in a deal that the Financial Times originally reported is valued at $1.2 billion.
Bye bye, Twitter Toolbox: Twitter tried to make it work with third-party developers, but alas, the company decided to make a clean break by shutting down some of its developer initiatives, including Toolbox. Ivan has more.
Tesla’s China boss gets a new gig…a factory: Rita is following a story about Tom Zhu, who oversaw Tesla’s China Gigafactory and has now been tapped to work his magic stateside leading Gigafactory Texas.

Startups and VC

More layoffs this week as Ingrid reports on Primer, an e-commerce infrastructure startup based in the U.K. that announced it would lay off one-third of its staff amid a restructuring to manage current market conditions.

Meanwhile, Haje believes you need the perfect summary slide for your pitch deck and has found some for you (requires a TechCrunch+ subscription).

And we have three more for you:

Going dark: Kirsten reports that executives at Brodmann17, a computer vision technology startup, made the decision to shut down after realizing it would not be able to bring its products to market.
What are your symptoms?: Japanese health tech startup Ubie secured $19 million in new funding to bring its AI-powered symptom checker technology to the U.S., Kate reports.
Making that dollar work for it: Kate also has a story on Akros Technologies, which raised $2.3 million in new capital to inject some artificial intelligence into asset management.

How to respond when a VC asks about your startup’s valuation

Image Credits: boschettophotography / Getty Images

When a VC inevitably asks about your valuation expectations, it is a trick question: If your response is too high, it’s a red flag, whereas a lowball figure undervalues the company.

“We’re letting the market price this round” is an appropriate reply, but only if you’ve already gathered substantial data points from other investors — and can fire back with a few questions of your own, says Evan Fisher, founder of Unicorn Capital.

“If that’s all you say, you’re in trouble because it can also be interpreted as ‘we don’t have a clue’ or ‘we’ll take what we’re given,’” said Fisher.

Instead of going in cold, he advises founders to pre-pitch investors from their next round and use takeaways from those conversations to shape current valuations.

In the article, Fisher includes sample questions “you will want to ask every VC you speak with,” along with other tips that will help “when they pop the valuation question.”

Three more from the TC+ team:

Taking that exit: Tim writes about Vanguard’s decision to get out of a carbon emissions initiative and what prompted the move.
SPAC is back: Getaround is now a public company after braving a chilly SPAC environment that has left other companies in the cold. Alex has more.
Bridging blockchain and the physical world: That’s what Jacquelyn writes Solana founders want to see happen as the company and others pick up the pieces and move on from the FTX collapse.

TechCrunch+ is our membership program that helps founders and startup teams get ahead of the pack. You can sign up here. Use code “DC” for a 15% discount on an annual subscription!

Big Tech Inc.

We are over here with our mouths open upon learning that crypto news publication The Block received some significant — and undisclosed — loans from former FTX CEO Sam Bankman-Fried’s company Alameda Research. As a result, CEO Michael McCaffrey is out and Bobby Moran, the company’s chief revenue officer, is taking over the role, but as Jacquelyn and Alex write, the conflict of interest will take some time to repair, if it can be repaired at all.

As we wait for the Federal Trade Commission to send news of Microsoft’s fate with Activision, Kyle writes that the cloud services giant acquired a different company, this time Lumenisity, a startup developing high-speed cables for transmitting data.

And three more for you:

Looking for that special gift?: Natasha L has some suggestions for your fitness-loving buddy, while Haje has some gift ideas to ensure your other friends are well caffeinated.
Meet Slack’s new CEO: When Slack announced that Lidiane Jones would be its new CEO, Ron wanted to shed some light on her career and how she got where she is today.
Exposed: Carly reports that CommonSpirit Health confirmed that data from over 620,000 patients was stolen during a ransomware attack in October.

Daily Crunch: Grocery delivery app Getir bags rival Gorillas in a $1.2B acquisition by Christine Hall originally published on TechCrunch

Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation

Twitter’s new owner, Elon Musk, is feverishly promoting his “Twitter Files”: selected internal communications from the company, laboriously tweeted out by sympathetic amanuenses. But Musk’s obvious conviction that he has released some partisan kraken is mistaken — far from conspiracy or systemic abuse, the files are a valuable peek behind the curtain of moderation at scale, hinting at the Sisyphean labors undertaken by every social media platform.

For a decade companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes equally out of reach of bad actors, regulators, and the press.

To reveal too much would be to expose the processes to abuse by spammers and scammers (who indeed take advantage of every leaked or published detail), while to reveal too little leads to damaging reports and rumors as they lose control over the narrative. Meanwhile they must be ready to justify and document their methods or risk censure and fines from government bodies.

The result is that while everyone knows a little about how exactly these companies inspect, filter, and arrange the content posted on their platforms, it’s just enough to be sure that what we’re seeing is only the tip of the iceberg.

Sometimes there are exposés of the methods we suspected — by-the-hour contractors clicking through violent and sexual imagery, an abhorrent but apparently necessary industry. Sometimes the companies overplay their hands, like repeated claims of how AI is revolutionizing moderation, and subsequent reports that AI systems for this purpose are inscrutable and unreliable.

What almost never happens — generally companies don’t do this unless they’re forced to — is that the actual tools and processes of content moderation at scale are exposed with no filter. And that’s what Musk has done, perhaps to his own peril, but surely to the great interest of anyone who ever wondered what moderators actually do, say, and click as they make decisions that may affect millions.

Pay no attention to the honest, complex conversation behind the curtain

The email chains, Slack conversations, and screenshots (or rather shots of screens) released over the last week provide a glimpse at this important and poorly understood process. What we see is a bit of the raw material, which is not the partisan illuminati some expected — though it is clear, by its highly selective presentation, that this is what we are meant to perceive.

Far from it: the people involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that the choice to limit or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

Leading up to the choice to temporarily restrict the Hunter Biden laptop story — probably at this point the most contentious moderation decision of the last few years, behind banning Trump — there is neither the partisanship nor conspiracy insinuated by the bombshell packaging of the documents.

Instead we find serious, thoughtful people attempting to reconcile conflicting and inadequate definitions and policies: What constitutes “hacked” materials? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we do, if we don’t limit? What precedents do we set or break?

The answers to these questions are not at all obvious, and are the kind of thing usually hashed out over months of research and discussion, or even in court (legal precedents affect legal language and repercussions). And they needed to be made fast, before the situation got out of control one way or the other. Dissent from within and without (from a U.S. Representative no less — ironically, doxxed in the thread along with Jack Dorsey in violation of the selfsame policy) was considered and honestly integrated.

“This is an emerging situation where the facts remain unclear,” said Former Trust and Safety Chief Yoel Roth. “We’re erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as they have been presented. Others say it’s not supported by their reading of the policy. One says they need to make the ad-hoc basis and extent of the action very clear since it will obviously be scrutinized as a partisan one. Deputy General Counsel Jim Baker calls for more information but says caution is warranted. There’s no clear precedent; the facts are at this point absent or unverified; some of the material is plainly non-consensual nude imagery.

“I believe Twitter itself should curtail what it recommends or puts in trending news, and your policy against QAnon groups is all good,” concedes Rep. Ro Khanna, while also arguing the action in question is a step too far. “It’s a hard balance.”

Neither the public nor the press have been privy to these conversations, and the truth is we’re as curious, and largely as in the dark, as our readers. It would be incorrect to call the published materials a complete or even accurate representation of the whole process (they are blatantly, if ineffectively, picked and chosen to fit a narrative), but even such as they are we are more informed than we were before.

Tools of the trade

Even more directly revealing was the next thread, which carried screenshots of the actual moderation tooling used by Twitter employees. While the thread disingenuously attempts to equate the use of these tools with shadow banning, the screenshots do not show nefarious activity, nor need they in order to be interesting.

Image Credits: Twitter

On the contrary, what is shown is compelling for the very reason that it is so prosaic, so blandly systematic. Here are the various techniques all social media companies have explained over and over that they use, but whereas before we had it couched in PR’s cheery diplomatic cant, now it is presented without comment: “Trends Blacklist,” “High Profile,” “DO NOT TAKE ACTION” and the rest.

Meanwhile Yoel Roth explains that the actions and policies need to be better aligned, that more research is required, that plans are underway to improve:

The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that… we’re going to need to make a more robust case to get this into our repertoire of policy remediations – especially for other policy domains.

Again the content belies the context it is presented in: these are hardly the deliberations of a secret liberal cabal lashing out at its ideological enemies with a ban hammer. It’s an enterprise-grade dashboard like you might see for lead tracking, logistics, or accounts, being discussed and iterated upon by sober-minded persons working within practical limitations and aiming to satisfy multiple stakeholders.

As it should be: Twitter has, like its fellow social media platforms, been working for years to make the process of moderation efficient and systematic enough to function at scale. Not just so the platform isn’t overrun with bots and spam, but in order to comply with legal frameworks like FTC orders and the GDPR. (Of which the “extensive, unfiltered access” outsiders were given to the pictured tool may well constitute a breach. The relevant authorities told TechCrunch they are “engaging” with Twitter on the matter.)

A handful of employees making arbitrary decisions with no rubric or oversight is no way to moderate effectively or meet such legal requirements; neither (as the resignation of several on Twitter’s Trust & Safety Council today testifies) is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that’s certainly what seems to be shown by the screenshots Musk has caused to be published.

What isn’t shown by the documents is any kind of systematic bias, which Musk’s stand-ins insinuate but don’t quite manage to substantiate. But whether or not it fits into the narrative they wish it to, what is being published is of interest to anyone who thinks these companies ought to be more forthcoming about their policies. That’s a win for transparency, even if Musk’s opaque approach accomplishes it more or less by accident.

Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation by Devin Coldewey originally published on TechCrunch

Proposed legislation would force US higher education endowments to reveal where they invest

In January, Missouri Representative Emanuel Cleaver will introduce the Endowment Transparency Act into Congress, a move that could, frankly, change everything.

Cleaver’s proposed legislation seeks to amend the Higher Education Act of 1965 to require universities and colleges to share information about how and whether they are allocating endowment investment funds to women- and minority-owned firms. These higher education institutions, which collectively manage more than $821 billion in assets, have been notoriously secretive about where they invest money and, despite calls for change, many have refused transparency.


Educational behemoths play a critical role in the venture market as limited partners. TechCrunch previously reported on the pressing need for those at the top to bear more responsibility for the uneven venture landscape that disproportionately shuts out women and people of color. Those calls have turned to cries for more diverse fund managers and more money invested into minority-led funds — or at least criteria that LPs hold venture general partners accountable for the type of companies in which they invest.

Harrowing stats on the number of diverse fund managers, paired with the dearth of capital allocated to minority fund managers and founders, are indicative of the role endowments can play in maintaining existing inequities. Speaking with TechCrunch, Cleaver said when he attempted to reach out to higher education institutions to discuss their reticent behavior regarding asset managerial diversity and allocation, even he, at times, had doors shut in his face.

“These colleges brag about inclusion because they have minorities on their faculty and an inclusive student body, so they think everything is OK,” he said. “But let me just say, it was noble to desegregate a student body and faculty, but it is nobler to desegregate economics because there are reasons things are as they are.”

Proposed legislation would force US higher education endowments to reveal where they invest by Dominic-Madori Davis originally published on TechCrunch

The Block founder says he’s exploring ways to get the publication into ‘trustworthy’ hands

Crypto news publication The Block announced today that its CEO, Michael McCaffrey, has resigned after failing to disclose a series of loans from former FTX CEO Sam Bankman-Fried’s company Alameda Research. Axios first reported the news.

The capital was used in part to finance an employee-led buyout of the company, among other more extracurricular activities.

McCaffrey will be replaced by the company’s chief revenue officer, Bobby Moran, effective immediately, according to a statement. “No one at The Block had any knowledge of this financial arrangement besides Mike,” Moran wrote.

McCaffrey confirmed that in a series of tweets Friday: “I didn’t disclose the loan to anyone. Absolutely no one at The Block knew about the financial arrangement between my holding company and SBF, including the editorial and the research teams.” He claimed his rationale for this decision was to not “compromise the objectivity” of coverage surrounding SBF.

The Block was founded in 2018 by Mike Dudas. In 2020, McCaffrey took over as CEO. By April 2021, McCaffrey had led a buyout of all of The Block’s investors, leaving the firm owned by its employees, with McCaffrey as the biggest stakeholder. Even today, he remains the company’s majority shareholder.

Dudas told TechCrunch in an exchange after the news came out that he is “exploring what, if any, avenues exist to get The Block into trustworthy ownership.”

Dudas explained to TechCrunch that at the time of The Block’s sale, his “understanding was that Mike McCaffrey’s family was wealthy and loaned him money to buy out [his stake] and the VCs so the team could assume full independent ownership.”

In a tweet, Dudas said, “I’d like to buy The Block back.”

Media companies must disclose conflicts of interest when they arise; even the appearance of conflicts can prove damaging to a brand, as they can undercut reader trust in its impartiality.

“Mike’s decision to take out a loan from SBF and not disclose that information demonstrates a serious lack of judgment,” Moran wrote. “It undermines The Block’s reputation and credibility, especially that of our reporters and researchers, as well as our efforts at industry-leading transparency.”

McCaffrey received three loans for a total of $43 million from Bankman-Fried/Alameda. The first loan was for $12 million and was used to buy out The Block and make him CEO in 2021. The second was for $15 million in January to fund operations at the media and data research company. Finally, earlier this year, a loan for $16 million was provided to McCaffrey to buy personal real estate in the Bahamas.

The Block funds its newsroom via a combination of advertisements and research. The publication’s data section is a tool that TechCrunch utilizes from time to time.

That SBF-related capital went into a media company is not a surprise; the former web3 mogul was prolific in his media investments. However, given the lack of disclosure relating to the loans in question, this particular episode is different. Current staff and former executives were incensed about the transactions, the lack of transparency and effectively being lied to by their leader for a lengthy period.

The Block founder says he’s exploring ways to get the publication into ‘trustworthy’ hands by Jacquelyn Melinek originally published on TechCrunch

Microsoft acquires startup developing high-speed cables for transmitting data

Microsoft today announced that it acquired Lumenisity, a U.K.-based startup developing “hollow core fiber (HCF)” technologies primarily for data centers and ISPs. Microsoft says that the purchase, the terms of which weren’t disclosed, will “expand [its] ability to further optimize its global cloud infrastructure” and “serve Microsoft’s cloud platform and services customers with strict latency and security requirements.”

HCF cables fundamentally combine optical fiber and coaxial cable. They’ve been around since the ’90s, but what Lumenisity brings to the table is a proprietary design with an air-filled center channel surrounded by a ring of glass tubes. The idea is that light can travel faster through air than glass; in a trial with Comcast in April, a single strand of Lumenisity HCF was reportedly able to deliver traffic rates ranging from 10 Gbps to 400 Gbps.
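The claimed speed advantage is easy to sanity-check: light propagates through a medium at c divided by the medium’s refractive index, so an air-filled core shaves roughly a third off the per-kilometer delay of a glass core. A rough back-of-the-envelope sketch in Python, using textbook index values rather than anything specific to Lumenisity’s design:

```python
# Back-of-the-envelope comparison of one-way propagation delay in a
# conventional glass-core fiber vs. an air-filled hollow-core fiber.
# Index values are generic textbook figures, not Lumenisity's specs.

C = 299_792.458   # speed of light in vacuum, km/s

N_GLASS = 1.468   # typical effective index of a silica fiber core
N_AIR = 1.0003    # refractive index of air

def delay_us_per_km(n: float) -> float:
    """One-way propagation delay in microseconds per kilometer of fiber."""
    return n / C * 1e6

glass = delay_us_per_km(N_GLASS)
air = delay_us_per_km(N_AIR)

print(f"glass fiber: {glass:.3f} us/km")   # → glass fiber: 4.897 us/km
print(f"hollow core: {air:.3f} us/km")     # → hollow core: 3.337 us/km
print(f"reduction:   {(1 - air / glass) * 100:.1f}%")  # → reduction: 31.9%
```

An index-only estimate like this yields roughly a 32% latency reduction; figures such as BT’s “up to 50%” presumably fold in additional network-level factors beyond raw propagation speed.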

“HCF can provide benefits across a broad range of industries including healthcare, financial services, manufacturing, retail and government,” Girish Bablani, CVP of Microsoft’s Azure Core business, wrote in a blog post. “For the public sector, HCF could provide enhanced security and intrusion detection for federal and local governments across the globe. In healthcare, because HCF can accommodate the size and volume of large data sets, it could help accelerate medical image retrieval, facilitating providers’ ability to ingest, persist and share medical imaging data in the cloud. And with the rise of the digital economy, HCF could help international financial institutions seeking fast, secure transactions across a broad geographic region.”

An illustration of Lumenisity’s cable design. Image Credits: Lumenisity

Lumenisity was founded in 2017 as a spinoff from the Optoelectronics Research Centre at the University of Southampton to commercialize research in HCF. Prior to the acquisition, the startup raised £12.5 million (~$15.35 million) in funding across several funding rounds from investors, including the Business Growth Fund and Parkwalk Advisors.

Lumenisity claims its fibers are deployed in customer networks “with the longest spans ever reported utilizing HCF technology.” Beyond Comcast, U.K. operator BT recently piloted Lumenisity’s tech, which BT claimed at the time had the potential to slash latency by up to 50% compared to traditional fiber. Infrastructure company euNetworks Fiber UK Limited is also testing Lumenisity cable to serve the London Stock Exchange.

Earlier this month, Lumenisity completed construction of a 40,000-square-foot HCF manufacturing facility in Romsey, U.K., which the company says will enable “scaled-up” production of its HCF technology in the future.

“This is the end of the beginning, and we are excited to start our new chapter as part of Microsoft to fulfil this technology’s full potential and continue our pursuit of unlocking new capabilities in communication networks,” Lumenisity wrote in a statement on its website. “We are proud to be acquired by a company with a shared vision that will accelerate our progress in the hollowcore space.”

Microsoft acquires startup developing high-speed cables for transmitting data by Kyle Wiggers originally published on TechCrunch

Ireland’s privacy watchdog engaging with Twitter over data access to reporters

Elon Musk’s desire to stir conspiratorial shit up by giving select outsiders aligned with his conservative agenda access to Twitter systems and data could land the world’s richest man in some serious doodoo with regulators on both sides of the Atlantic.

In recent days, this access granted by Musk to a few external reporters has led to the publication of what he and his cheerleaders are framing as an exposé of the platform’s prior approach to content moderation.

So far these “Twitter Files” releases, as he has branded them, have been a damp squib in terms of newsworthy revelations — unless the notion that a company with a large volume of user generated content A) employs trust and safety staff who discuss how to implement policies, including in B) fast-moving situations where all the facts around pieces of content may not yet be established; and C) also has moderation systems in place that can be applied to reduce the visibility of potentially harmful content (as an alternative to taking it down) is a particularly wild newsflash.

But these heavily amplified data dumps could yet create some hard news for Twitter — if Musk’s tactic of opening up its systems to external reporters boomerangs back in the form of regulatory sanctions.

Ireland’s Data Protection Commission (DPC), which is (at least for now) Twitter’s lead data protection regulator in the European Union, is seeking more details from Twitter about the outsider data access issue.

“The DPC has been in contact with Twitter this morning. We are engaging with Twitter on the matter to establish further details,” a spokeswoman told TechCrunch.

Earlier today, Bloomberg also reported on concerns over the pond about outsiders accessing Twitter user data — citing tweets by Facebook’s former CISO, Alex Stamos, who posited publicly that a Twitter thread posted yesterday by one of the reporters given access by Musk “should be enough for the FTC to open an investigation of the consent decree”.

Feels like Weiss’ thread should be enough for the FTC to open an investigation into a violation of the consent decree and perhaps get a subpoena for Twitter’s internal access logs.

— Alex Stamos (@alexstamos) December 9, 2022

Twitter’s FTC consent decree dates back to 2011 — and relates to allegations that the company misrepresented the “security and privacy” of user data over several years.

The social media firm was already fined $150 million back in May for breaching the order. But future penalties could be a lot more severe if the FTC deems it is flagrantly breaching the terms of the agreement. And the signs are foreboding, given the FTC already put Twitter on notice last month — warning that “no CEO or company is above the law”.

Another consideration here is the European Union’s General Data Protection Regulation (GDPR) — which contains a legal requirement that personal data is adequately protected.

This is known as the security — or “integrity and confidentiality” — principle of the GDPR, which states that personal data shall be:

processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).

Handing user data (and/or systems access that could expose user data) over to non-staff to sift through might therefore raise questions over whether Twitter is in full compliance with the GDPR’s security principle. There is a further question to consider here, too — of what legal basis Twitter is relying upon to hand over (non-public) user data to outsiders, if indeed that’s what’s happening.

On the face of it, Twitter users would hardly have knowingly consented to such extraordinary processing under its standard T&Cs. And it’s not clear what other legal bases could reasonably apply here. (Twitter’s terms invoke contractual necessity, legitimate interests, consent, or legal obligation, variously, as regards processing users’ direct messages or other non-public comms depending on the processing scenario — but which of any of those bases would fit, if it is indeed handing this kind of non-public user data to non-employees who are neither Twitter service providers nor entities like law enforcement etc, is debatable.)

Asked for her views on this, Lilian Edwards — a professor of Law, Innovation and Society at Newcastle Law School — told us that how the GDPR applies here isn’t cut and dried but she suggested Twitter disclosing data to unforeseen third parties (“who might share it willy-nilly”) could be a breach of the security principle.

“If you’ve consented [to Twitter’s expansive terms], have you authorized these uses — so no security breach? I think there has to be an element of egregiousness here,” she argued. “How much you didn’t expect this and how open to security and privacy threats it leaves you — e.g. if it includes personal info like passwords or phone numbers?”

“It’s tricky,” she added — citing guidance put out by the U.K.’s data protection authority which notes that security measures required under the GDPR “should seek to ensure that the data: can be accessed, altered, disclosed or deleted only by those you have authorized to do so (and that those people only act within the scope of the authority you give them)”.

“Well Musk has authorized them right, but should he? Are they security risks? I think a reasonable DPA would look at that quite sternly.”

At the time of writing, it is not clear which data exactly or how much systems access Twitter is providing to its chosen outsider reporters — so it’s not clear whether any non-public user data has been handed over or not.

One of the reporters given access by Twitter, journalist Bari Weiss, claimed in a tweet thread (which references four other writers associated with the publication she founded that will be reporting on the data) that: “The authors have broad and expanding access to Twitter’s files. The only condition we agreed to was that the material would first be published on Twitter.”

28. The authors have broad and expanding access to Twitter’s files. The only condition we agreed to was that the material would first be published on Twitter.

— Bari Weiss (@bariweiss) December 9, 2022

Another of these writers, Abigail Shrier, further claimed: “Our team was given extensive, unfiltered access to Twitter’s internal communication and systems.”

Our team was given extensive, unfiltered access to Twitter’s internal communication and systems. One of the things we wanted to know was whether Twitter systemically suppressed political speech.

Here’s what we found: https://t.co/Gjb397fnSr

— Abigail Shrier (@AbigailShrier) December 9, 2022

Still, both tweets lack specific detail on the kind of data they’re able to access.

Twitter has also — via an employee — denied it is providing the reporters with live access to non-public user data in response to alarm over the level of access being granted. The company’s new trust & safety lead, Ella Irwin, tweeted in the last few hours to claim that screenshots of an internal system view of accounts that were being shared online, seemingly showing details of the internal access provided to the outsiders by Twitter, did not depict live access to its systems.

Rather, she said she had herself provided these screenshots of this internal tool view to the reporters — “for security purposes”.

Correct. For security purposes, the screenshots requested came from me so we could ensure no PII was exposed. We did not give this access to reporters and no, reporters were not accessing user DMs.

— Ella Irwin (@ellagirwin) December 9, 2022

Irwin’s tweet also claimed that this screenshot sharing methodology was chosen to “ensure no PII [personally identifiable information] was exposed”.

“We did not give this access to reporters and no, reporters were not accessing user DMs,” she added in response to a Twitter user who had raised security concerns about the reporters’ access to its systems (and potentially to DMs). Irwin only joined Twitter in June as a product lead for trust & safety — but was elevated to head of trust & safety last month (via The Information) to replace the former head, Yoel Roth, who resigned after just two weeks working under Musk over concerns that policy by “dictatorial edict” from Musk was replacing its good faith application.

Setting aside the question of why Twitter’s new head of trust & safety is spending her time screenshotting internal data to share with non-staff whose purpose is to publish reports incorporating such information, her choice of nomenclature here is notable: “PII” is not a term you will find anywhere in the GDPR. It’s a term preferred by US entities keen to whittle the idea of ‘user privacy’ down to its barest minimum (i.e. actual name, email address etc), rather than recognizing that people’s privacy can be compromised in many more ways than via direct exposure of PII.

This is important because the relevant legal terminology in the GDPR is “personal data” — which is far broader than PII, encompassing a variety of data that might not be considered PII (such as IP address, advertiser IDs, location etc). So if Irwin’s primary concern is to avoid exposing “PII” she either does not understand — or is not prioritizing — the security of personal data as the EU’s GDPR understands it.

That should make European Union regulators concerned.

While Ireland’s DPC is currently the lead data supervisor for Twitter, since Musk took over the company at the end of October — and set about slashing headcount and driving scores more staff to leave of their own volition, including a trio of senior security, privacy and compliance executives who resigned simultaneously a month ago — questions have been raised about the status of its claim to be “main established” in Ireland for the GDPR.

As we’ve reported before, unilateral US-based decision making by Musk risks Twitter crashing out of the GDPR’s one-stop-shop (OSS) mechanism, as it requires decision making that affects EU users’ data to involve Twitter’s Irish entity. And if the company loses its claim to main establishment status in Ireland it would immediately crank up its regulatory risk as data supervisors across the EU, not just the DPC, would be able to open their own enquiries if they felt local users’ data was at risk.

With Musk now opening Twitter’s systems up to unexpected outsiders he’s putting on a very public spectacle that invokes big questions about security and privacy risks which — failing robust oversight by the DPC — could make other EU data protection authorities increasingly concerned about the integrity of Twitter’s Irish oversight, too. (And the GDPR does allow for emergency interventions by non-lead DPAs if they see a pressing risk to local users’ data, so Twitter could face dialled up scrutiny elsewhere in the EU even while still ostensibly inside the OSS, as TikTok recently has in Italy.)

Since Musk took over the company, Twitter has shuttered its communications function — so it was not possible to put questions to a press office about the level of data access that is being provided by Twitter to outsider reporters or the legal basis it’s relying upon for sharing this information. But we’re happy to include a statement from Twitter if it wants to send one.

Ireland’s privacy watchdog engaging with Twitter over data access to reporters by Natasha Lomas originally published on TechCrunch

Amazon ends support for third-party HIPAA-compliant Alexa skills

Amazon is ending support for a program that allowed patients to share HIPAA-protected health information with healthcare organizations through Alexa. The news was first reported by Voicebot.ai.

The invite-only program, which first launched in 2019, allowed select developers to create and launch HIPAA-compliant healthcare skills for Alexa (skills are the third-party voice apps that run on Alexa devices). The skills released as part of the program allowed consumers to ask the virtual assistant for help with things like booking an appointment, accessing hospital post-discharge instructions, checking on the status of a prescription delivery and more.

Amazon launched the program with six healthcare organizations, including Boston Children’s Hospital, Livongo, Swedish Health Connect, Cigna Health Today, Atrium Health and Express Scripts. As of last week, only three of these organizations had applications active on the Alexa Skills store, according to Voicebot.ai.

“We regularly review our experiences to ensure we are investing in services that will delight customers,” a spokesperson from Amazon told TechCrunch in an email. “We are continuing to invest heavily in developing healthcare experiences with first and third-party developers, including Alexa Smart Properties for Healthcare.”

The Alexa Smart Properties for Healthcare unit aims to make it easy and cost effective for hospitals and providers to care for their patients. Last year, Amazon rolled out new solutions for healthcare providers and senior living centers as part of Alexa Smart Properties. The solutions were designed to meet the needs of deploying Alexa devices at scale and will allow the facility’s administrators to create customized experiences for their residents or patients.

Amazon’s decision to end support for the HIPAA-compliant Alexa program comes as Business Insider recently reported the company is on pace to lose $10 billion this year from Alexa and other devices. In addition, Amazon’s Alexa team was reportedly the most-affected by layoffs at the company. Prior to the official layoffs announcement, reports indicated that Amazon’s leadership was closely evaluating its Alexa business.

This newest development is the latest turn in Amazon’s push into healthcare, as the company made numerous headlines this year in relation to its healthcare initiatives.

In August, the company shut down Amazon Care, an employer-focused telehealth and virtual primary care business. The service first launched in 2019 as a pilot program in Seattle, and it’s unclear just how much traction it had gained before being shut down.

Last month, the company launched Amazon Clinic, which Amazon describes as a virtual health “storefront.” With Amazon Clinic, users can search for, connect with and pay for telehealth care, addressing a variety of conditions that are some of the more popular for telehealth consultations today. Amazon Clinic initially launched in 32 states in the U.S.

Amazon ends support for third-party HIPAA-compliant Alexa skills by Aisha Malik originally published on TechCrunch

Gift Guide: On-the-go fitness tech to boost their training anywhere

Keeping fit doesn’t need a lot of technology. A decent pair of running shoes and an exercise mat might just do it. But of course sometimes a little extra tech can give an inspiring boost — so long as whatever it is is useful, accessible and can move with you.

The smart spot for fitness tech is stuff that enhances and/or motivates training and performance. Think well-designed kit, easy-to-access expertise, and trackers that give meaningful, actionable feedback, rather than expensive gym-style machinery that locks you into a subscription and chains you to the same static hardware every day.

So this holiday season if you’re buying a gift for a fitness lover or that special athlete in your life check out our round-up of smarter gift ideas — picked for their on-the-go potential to up their game or boost training anywhere.

This article contains links to affiliate partners where available. When you buy through these links, TechCrunch may earn an affiliate commission.

Beats Fit Pro exercise-friendly earbuds

Image Credits: Brian Heater

Exercising is often either a solo slog or a distracting cacophony at the gym so a good pair of headphones is a must. Just pop on a podcast or your favorite motivational music and off you go. But which buds to pick for a fitness fanatic? Apple-owned Beats’ Fit Pro earbuds are — as the name suggests — designed with physical activity in mind. So there’s at least a half-decent chance they won’t ping out mid run or slip out in a rain of fresh sweat.

As with Apple’s own brand AirPods, the Beats buds feature active noise cancelling but also a transparency mode so the wearer can stay aware of their surroundings — an essential consideration for road runners. For something a little less standard, the line had an update this summer when Beats announced a collaboration with Kim Kardashian, on a trio of nude/flesh toned Beats Fit Pro ‘phones — for an understated fashion statement.

Price: $200 from Amazon

Apple Fitness+ subscription

Image Credits: Apple

If you’re buying for an iPhone user, a subscription to Apple’s Fitness+ service could be a quick win — putting all sorts of video and audio workouts on tap on their device, from low intensity yoga to high octane HIIT. Back in October, Apple opened up access to Fitness+ by no longer requiring subscribers also own an Apple Watch so it’s more accessible than ever.

One gifting niggle: You can’t buy a dedicated Fitness+ gift sub from Apple — you’d have to purchase a general Apple Gift Card instead.

Price: For an idea of how much to load on the Gift Card, the cost of Fitness+ is $9.99 per month — or you could splash out $79.99 for a year’s access.

ClassPass Gift Card

Image Credits: champlifezy@gmail.com (opens in a new window) / Getty Images

A solid gift idea for a gym bunny who’s always on the road or just easily bored: ClassPass’s monthly fitness membership could be just the ticket as it gives the holder access to boutique workout studios around the world — letting them change up their routine to suit their mood, location, energy level and so on. Activities on offer run the gamut from yoga and pilates to dance, barre, boxing, bootcamp and many more.

ClassPass membership requires a subscription but your recipient doesn’t have to be a member already as there’s a gift purchase option. This lets you choose an amount to give — which can then be redeemed against a membership of their choosing.

Price: Varies by length of membership, but suggested gift amounts start at $50.

Fitbit Versa 4 smart watch

Image credits: Fitbit

Google-owned Fitbit has been honing a range of fitness smart watches for several years in a bid to challenge the Apple Watch’s dominance of the wearable category, building out from humble beginnings flogging step-tracking wristbands. Marketing for its Versa 4 smart watch touts “better results” from workout routines, thanks to features like a “daily readiness” score to help the wearer pick between a challenging workout or opting for a recovery day. It can also suggest workouts; provide a recommended daily active minutes goal; and serve up a wellness report (drawing on health tracking trends over the past 30 days) — as well as offering partner workouts on-demand — although you’ll need a premium subscription to access these extra bells & whistles (but six months comes bundled free with the smart watch so your recipient will get a good taster). Minus premium, the Fitbit Versa offers the usual core workout tracking plus real-time stats access that smart watches have become best known for.

Price: $230 for the smart watch from Amazon; $9.99 per month for premium (once free trial expires)

Whoop 4.0 membership

Image Credits: Whoop

Move over smart watches! Whoop’s faceless fitness band is geared towards athletes who are serious about tracking their performance and recovery in order to dial up their training and competitive potential. The company claims its sensor-packed tracker yields the “most in-depth fitness and health feedback” available on a wearable — touting “best in class” accuracy measurements that keep tabs on key vital signs like blood oxygen, skin temperature and heart rate metrics — with all this data put to work providing an individual “strain” score which is intended to smartly steer the wearer’s training. Other features include a haptic alarm that can be set to wake the wearer at an optimal time based on sleep needs and cycles (good luck not being late to the office with that though.)

Whoop’s wearable is sold as a fitness subscription with the latest version of the hardware bundled into the membership price. But gift subscriptions are available, with either a one year or two year membership priced at $300 and $480 respectively.

The company also sells a range of undergarments that are compatible with its tracking hardware, as they’re able to house the sensing pod next to your skin — which could make a nice alternative gift for an already paid-up Whoop member.

Price: Depends on length of membership

Nike ZoomX Vaporfly Next% 2 racing shoes

Image credits: Nike

Touted by sportswear giant Nike as one of the fastest shoes it’s ever made, the Nike ZoomX Vaporfly Next% 2 is ‘smart’ in the sense of being highly engineered for speed. Packing a full-length carbon fiber underfoot plate, the design creates a feeling of propulsion that’s meant to motivate runners to dig deep and up their pace. Layered below that is Nike’s cushiony ZoomX foam for added energetic bounce. Up top, the sneaker fabric incorporates a lightweight mesh for breathability.

The shoe is available in men’s and women’s models and a range of eye-popping colors. Gift heaven for runners.

Price: $250

Under Armour Flow Velociti Wind 2 run-tracking shoes

Image credits: Under Armour

What about a pair of shoes that automatically track your run? These lightweight Under Armour kicks (available in men’s or women’s models) have built-in sensors that let them track metrics like cadence, foot strike angle, stride length and splits, so there’s no need to strap on a smart watch or other type of exercise tracker. The sneakers connect to UA’s MapMyRun service to power run analysis, with access to the service bundled with the shoe up to December 31, 2024.

As well as capturing and crunching the runner’s data, UA’s digital fitness platform — which has its origins in its 2013 acquisition of MapMyFitness — provides motivational features, letting the wearer set goals and participate in monthly challenges. The “smart coaching” experience also includes personalized, audio running tips in real-time. And while the sneakers need pairing to a phone (via Bluetooth) and may require updating, at least there’s no manual charging required.

Price: $160

Agogie Resistance Pants

Image credits: Agogie

For the exercise lover who’s not big on apps (or ‘smart’ gadgets), these resistance pants offer a neat low-tech fitness gift option. There’s no tracking or quantification built in — just a little extra physical challenge since the pants come with eight elastic resistance bands sewn into seams running along the legs. The idea is that this will make your usual workout a little tougher by default as the added resistance activates muscles and works them a bit harder, helping boost strength and tone. The pants come in two grades of resistance, as well as in men’s and women’s sizes, with a variety of color options.

Price: $129 from Amazon

Straffr smart resistance band

Image Credits: Straffr

Give the gift of gym-class style inspiration on the go! German startup Straffr’s smart resistance band bestows its holder with the power to perform strength training workouts wherever they are and gives them real-time feedback.

The stretchy band contains sensors running along its length so it can quantify workout performance as you move. The band connects via Bluetooth to a mobile device running Straffr’s companion app — which dispenses feedback verbally as you flex, as well as logging stats, tracking progress and offering a bunch of on-demand strength and HIIT training workouts to help you structure a strength training session.

The smart band is available in two strength grades: Medium (5-15 kg) or Strong (15-25 kg).

Price: €99.99 (~$103) or €119.99 ($124) respectively.

Lumen track-it-and-hack-it metabolic fitness

Image Credits: Lumen

Lumen, a portable breath-testing CO2 sensor, came to market a few years ago. It’s the brainchild of a pair of endurance athletes who went looking for ways to better understand the impact of nutrition and workouts on their bodies to boost their performance. They came across an existing metabolic measurement, called RQ (Respiratory Quotient) — aka, the gold standard for measuring the metabolic fuel usage of an individual — which had been used by top-performing athletes for years but was expensive and difficult for a general consumer to access. Hence they set out to democratize access to elite metabolic tracking.

The upshot is a hand-held breath tester that they claim is able to measure an individual’s RQ in one breath and tell them whether their body is burning carbs or fats to get energy. The companion app guides the user to act on this metabolic tracking — nudging them to improve their metabolic flexibility through diet and exercise suggestions. How is all this good for fitness? Basically, better metabolic health means more energy available to knock it out of the park when you’re working out. So it’s about fuelling right to optimize athletic potential. Though it’s worth emphasizing that Lumen’s approach remains experimental, given the use of novel, proprietary technology.

The product is sold as a subscription service with the breath-testing hardware bundled as part of the initial sign-up price. Packages start at $249 for the Lumen and six months of service (after which the monthly price is $25). To gift the $249-six-month package Lumen offers a Gift Card service which emails a notification to your recipient and ships the product once they redeem it.

Price: Subscription plan starts at $249

Ultrahuman’s activity-sensitive smart ring

Image Credits: Ultrahuman

A rising trend in fitness-related health data is more general consumer use of continuous glucose monitoring tech — which was originally designed for diabetes management. CGMs contain sensing filaments which the user ‘wears’ in the skin of their arm to track their blood sugar swings — a form of semi-invasive tracking that’s being explored as a way to quantify diet and lifestyle and, the claim is, optimize how you exercise. Indian startup Ultrahuman is one of several fitness-focused firms commercializing CGM tech in recent years — in its case selling a subscription service (its Cyborg/M1 tracker) geared towards improving metabolic health and “supercharging” exercise performance.

A recent addition to its product mix is a smart ring, the eponymous Ultrahuman Ring, which is designed to work with the aforementioned M1 CGM subscription service — linking real-time blood glucose insights with other health data that’s picked up by the sensor-packed ring (the latter tracks the wearer’s sleep quality, stress levels and activity density).

The goal is to get a deeper understanding of the wearer’s metabolic events (since many factors can affect a person’s glucose levels) and serve up better nudges to help them optimize activity and lifestyle. But if buying a CGM as a present seems a bit daunting, the Ultrahuman Ring also works as a standalone (and subscription-free) health and fitness wearable, linked to its companion app. In this scenario the sensing hardware puts the focus on tracking sleep, stress, movement and recovery (with the potential to upgrade the level of tracking by adding an M1 sensor later).

As well as detailed sleep tracking metrics, the Ultrahuman Ring generates a “Movement Index” (aka a measure of physical activity vs inactivity throughout the day to track that balance) and a “Body Index”, based on tracking sleep, activity and stress, to give the wearer a steer on how primed they are for activity. So even without any semi-invasive sensor action, Ultrahuman claims the ring will guide its wearer to optimize their activity by finding the lowest effort required to get results.

The ring’s hardware has been designed with workouts in mind so it’s sweat and water resistant (up to 7ft). Plus it has enough built-in memory that its owner can work out without needing to also have their phone on them.

Price: $299

Gift Guide: On-the-go fitness tech to boost their training anywhere by Natasha Lomas originally published on TechCrunch

Announcing The Cross Chain Coalition Web3 Demo Day, a free online event

Despite a turbulent web3 market, dedicated people and companies continue working to build meaningful products and a better global financial system. Mark your calendar and join crypto experts, engineers, top-tier ecosystems, institutions and VCs for The Cross Chain Coalition Web3 Demo Day on January 11, 2023.

Start the New Year off right by joining this livestream showcase of 12 hot new startups building the future of web3 infrastructure, DeFi, NFT and gaming applications.

Don’t miss this free online event — register today!

The CCC Web3 Demo Day is packed with presentations and pitches. Check out one of the featured speakers:

Taariq Lewis is the founder of VolumeFi, an agency that specializes in the development and growth of blockchain protocols, which most recently supported the Paloma protocol. Lewis also co-founded the Cosmos-focused UniFi DAO, the Promise credit and repayment tracking protocol and Aquila, a B2B credit infrastructure platform.

Some of the other kings and queens of crypto you’ll hear from include:

Bryan Colligan, founder, AlphaGrowth
Unique Divine, co-founder, NibiruChain
Kavita Gupta, founder & GP, Delta Blockchain Fund
Elizabeth Yin, co-founder & General Partner, Hustle Fund

The Cross Chain Coalition Web3 Demo Day, which takes place on January 11, 2023, is a joint production between the CCC and TechCrunch. Register now for this free event to reserve your seat at the virtual table.

Announcing The Cross Chain Coalition Web3 Demo Day, a free online event by Lauren Simonds originally published on TechCrunch
