Another week of layoffs, executive departures and AI-generated everything

Hello again! Greg here with Week in Review. WiR is the newsletter where we take the most-read TechCrunch stories from the last seven days and wrap them up in as few words as possible — no fluff, no nonsense,* just a quick blast of everything you probably want to know about in tech this week.

*Maybe a little bit of nonsense.

Want it in your inbox every Saturday morning? Sign up here.

most read

Tip your Amazon driver (on Amazon’s dime): If you’ve got an Alexa device at home, Amazon will pay your delivery driver an extra $5 if you say, “Alexa, thank my driver” after a delivery. Amazon could, of course, just pay drivers more to begin with…but that, depressingly, probably wouldn’t be a move that would get Amazon one of the most read headlines of the week.

Slack’s CEO to depart: Last week Salesforce CEO Bret Taylor stepped down; this week, Stewart Butterfield, CEO of (Salesforce-owned) Slack, announced he’ll also step down come January. Ron Miller shares his insights on inbound Slack CEO Lidiane Jones and her decades of product experience.

The “Twitter Files”: “Elon Musk reminded his followers on Friday that owning Twitter now means he controls every aspect of the company — including what its employees said behind closed doors before he took over,” writes Taylor as an array of once-private internal Twitter communications is made public.

Lensa AI goes viral: Do all of your social media friends suddenly have avatars that make them look like sci-fi gods and action heroes? It’s probably because of Lensa AI, a photo editing app that went viral this week after adding support for Stable Diffusion’s AI-generated art tools. Popularity didn’t come without controversy, though — many continue to debate the ethics of selling something generated by an AI trained on the works of real people; meanwhile, others noted that the AI could be “tricked” into generating otherwise disallowed NSFW imagery.

More tech layoffs: This week Airtable laid off about 20% of its staff — over 250 people. Plaid also laid off 20%, which for them works out to 260 people. African fintech unicorn Chipper Cash let go of 50 people, and the U.K. drag-and-drop e-commerce platform Primer let go of 85 (about one-third of the company).

Google combines Maps/Waze teams: When Google bought the navigation app Waze for over $1 billion back in 2013, it said it would keep the Waze and Google Maps teams separate “for now.” Turns out “for now” meant about 9.5 years: Google confirmed this week that the two teams will be merged, though it says it expects Waze to remain a stand-alone service.

Twitter Blue might cost more on iOS: Twitter’s $8 “Blue” subscription plan (which comes with a blue “verified” checkmark) is still on pause for now after a few false starts, but when it returns, it’ll reportedly cost a few bucks more if you subscribe through the iOS app in order to offset Apple’s cut.

audio roundup

Found — our podcast about founders and the companies they build — has a new co-host! Becca Szkutak stepped into the role this week, joining Darrell Etherington in a chat with Daye CEO Valentina Milanova. Meanwhile, the Equity crew tried to make sense of 2022 in a year-end look back, and Taylor Hatmaker hopped on The TechCrunch Podcast to explore what the sudden explosion of AI-generated art means for actual human artists.

TechCrunch+

Here’s what subscribers were reading most on TechCrunch+:

Investors sound the alarm about possible private equity tech deals: “Who wants to sell when prices are low?” Ron Miller and Alex Wilhelm ask.

Rootine’s $10M pitch deck: “If you told me that a company that’s charging $70 per month for multivitamins would be able to raise a $10 million round, I’d demand to see the receipts,” writes Haje. With that in mind, he dives deep into the pitch deck that helped make it happen.

Another week of layoffs, executive departures and AI-generated everything by Greg Kumparak originally published on TechCrunch

Edtech’s brightest are struggling to pass

Welcome to Startups Weekly, a nuanced take on this week’s startup news and trends by Senior Reporter and Equity co-host Natasha Mascarenhas. To get this in your inbox, subscribe here.

Outschool laid off a quarter of staff, or 43 people, earlier this week, according to an email obtained by TechCrunch. The edtech company, last valued at $3 billion, confirmed the layoffs over email, citing a focus on core capabilities as “growth has come back down to earth.”

The email sent to staff was even more direct. “The truth is the layoffs in our sector are widespread for a reason,” wrote Amir Nathoo, co-founder of edtech unicorn Outschool. “The funding atmosphere has been dramatically impacted by the anticipation of a recession, higher interest rates and an increased need to show [return on investment] to investors.”

To employees, Nathoo’s tone is reminiscent of a conversation he had just months earlier, in July, when the entrepreneur had to run Outschool through its first round of layoffs, which impacted 18% of staff. His comments underscore how some of edtech’s boldest and best-capitalized companies are struggling. For example, Outschool’s double round of layoffs comes after it raised a Series B, C and D in 12 months and grew its valuation from $1 billion to $3 billion in an even shorter period.

As part of this week’s layoff, Outschool co-founder and head of product Nick Grandy is also leaving the company. “I understood that our growth would slow down once learners were able to be back in school full time; however I didn’t anticipate that our growth would slow as dramatically as it has,” Nathoo wrote in the email. “This is on me and I want to sincerely apologize.”

In the last quarter of 2022, edtech layoffs have hit venture-backed businesses including but not limited to BloomTech, Vedantu, Teachmint, Reforge, Coursera, Unacademy, Byju’s, Udacity and Brainly. Executive shifts include Quizlet’s CEO stepping down, Degreed’s CEO stepping aside for the founder’s return, and Invact Metaversity’s co-founder leaving after irreconcilable differences with his co-founder.

Class, an edtech company that neared unicorn status only 10 months after launching its Zoom School alternative, also conducted layoffs this year. The company has raised a total of $146 million in known venture funding to date, including a SoftBank Vision Fund II check. CEO and founder Michael Chasen did not respond to a request for comment.

Coding bootcamp BloomTech, formerly known as Lambda School, cut half of its staff last week in its third known round of layoffs since the pandemic began. Unlike Outschool and Class, BloomTech wasn’t on a rapid fundraising spree throughout the pandemic. Instead, its reasoning for the layoffs seems a bit more ambiguous, with CEO Austen Allred only explaining the decision by saying that “we had to cut costs to become profitable.”

The startups that most enjoyed the pandemic-era boom are now the same startups facing difficult questions about how to navigate a downturn that’s no longer just looming. But edtech is a sector that rose to an entirely different stratosphere in 2020 and 2021, as the demand for remote learning skyrocketed. As demand grew, so did investor appetite. The venture capital rounds that allowed companies to expand their idea of what a total addressable market could look like are the same tranches that may have fueled an overspending and overhiring spree that now requires a correction.

Unlike a sector like crypto, which experienced a similar bull run and is now handling a winter of its own, edtech’s explosion touched on uniquely human and non-techie needs. In Outschool’s case, it’s now pivoting to focus more on the tutoring end of its platform to combat the learning loss coming out of COVID-19.

It’s safe to say that the sector is shifting from disruption mode to maintenance mode.

But let’s pause our edtech digging and move on to other happenings from this week in tech. You can find me on Twitter, Substack and Instagram, where I publish more of my words and work. In the rest of this newsletter, we’ll talk about Airtable, Plaid and all your darn AI avatars.

Airtable and Plaid

We’ll stop talking about layoffs after this section, but there were two cuts this week that truly surprised me: Plaid laid off 20% of staff and, well, so did Airtable. This comes after a long string of layoffs in the fintech space, including but not limited to Chime, Stripe and Opendoor.

Here’s why this is important: Both of these startups were hiring, and touted as places for laid-off talent to apply, as recently as two weeks ago. All to say: there’s a lot of whiplash out there for job seekers, especially those recently laid off, around which companies they can trust for their next gig.

I do wonder why these late-stage companies waited so long to conduct layoffs, or if they truly thought they’d be able to ride through this downturn with high expenses. What changed to make them finally pull the plug? Note that Airtable’s layoff seems especially sweeping: its chief product officer, chief people officer and chief revenue officer are also parting ways with the company as it pivots to focus more on the enterprise side of its business.

Were you laid off? I am writing a story about where tech talent goes from here, and I’d love to chat (off the record is totally fine!). DM me on Twitter or Signal me at (925) 271 0912.
African fintech unicorn Chipper Cash lays off about 12.5% of staff
FTX marked down Chipper Cash’s $2B valuation to $1.25B

Image Credits: Bryce Durbin

All your AI avatars

My new flex is that I don’t have an AI avatar, and I’m only a little insecure about it! Jokes aside, if you have been on tech Twitter at all during the last few weeks, you’ve probably seen some pretty sleek, imaginative algorithmically generated portraits of your friends (and nemeses).

The company behind these magic avatars is Lensa AI, which has unsurprisingly been climbing up the app store. It’s damn cool. Yes, I’m tempted. But, not to rain all over your new Twitter pictures, there are already questions about how it’s being used and its impact on artists.

Here’s why this is important via my colleague, Taylor Hatmaker:

While the tech world has celebrated the advancements of AI image and text generators this year — and artists have watched the proceedings warily — your average Instagram user probably hasn’t struck up a philosophical conversation with ChatGPT or fed DALL-E absurdist prompts. That also means that most people haven’t grappled with the ethical implications of free, readily available AI tools like Stable Diffusion and how they’re poised to change entire industries — if we let them.

I strongly urge you to read Hatmaker’s piece to understand some of Lensa’s red flags, especially if you care about artists being appropriately credited and paid for their work and, well, the future of creation.

Meet Unstable Diffusion, the group trying to monetize AI porn generators
UPDATED: It’s way too easy to trick Lensa AI into making NSFW images
Prisma Labs, maker of Lensa AI, says it is working to prevent accidental generation of nudes
OpenAI’s ChatGPT shows why implementation is key with generative AI

Image Credits: Lensa AI

[Insert good news here]

We’re officially at the time of year, and part of the news cycle, when I’m desperately searching for good news to highlight. Without further ado, here’s what made me smile this week:

This Equity Crew attempt at recapping 2022 was a hot mess in all the right ways.
Wait, guys. The Big Brunch was so good. You need to watch, if not for the heartwarming chef stories, then for the satisfaction of watching everyone in the room constantly defer to Sohla El-Waylly.
TC’s Dominic-Madori Davis, aka one of my favorites, tells us that women are rising through the ranks at VC firms.
One of my favorite restaurants on campus is reopening after the pandemic made it close.
All Too Well: The Short Film (Behind the scenes)
Babies being cute!

Image Credits: TothGaborGyula / Getty Images

A few notes

Thanks to those of you who came with us to TC Sessions: Space. If you missed the flight, recaps to come!
Speaking of events, we want to meet your startup at CES this year! The team is already gathering the startups they want to cover — so fill out this form so we can get some early eyes on your innovation.
General shout out to my colleague, Mary Ann Azevedo, for bringing her full heart and energy to work every day.
VCs: Fill out this survey to tell us your spiciest 2023 predictions.
No pitches, please.
Big thank you to Equity listeners today and every day.

Seen on TechCrunch

Amazon will give your overworked delivery driver $5 if you ask Alexa to say thank you

Instant grocery app Getir acquires its competitor Gorillas

Theranos exec Sunny Balwani sentenced to 13 years in prison for defrauding patients and investors

Slack’s new CEO, Lidiane Jones, brings two decades of product experience to the job

Seen on TechCrunch+

As Butterfield exits stage left, it’s fair to wonder what’s happening at Salesforce

The era of constant innovation at Amazon could be over

Getaround braves chilly public markets with SPAC combination

How to respond when a VC asks about your startup’s valuation

Worry not: Down rounds are still rare by historical standards

If you made it this far, congratulations and thank you. I’d tell you to forward this to a friend, tell me what you think on Twitter or follow my personal blog for more emotional content — but also, I’m just glad you’re around and still care this close to the holidays.

Take care and stay warm,

N

Edtech’s brightest are struggling to pass by Natasha Mascarenhas originally published on TechCrunch

This Week in Apps: Apple App Store’s new pricing, Twitter app makers shift to Mastodon, debate over Lensa AI

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

Global app spending reached $65 billion in the first half of 2022, up only slightly from the $64.4 billion during the same period in 2021, as the hypergrowth fueled by the pandemic has slowed down. But overall, the app economy is continuing to grow, having produced a record number of downloads and record consumer spending across the iOS and Google Play stores combined in 2021, according to multiple year-end reports. Global spending across iOS and Google Play last year was $133 billion, and consumers downloaded 143.6 billion apps.

This Week in Apps offers a way to keep up with this fast-moving industry in one place with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and much more.

Do you want This Week in Apps in your inbox every Saturday? Sign up here: techcrunch.com/newsletters

Top Stories

App Store significantly expands pricing options

Apple this week loosened its requirements around how developers have to price their apps as legal and regulatory pressure over its tight control of the App Store intensifies. The company announced an expansion of its App Store pricing system to offer developers access to 700 additional price points, bringing the new total number of price points available to 900. It will also allow U.S. developers to set prices for apps, in-app purchases or subscriptions as low as $0.29 or as high as $10,000, and in rounded endings (like $1.00) instead of just $0.99. Similar new policies to reduce restrictions around price points will roll out in global markets, alongside new tools aimed at helping developers better manage pricing outside their local market.

The changes initially became available starting on December 6, 2022, for auto-renewable subscriptions. They’ll become available to paid apps and in-app purchases in spring 2023.

Developers will also now be able to publish prices that end in .00 (like $5.00 or €5.00) instead of only .99, or that begin with two repeating digits, like ₩110,000.

Plus, new pricing tools are being made available that allow developers to set their subscription prices in their local currency as the basis for automatically generating pricing across the other 174 storefronts and 44 currencies. When the pricing is set automatically, pricing outside a developer’s home market will update as foreign exchange and tax rates change. Developers can also still choose to set prices manually if they prefer. And they’ll be able to make in-app purchases available by storefront.
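To make the automatic-conversion idea above concrete, here's a rough sketch of how a base price might be converted for another storefront and snapped to a conventional ending. This is purely illustrative: Apple's actual rounding rules and price-point grid aren't public, and the exchange rates and the `localized_price_cents` helper below are assumptions for the example.

```python
# Hypothetical sketch of deriving a localized price from a base price.
# Works in minor units (cents) to avoid floating-point currency bugs.
def localized_price_cents(base_cents: int, fx_rate: float, ending_cents: int = 99) -> int:
    """Convert a base price at a given rate, then snap to a conventional
    ending such as X.99 (ending_cents=99) or X.00 (ending_cents=0)."""
    raw_cents = base_cents * fx_rate        # convert at the current rate
    whole_units = int(raw_cents // 100)     # floor to whole currency units
    return whole_units * 100 + ending_cents

# A $4.99 base price at a hypothetical 0.93 EUR/USD rate -> 4.99 in EUR
print(localized_price_cents(499, 0.93))        # 499 minor units
# A $9.99 base price at a hypothetical 150.0 JPY/USD rate, with a .00 ending
print(localized_price_cents(999, 150.0, 0))    # 149800 minor units
```

As exchange and tax rates change, re-running a conversion like this would refresh each storefront's price — roughly the automatic behavior described above, with manual overrides still available per storefront.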

The changes rolled out after Apple last year settled a class action lawsuit with U.S. app developers, which included a number of concessions, one of which was an agreement to expand the number of price points available from fewer than 100 to more than 500.

Debate over top app Lensa AI

The photo editing app Lensa AI has been going viral over a new feature that offers to create “magic avatars” from a series of uploaded selfie photos. The avatars are created using the open source Stable Diffusion model to transform your photos into those that look like they were created by a digital artist. But there’s controversy surrounding how these images are made.

The feature isn’t quick or cheap — the processing time can take half an hour or even multiple hours to complete. Lensa’s pricing model is also fairly crafty. It’s either $3.99 for 50 unique avatars (five variations of 10 different styles) if you’re a subscriber, or $7.99 if not. If you want more avatars, it costs $11.99 for 100 unique avatars (10 variations of 10 styles) or $15.99 for 200 avatars (20 variations of 10 styles) — which, again, can be discounted by 50% if you subscribe. This sort of hybrid pricing was hailed as both clever and opportunistic. It’s also one of those apps that uses dark patterns to try to get users to subscribe immediately upon first launch, with a pop-up splash screen you have to bypass to use the app for free.
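The per-avatar math makes the tiering clearer. Here's a quick sketch using the non-subscriber prices quoted above; the subscriber figures assume the roughly 50% discount described in the text, so treat the output as an illustration rather than the app's exact pricing.

```python
# Per-avatar cost for the Lensa tiers quoted above (non-subscriber USD
# prices); the subscriber column assumes the ~50% discount in the text.
tiers = {50: 7.99, 100: 11.99, 200: 15.99}  # avatar count -> price

for count, price in tiers.items():
    per_avatar = price / count
    print(f"{count:>3} avatars: ${per_avatar:.3f} each "
          f"(~${per_avatar / 2:.3f} for subscribers)")
```

The per-avatar price roughly halves as you move up the packs, from about $0.16 at the smallest tier to about $0.08 at the largest — which helps explain why the hybrid pricing struck people as both clever and opportunistic.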

But the big backlash isn’t over the cost, it’s about Stable Diffusion, the AI generator powering the service. The AI was originally trained on 2.3 billion captioned images from the internet, some of which are watermarked and copyrighted works, as well as a number of images from sites like Pinterest, Smugmug, Flickr, DeviantArt, ArtStation, Getty and Shutterstock. The issue at hand is that artists didn’t opt in to have their work included in the training data, nor can they now opt out.

Artists, understandably, are concerned. Now their unique styles are being duped by an AI model, meaning their original art will become lost among the now-numerous auto-generated copycats. Some see this not only as an existential threat, but also as a form of unregulated stealing. Consumers, meanwhile, were simply enjoying the photos they had paid for without an understanding of how the tech worked, then became subject to backlash or shaming from those who did. More conscientious objectors soon realized they had just thrown away their money on profile pictures they no longer felt ethically comfortable using. These are complex problems that need more discussion. At least when Instagram first launched its filters, they were donated by an artist, Cole Rise. (While he may have later regretted giving up his art to the company, the filters weren’t stolen.)

In addition, the app maker faced another serious issue when people discovered it was easy to make non-consensual nude images in the app. If users uploaded Photoshopped images of topless models with someone else’s face, for example, the AI’s NSFW filter would be bypassed, and the app would create higher-quality AI avatars of the person whose face was uploaded onto the topless photo. The company said it was working to prevent this from occurring and noted that attempting to make NSFW content violates its terms of use.

Image Credits: Lensa AI

Third-party Twitter app developers are now building for Mastodon

There’s a subtle stirring in the Twitter app ecosystem as third-party developers are beginning to rethink their dependence on Twitter’s API.

Now having grown to 3.3 million+ active users, the open source Twitter alternative Mastodon has been gaining interest from third-party Twitter app developers in recent days. The makers of popular Twitter clients, including Aviary and Tweetbot, have set their sights on building similar apps for the growing Mastodon user base.

Image Credits: Tapbots

App developer Tapbots, known for its popular Twitter app Tweetbot for iOS and Mac, is building an app for the Mastodon community. The app is similar to Tweetbot, which is hailed as one of the third-party Twitter clients that keeps improving with age. This year’s release of Tweetbot 7, for example, added features like picture-in-picture, a stats tab and widgets. Now, Tapbots is working on Ivory, a subscription-based app for Mastodon that includes access to key features like your home timeline, @ mentions, favorites, search and trends, and your own user profile. Tapbots developer Paul Haddad said the goal is to first ship a stable 1.0, then start adding more Mastodon-specific features, as well as some features that he had wanted to add to Tweetbot but couldn’t because of technical limitations.

Aviary’s developer, Shihab Mehboob, meanwhile, is building a Mastodon client called Mammoth. The new app will be a paid download with a yet-to-be-determined price, and will include the latest Mastodon API features when they’re released, as well as Mastodon 4.0 features like editing posts and edit history.

App makers aren’t the only developers impacted by the chaos at Twitter. As TechCrunch reported, Typefully, a Twitter thread-making app backed by Ev Williams, is now planning to shift focus to LinkedIn. Scheduler Chirr App is also working on a Mastodon integration, and Tweepsmap just launched a post scheduler for Mastodon, too.

Developer News

iOS 16.2 RC has arrived ahead of next week’s public release. This is one of the bigger updates, as it will bring the new Freeform app, the just-announced karaoke experience called Apple Music Sing, the 10-minute AirDrop limit that hit China first, new Sleep and Medication widgets, new Home app architecture and, with iPadOS 16.2, the ability to use Stage Manager on iPads with an external display, among other things. It will also bring 5G support to iPhones in India.
A day after Apple revamped its App Store pricing, RevenueCat said it would roll out A/B price testing features.
Apple is launching Advanced Data Protection, a feature that offers end-to-end encryption on iCloud backups, Notes, Photos and more in the U.S. in 2022, then globally, including China, in 2023. There will be 23 data categories protected, with the exception of iCloud Mail, Contacts and Calendar, because of the need to interoperate with other systems. The FBI is not happy.
Apple also announced an iMessage feature that will help users verify they’re messaging only the people they intended, plus Apple ID support for hardware security keys.
However, the company said it’s pausing its efforts in launching a CSAM detection tool for iCloud Photos.
The developer series “Ask Apple” is returning December 12-16, for another week of one-on-ones and group Q&As around app building.
Google’s Pixel update brings the Google One VPN and Clear Calling (call enhancement) feature to Pixel 7/7 Pro and automatic speaker labels in the Recorder app to Pixel 6 and up. All Pixel devices will also gain a new privacy hub for settings.

App Updates

Image Credits: TechCrunch

Reddit rolled out its fun end-of-year Recap experience to users, which includes stats about your time on site, the communities you engage with the most and more. This year, seemingly inspired by the popular Spotify Wrapped experience, Reddit is also doling out personalized, sharable cards that include fun stats, like your most upvoted comment or if you’re team cat or dog, among other things.
Snap announced at its annual Lens Fest that it now has 300,000 developers building AR products and will soon allow creators to build Lenses that feature digital goods that can be purchased with Snap Tokens. Users will be able to unlock power-ups, AR items and extra tools within select Lenses as part of this test. Snap also announced a partnership with Adidas for a Bitmoji Fashion Drop.

Image Credits: Snap

Twitter is going to launch dual pricing for its upcoming Twitter Blue subscription relaunch. App Store users will pay $11 per month due to the “Apple tax,” while those who pay on the web will only be charged $7 per month.
Calm’s meditation app is catering to gamers with the addition of new auditory environments from games like “Halo Infinite” and “Sea of Thieves,” which can help some people focus and boost their mood.
Instagram’s latest feature will inform creators and brands if their content is ineligible for recommendation and why.

Image Credits: Instagram

Celeb greetings app Cameo debuted Cameo Kids, offering personalized video messages from popular kids’ characters like Santa and Thomas the Tank Engine.

Image Credits: Cameo

Epic Games launched Cabined Accounts for kids under 13 in Fortnite, Rocket League and Fall Guys.
Microsoft Teams wants to reach consumers with its new Communities feature for clubs and groups who need to organize and communicate. Slack and Discord also target this market, as do Facebook Groups and WhatsApp’s new Communities.
Telegram is letting users sign up without a SIM by offering virtual numbers for purchase via “toncoins” in partnership with Fragment. The company also announced that its Premium tier passed the 1 million subscriber milestone.
WhatsApp added 3D avatars. Users can set the personalized avatars as their profile photo or choose from one of 36 stickers reflecting different emotions and actions.

Image Credits: WhatsApp

TikTok released its year-end trends list. Apparently, a chocolate giraffe was very popular, as was a collection of dumb life hacks.
Yelp takes on Angie’s List (now just called Angi) with a new way to hire service professionals in the app.
Amazon Luna now allows Prime members to play already purchased Ubisoft games on Luna without having to subscribe.
YouTube is rolling out its own take on Twitch emotes.
Snapchat-owned social mapping app Zenly is shutting down. RIP.
Facebook Dating is now going to test the same age verification tech that Meta is currently testing on Instagram. ID uploads or selfie videos may be required for users suspected of being underage.
Robinhood added a waitlist for a new Robinhood Retirement feature, which will offer IRAs with a 1% match on every dollar contributed.
Roblox is going to allow users 13 and up to import their contacts and will introduce friend recommendations.

This Week in Twitter Drama

Image Credits: TechCrunch

A lot happened at the chaotic bird app company this week! In case you haven’t been keeping up with the drama, here’s a quick review:

Twitter is trying to lure back advertisers with huge incentives, like matching $500,000 to $1 million in spending, up to a $1 million cap. Twitter’s ad revenue is said to be 80% below expectations as of the World Cup’s November 20 start.
Elon Musk made a big deal about publishing internal emails over Twitter’s Hunter Biden laptop drama, calling the reveals “The Twitter Files.” But the hyped event fell a little flat, as the emails only seemed to show a company having conversations over difficult content moderation decisions. Those conversations ultimately resulted in the company taking the unusual step of limiting the reach of a news story at the time, over concerns it came from a hack-and-leak campaign by a Russian group and would violate Twitter’s anti-doxing policy. Twitter had already admitted, in hindsight (and with the aid of further information and reporting), that the decision was wrong.
Twitter’s new VP of Trust and Safety Ella Irwin said the company will now focus on using automation to moderate content, not removals.
Twitter later released another “Twitter File” that again showed the company simply doing the difficult business of moderation, in this case conflating shadowbanning with de-amplification, as former Twitter exec Kayvon Beykpour pointed out, calling the characterization either “a lazy interpretation or deliberately misleading.”
Additional concerns were raised as to whether or not the reporter tweeting the story (who is not a Twitter employee) was given internal systems access, as she included screenshots of Twitter’s internal systems in the posts.
Musk also “exited” deputy general counsel Jim Baker, claiming there were concerns over Baker’s “possible role in suppression of information” related to the so-called Twitter Files.
Musk said a later Twitter update will show users if they’ve been shadowbanned and why.
Twitter allowed Andrew Anglin, a neo-Nazi and founder of the white supremacist website The Daily Stormer, back on the app.
Major brands’ ads appeared on the pages of two white nationalists after their accounts were restored by Musk. Ads from Amazon, Snap, Uber and others were among those impacted. Twitter later emailed advertisers to say it will launch controls to prevent ads from showing up next to certain keywords.
Twitter is facing multiple lawsuits over layoffs, which could cost it millions in arbitration fees. Some workers on H-1B visas also said they didn’t get adequate immigration support.
Twitter’s iOS app has been facing issues with a number of key security features, including the ability to protect tweets or toggle DM settings.
Twitter will offer two different prices for Twitter Blue: a $7 per month subscription if you buy from the website, or $11 if you buy via in-app purchase, to offset the “Apple tax.”
In the meantime, it started savagely changing legacy verified users‘ checkmarks to inform users who clicked that “this is a legacy verified account. It may or may not be notable.”
Twitter shut down various developer-focused projects like Twitter Toolbox and others.
Musk is now promising to delete 1.5 billion inactive accounts to free up usernames.

We’re Thinking About…

Image Credits: Facebook/Meta

A Facebook Twitter clone?

The New York Times took a look at the new startups and social apps capitalizing on Twitter’s chaos after the Musk takeover, also dropping the bombshell reveal that Meta may be cranking up its clone machine to dupe Twitter. The company had already been testing a short message-sharing feature in Instagram called Instagram Notes, but was now wondering if that sort of product should be its own standalone app or another feed within Instagram. While there’s a clear opportunity to gain traction amid Twitter’s transition when some users are looking for an out, we hope Meta doesn’t add even more clutter to the already overwhelmingly busy Instagram app and instead chooses to take a real risk here.

Meta hasn’t successfully launched a new app in years, so it’s easy to see why it wouldn’t want to try now. But it would be so, so interesting if it made a text-heavy, simplified version of Facebook — let’s call it FB Classic (Gen Z loves nostalgia!) — where the News Feed instead becomes a real-time Twitter-like feed. No complicated navigation, no private groups, no Reels, no marketplace or game streams, or all the other detritus of today’s Facebook. Have it all run through Facebook’s existing systems for reporting and moderation. Let people privately post to friends or choose to be more public. Imagine if Facebook duped Twitter’s core feature set around posts, replies and threads, favorites and the like, but didn’t take on extra features. Perhaps even offer the ability to sync select posts from Facebook (and the forthcoming IG Notes) to the app’s Twitter-like feed for a minimalist experience to get everyone started…I mean, I know Meta is building the metaverse now, but…a Twitter 2.0 arms race between Meta and Twitter itself would be wild to watch.

A Microsoft Super App?

Microsoft has discussed debuting a super app with web search, news and shopping to better compete with Google, according to The Information. It’s crazy to imagine how expensive and difficult it would be to get users to download another search app at this point in time, when even Google is complaining it’s losing search market share to TikTok and Instagram. Hey, maybe Microsoft should index TikTok and launch a Gen Z marketing campaign on the app, calling its new super app a better search engine for TikTok videos! Ha! After all, it did consider buying TikTok. Okay, okay, I kid. But if a Microsoft Super App is to succeed, it’s definitely going to need more than Bing (still cannot believe they named it that) to lure in the next generation of users.

Government, Policy and Lawsuits

Meta failed in its attempt to annul a $267 million fine over WhatsApp’s breach of GDPR transparency obligations.
Indiana’s attorney general sued TikTok for deceiving users over China’s access to user data and exposing kids to mature content.
EU regulators ruled that Meta can’t use its Terms of Service to require users to see personalized ads on Facebook and Instagram.
Texas banned TikTok on government-issued devices. Other states have done the same, including South Dakota, South Carolina and Maryland.
The SEC is investigating whether the social events app IRL misled investors. The company raised $170 million from SoftBank at a $1.17 billion valuation last year, but some employees told The Information they didn’t believe the app had the 20 million users it claimed.
Uber Eats settled a lawsuit with the City of Chicago for listing local restaurants in the Uber Eats and Postmates apps without the restaurants’ consent and charging excess commission fees. Uber will pay $10 million, $5 million of which will go toward paying damages to restaurants.

2023 Predictions

Data.ai released its annual report predicting the next big mobile trends for 2023 (below). The full report is here. Of particular interest is its bet that mobile ad spend will reach $362 billion next year.

Image Credits: data.ai

Funding and M&A

Norwegian grocery delivery app Oda raised 1.5 billion Norwegian crowns (about $151 million) in equity funding at a lowered valuation of $353 million. The service operates in its home market as well as Finland and Germany.

Singapore super app Osome raised $25 million in Series B funding. The app helps business owners with administrative tasks like payroll, accounting and tax reporting and serves over 11,000 businesses.

Saudi Arabia-based food delivery service Jahez is acquiring The Chefz in a cash and stock deal for $173 million.

Downloads

Lensa AI

Well, you probably still want to see it! The avatars Lensa AI makes with Stable Diffusion are impressive, despite the controversy. It’s a great demo of a potential use case for AI, even if ethically fraught. Demand for AI avatars is clearly strong. Lensa’s popularity is having a knock-on effect across the App Store’s Top Charts, as now apps like AI Art, Image Generator; Meitu, Photo Editor & AI Art; Wonder, AI Art Generator; Dawn, AI Avatars; and Prequel, Aesthetic AI Editor have all entered the top 30. The apps are benefitting because they have “AI” in their names — and, in some cases, have bought App Store Search ads.

Also:

Proton: The E2EE cloud storage service launched iOS and Android apps offering 1GB of storage for free, 200GB for $4 per month, or 500GB for $10 per month.
Copilot: The popular budgeting and finance app launched a Mac counterpart.

This Week in Apps: Apple App Store’s new pricing, Twitter app makers shift to Mastodon, debate over Lensa AI by Sarah Perez originally published on TechCrunch

Hybrid pricing can help app developers better monetize their apps

Welcome to the TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily TechCrunch+ column where it gets its name. Want it in your inbox every Saturday? Sign up here.

It seems like everyone has been talking about Lensa AI this week, but a less obvious point of this AI-enabled chart topper caught my attention: its hybrid pricing. Could it be an example of a new path that more developers will follow? Let’s explore. — Anna

Black Friday(s)

App pricing has long been a point of contention between Apple and developers. “Staunch Apple critics, like Spotify for example, have argued for years that the lack of pricing flexibility hinders their business,” TechCrunch’s Sarah Perez reported.

Apple seems to have taken complaints into account: This week, the Cupertino firm announced that its App Store pricing system will progressively enable more price points and will allow developers to “set prices for apps, in-app purchases or subscriptions as low as $0.29 or as high as $10,000, and in rounded endings (like $1.00) instead of just $0.99.”

The demand for more pricing flexibility is a consequence of another trend: A shift from paid app downloads to subscription-based models that may require more granularity to optimize revenue. But some developers haven’t waited for Apple’s decision to come up with hybrid pricing models — those combining subscription economics and other types of monetization — that could soon become the norm.

Hybrid pricing can help app developers better monetize their apps by Anna Heim originally published on TechCrunch

With Kite’s demise, can generative AI for code succeed?

Kite, a startup developing an AI-powered coding assistant, abruptly shut down last month. Despite securing tens of millions of dollars in VC backing, Kite struggled to pay the bills, founder Adam Smith revealed in a postmortem blog post, running into engineering headwinds that made finding a product-market fit essentially impossible.

“We failed to deliver our vision of AI-assisted programming because we were 10+ years too early to market, i.e., the tech is not ready yet,” Smith said. “Our product did not monetize, and it took too long to figure that out.”

Kite’s failure doesn’t bode well for the many other companies pursuing — and attempting to commercialize — generative AI for coding. Copilot is perhaps the highest-profile example, a code-generating tool developed by GitHub and OpenAI priced at $10 per month. But Smith notes that while Copilot shows a lot of promise, it still has “a long way to go” — estimating that it could cost over $100 million to build a “production-quality” tool capable of synthesizing code reliably.

To get a sense of the challenges that lie ahead for players in the generative code space, TechCrunch spoke with startups developing AI systems for coding, including Tabnine and DeepCode, which Snyk acquired in 2020. Tabnine’s service predicts and suggests next lines of code based on context and syntax, like Copilot. DeepCode works a bit differently, using AI to notify developers of bugs as they code.

Tabnine CEO Dror Weiss was transparent about what he sees as the barriers standing in the way of code-synthesizing systems’ mass adoption: the AI itself, user experience and monetization.

With Kite’s demise, can generative AI for code succeed? by Kyle Wiggers originally published on TechCrunch

Here’s how Audi Q4 50 E-Tron stacks up against the Tesla Model Y

After many delays, Audi has a new E-Tron on the market. It’s the Q4, a little SUV based on the same MEB platform that sits beneath the Volkswagen ID.4’s skin.

That means a similar layout and, most significantly, the identical 82-kWh battery pack, here wrapped in a much more premium look and feel to match its premium starting price of $48,800.

It’s easy to compare the Q4 to its VW corporate cousin or even the other Audi E-Tron models; we thought it’d be more interesting to line Audi’s latest up against Tesla’s stalwart SUV, the Model Y. Once meant to be an affordable entry to the world of electrification, the Y seems to get more expensive by the day. Today’s starting price? $65,990.

Given that financial delta, this Audi versus Tesla fight might seem unfair. Read on to see that these two are more evenly matched than you might think.

Here’s a breakdown of the 2022 Audi Q4 50 vs. the Tesla Model Y by category:

Exterior Design

Image Credits: Tim Stevens

Design will always be subjective, but it’s hard to get excited about the Model Y’s style. Why? Well, it stole the bulk of its look from the Model 3, a car unveiled over six years ago. At best, the Model Y is an inoffensive, slightly more bulbous version of a sedan that is in desperate need of a visual refresh.

Audi’s Q4 E-Tron, on the other hand, has a thoroughly modern look. Creased fender flares at every corner give it some subtle aggression, while the big grille at the front identifies its family lineage even if it’s only there for looks.

Silver insets front, sides and rear provide some visual flair, though buyers have the option to black all that out if they like — as on the car I tested. That, plus a choice of four exterior colors and three separate wheel designs, gives the Audi an edge when it comes to factory personalization.

Exterior Design Winner: Audi Q4 E-Tron

Interior Design

Image Credits: Tesla

I fear these cars’ interior designs may prove to be even more subjective, but it’s hard to see the inside of a Model Y as approaching the level of quality you’d expect from a $65,000 car. Every time I sit in one, I’m reminded that it was designed to simplify manufacturing and minimize cost. If you love digging through touchscreen menus while you’re driving, then you’ll probably prefer the Tesla.

Me, I like a little more tactility.

That said, the Audi’s interior isn’t perfect. We’ll start with the good: materials and overall fit and finish are far better on the Q4. There are a few cheap plastics to be found, though, and the swaths of fingerprinty piano black surfaces are a real drag.

However, the leather seats look and feel great and, crucially, buyers can choose between three different interior trims compared to the Tesla’s two. There’s plenty of room, even in the back seats — which, by the way, have their own climate controls and a pair of USB-C ports.

Interior of the 2022 Audi Q4 50 E-Tron Image Credits: Tim Stevens

There’s a touchscreen up front, of course, but discrete HVAC controls with real buttons live right underneath. A gauge cluster lives behind the steering wheel, something Tesla’s Model 3 and Model Y both do without, and higher trims add an augmented-reality heads-up display.

But it’s not all good news.

Image Credits: Tim Stevens

While the Audi’s steering wheel controls are comprehensive, they are capacitive touch buttons with little space between them. I couldn’t tell you the number of times I accidentally triggered voice commands when trying to raise the volume. Meanwhile, the steering wheel heater might be the weakest I’ve ever experienced, and the heated seats are tepid at best.

No frunk in the Audi Q4 50 Image Credits: Tim Stevens

Finally, there’s no frunk, which I know some of you feel strongly about. That helps give the Model Y the edge on cargo capacity: 34.3 cubic feet with the seats up compared to the Q4’s 24.8.

The Audi’s better materials, additional displays and overall design make it look and feel far better, but those steering controls and the relative lack of cargo space make it hard to pick a winner.

Interior Design Winner: Draw

Tech and safety

Image Credits: Tesla

Tesla was an early adopter and extreme proponent of big touchscreens in cars. When the Model S was first introduced, that was an exciting thing. A decade later, Tesla’s user interface looks and feels dated. That everything from wiper speed to brake regen mode is buried in menu after menu is a real annoyance, too.

More troubling is the continued absence of both Android Auto and Apple CarPlay. It’s clear that this situation is unlikely to ever change. The Model Y has dozens of pointless games and streaming media services. I’d trade them all for the ability to use Google Maps and easily stream from YouTube Music.

Audi’s infotainment system, on the other hand, has a cleaner, modern look, but occasionally sluggish performance. Its navigational experience is a little dated, too. However, the presence of both Android Auto and Apple CarPlay, wireless at that, obviates those concerns. And, with the Q4 offering both a proper gauge cluster and a heads-up display, it’s the clear winner.

Safety is a bit more debatable. Ostensibly, the Model Y’s misnamed Full Self Driving package provides greater benefits than a traditional active-safety setup like the Audi’s. However, given the questionable state of FSD at the moment, I’m not factoring that into the comparison.

The Audi, meanwhile, has parking sensors, forward collision warning and obstacle avoidance, plus lane departure warning, rear cross-traffic alerts, and blind-spot warnings. The Model Y offers many of the same features, but lacks cross-traffic alerts and, with Tesla opting to remove ultrasonic sensors, it currently lacks parking assistance.

Tech and Safety Winner: Audi Q4 E-Tron

Driving Dynamics

Image Credits: Tesla

For outright speed and acceleration, there’s no question: the Tesla wins. The Model Y leaps forward with far more aggression than the Q4, even when the latter is on its most aggressive throttle mapping. That’s not to say the Audi is slow. It, too, is properly quick. A 5.8-second zero-to-60 time is more than respectable, but the Model Y’s 4.8 clearly has it beat.

Really, though, how quick does your small, family-oriented SUV need to be? In this category, ride quality is far more important, and here the Audi comes out on top. The Model Y just has a tendency to crash over every bump, the sounds of compression transmitted directly into the cabin. The Audi itself isn’t exactly a standout in this regard, especially on the 20-inch wheels that my test car rolled into my life on. (If you’re configuring your own, the 19-inch wheels are the ones to get.) However, it’s the far more pleasing choice over broken, uneven asphalt and concrete.

When it comes to handling, again the Audi wins. Neither of these are the most engaging of corner carvers, but the Audi tracks with more confidence than the Tesla, which can feel a little wayward through quick transitions or when faced with broken pavement.

Given the handling and ride quality, I’m calling this category for the Audi, but if you’re someone who prioritizes outright shove over all else, you might call it for Tesla.

Driving Dynamics Winner: Audi Q4 E-Tron

Range

There can be no debate on this one: the Model Y wins. Tesla provides a 330-mile EPA estimate for the Long Range flavor of the Model Y, which drops down to 303 for the Performance trim. In my experience driving multiple Model Ys over the years, the 330-mile figure is a bit optimistic, but not far off from the truth in ideal conditions. (That is: reasonable temperatures, no headwinds, flat terrain.)

Audi’s Q4 50 E-Tron Quattro, on the other hand, is rated at a relatively paltry 241 miles by the EPA, though that steps up by one whole mile if you opt for the slightly slipperier Sportback trim. Step down to the Q4 40 E-Tron, which lacks all-wheel drive, and the range figure gets a healthy bump, up to 265 miles.

The Q4 50 Quattro has an 82-kWh battery pack. Do the math on that range and pack size and you come out to a 2.9 mi/kWh efficiency rating. In my testing of the Q4 50 Quattro, I actually scored exactly 2.9. That’s despite spending a good portion of my testing time at highway speeds, where many EVs struggle thanks to increased air resistance. So, rest assured that 241 is at least a realistic figure. And, frankly, I think it’ll be plenty for most.
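The arithmetic behind that figure, for anyone who wants to sanity-check it, is just range over pack size:

```python
# EPA-rated range divided by battery capacity gives efficiency in mi/kWh.
epa_range_miles = 241
battery_kwh = 82

efficiency = epa_range_miles / battery_kwh
print(round(efficiency, 1))  # 2.9 mi/kWh, matching the observed figure
```

(Strictly speaking, the EPA rating is based on usable capacity, so the true number may differ slightly from this back-of-the-envelope version.)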

When it comes time to refill the battery, most folks will charge at home most of the time. But, for juicing on the go, Tesla again comes out ahead. Tesla’s Supercharger network covers nearly 1,500 locations in the U.S. The Q4 E-Tron’s primary high-speed charging network is Electrify America, which now has more than 800 locations and is growing rapidly. That gap narrows if you factor in the myriad other charging networks the Audi can utilize. However, since the Model Y can also use most of those, it still comes out ahead.

Range Winner: Tesla Model Y

Value

Image Credits: Tim Stevens

Tesla Model Y pricing tends to change with the weather, but as I write this a base Model Y Long Range in the cheapest color (white) with no options comes out to $65,990 plus a $1,200 destination fee. (The company recently offered discounts for cars delivered in December.)

The cheapest Q4 E-Tron starts at $48,800, but that’s the single-motor version, so it’s not a fair comparison. Step up to the Quattro flavor and you’re looking at $53,800 plus $1,195 destination. That model lacks adaptive cruise and some other niceties, like parking assistance, driver’s seat memory, and a power liftgate. You’ll need the Premium Plus trim to add those things and a starting price of $60,400, still $5,590 cheaper than the Tesla.

The car I tested was the top-shelf Prestige trim, which adds an augmented-reality heads-up display and brilliant matrix headlights. From there, your only options are bigger wheels and premium exterior colors. The Prestige starts at $61,900, $4,090 below the Tesla.

Even with every option box ticked compared to an entry-level Model Y, the Audi is easier on the wallet.

Value Winner: Audi Q4 E-Tron

Overall Winner: Audi

While design is subjective and you could definitely make the case for the Tesla to be the winner in the driving dynamics category, the Audi has the lead in most areas. That leaves only range as the clear win for the Tesla, and while it is a substantial victory there, again I think that 241 miles is more than enough for most people in most circumstances. Fast chargers are hardly ubiquitous, but they’re common enough to allow road trips to most places without too many extended pit stops.

Most importantly, the Q4 has more comprehensive safety features, more reliable driver assistance and a far more affordable price. Despite all the delays in bringing its all-electric SUV to market, it’s clear that Audi’s Q4 E-Tron was worth the wait.

Here’s how Audi Q4 50 E-Tron stacks up against the Tesla Model Y by Tim Stevens originally published on TechCrunch

OpenAI’s attempts to watermark AI text hit limits

Did a human write that, or ChatGPT? It can be hard to tell — perhaps too hard, its creator OpenAI thinks, which is why it is working on a way to “watermark” AI-generated content.

In a lecture at the University of Texas at Austin, computer science professor Scott Aaronson, currently a guest researcher at OpenAI, revealed that OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.

OpenAI engineer Hendrik Kirchner built a working prototype, Aaronson says, and the hope is to build it into future OpenAI-developed systems.

“We want it to be much harder to take [an AI system’s] output and pass it off as if it came from a human,” Aaronson said in his remarks. “This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda — you know, spamming every blog with seemingly on-topic comments supporting Russia’s invasion of Ukraine without even a building full of trolls in Moscow. Or impersonating someone’s writing style in order to incriminate them.”

Exploiting randomness

Why the need for a watermark? ChatGPT is a strong example. The chatbot developed by OpenAI has taken the internet by storm, showing an aptitude not only for answering challenging questions but writing poetry, solving programming puzzles and waxing poetic on any number of philosophical topics.

While ChatGPT is highly amusing — and genuinely useful — the system raises obvious ethical concerns. Like many of the text-generating systems before it, ChatGPT could be used to write high-quality phishing emails and harmful malware, or cheat at school assignments. And as a question-answering tool, it’s factually inconsistent — a shortcoming that led programming Q&A site Stack Overflow to ban answers originating from ChatGPT until further notice.

To grasp the technical underpinnings of OpenAI’s watermarking tool, it’s helpful to know why systems like ChatGPT work as well as they do. These systems understand input and output text as strings of “tokens,” which can be words but also punctuation marks and parts of words. At their core, the systems are constantly generating a mathematical function called a probability distribution to decide the next token (e.g., word) to output, taking into account all previously outputted tokens.

In the case of OpenAI-hosted systems like ChatGPT, after the distribution is generated, OpenAI’s server does the job of sampling tokens according to the distribution. There’s some randomness in this selection; that’s why the same text prompt can yield a different response.

OpenAI’s watermarking tool acts like a “wrapper” over existing text-generating systems, Aaronson said during the lecture, leveraging a cryptographic function running at the server level to “pseudorandomly” select the next token. In theory, text generated by the system would still look random to you or me, but anyone possessing the “key” to the cryptographic function would be able to uncover a watermark.

“Empirically, a few hundred tokens seem to be enough to get a reasonable signal that yes, this text came from [an AI system]. In principle, you could even take a long text and isolate which parts probably came from [the system] and which parts probably didn’t,” Aaronson said. “[The tool] can do the watermarking using a secret key and it can check for the watermark using the same key.”
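Aaronson’s description suggests the general shape of such a scheme. What follows is a minimal, illustrative sketch, not OpenAI’s actual implementation: a keyed pseudorandom function (here, HMAC-SHA256; the real system’s function is unpublished) scores each candidate token given the preceding context, generation favors high-scoring tokens, and a key holder checks whether a text’s tokens score suspiciously high on average.

```python
import hashlib
import hmac
import random

def keyed_score(key: bytes, context: tuple, token: str) -> float:
    """Deterministic pseudorandom score in [0, 1) for a candidate token."""
    msg = ("|".join(context) + "||" + token).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_watermarked(key: bytes, context: tuple, candidates: list) -> str:
    """Pick the candidate the keyed function scores highest. A real system
    would also weight candidates by the model's probability distribution;
    this toy ignores probabilities for brevity."""
    return max(candidates, key=lambda t: keyed_score(key, context, t))

def detect(key: bytes, tokens: list) -> float:
    """Mean keyed score over a text: watermarked output trends toward 1,
    ordinary text toward 0.5. More tokens sharpen the signal."""
    scores = [keyed_score(key, tuple(tokens[:i]), t) for i, t in enumerate(tokens)]
    return sum(scores) / len(scores)

key = b"server-side secret"
vocab = ["the", "model", "writes", "text", "about", "many", "topics", "today"]

# Generate 40 "watermarked" tokens, then 40 tokens chosen without the key.
watermarked: list = []
for _ in range(40):
    watermarked.append(pick_watermarked(key, tuple(watermarked), vocab))

rng = random.Random(0)
plain = [rng.choice(vocab) for _ in range(40)]

wm_score = detect(key, watermarked)
plain_score = detect(key, plain)
```

With the key, the detector cleanly separates the two sequences; without it, both read as ordinary word strings. The sketch also illustrates the rewording objection raised later: swap a token for a synonym and its score changes, eroding the signal.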

Key limitations

Watermarking AI-generated text isn’t a new idea. Previous attempts, most of them rules-based, have relied on techniques like synonym substitutions and syntax-specific word changes. But outside of theoretical research published by the German institute CISPA last March, OpenAI’s appears to be one of the first cryptography-based approaches to the problem.

When contacted for comment, Aaronson declined to reveal more about the watermarking prototype, save that he expects to co-author a research paper in the coming months. OpenAI also declined, saying only that watermarking is among several “provenance techniques” it’s exploring to detect outputs generated by AI.

Unaffiliated academics and industry experts, however, shared mixed opinions. They note that the tool is server-side, meaning it wouldn’t necessarily work with all text-generating systems. And they argue that it’d be trivial for adversaries to work around.

“I think it would be fairly easy to get around it by rewording, using synonyms, etc.,” Srini Devadas, a computer science professor at MIT, told TechCrunch via email. “This is a bit of a tug of war.”

Jack Hessel, a research scientist at the Allen Institute for AI, pointed out that it’d be difficult to imperceptibly fingerprint AI-generated text because each token is a discrete choice. Too obvious a fingerprint might result in odd word choices that degrade fluency, while one too subtle would leave room for doubt when the fingerprint is sought out.

ChatGPT answering a question.

Yoav Shoham, the co-founder and co-CEO of AI21 Labs, an OpenAI rival, doesn’t think that statistical watermarking will be enough to help identify the source of AI-generated text. He calls for a “more comprehensive” approach that includes differential watermarking, in which different parts of text are watermarked differently, and AI systems that more accurately cite the sources of factual text.

This specific watermarking technique also requires placing a lot of trust — and power — in OpenAI, experts noted.

“An ideal fingerprinting would not be discernable by a human reader and enable highly confident detection,” Hessel said via email. “Depending on how it’s set up, it could be that OpenAI themselves might be the only party able to confidently provide that detection because of how the ‘signing’ process works.”

In his lecture, Aaronson acknowledged the scheme would only really work in a world where companies like OpenAI are ahead in scaling up state-of-the-art systems — and they all agree to be responsible players. Even if OpenAI were to share the watermarking tool with other text-generating system providers, like Cohere and AI21 Labs, this wouldn’t prevent others from choosing not to use it.

“If [it] becomes a free-for-all, then a lot of the safety measures do become harder, and might even be impossible, at least without government regulation,” Aaronson said. “In a world where anyone could build their own text model that was just as good as [ChatGPT, for example] … what would you do there?”

That’s how it’s played out in the text-to-image domain. Unlike OpenAI, whose DALL-E 2 image-generating system is only available through an API, Stability AI open-sourced its text-to-image tech (called Stable Diffusion). While DALL-E 2 has a number of filters at the API level to prevent problematic images from being generated (plus watermarks on images it generates), the open source Stable Diffusion does not. Bad actors have used it to create deepfaked porn, among other toxicity.

For his part, Aaronson is optimistic. In the lecture, he expressed the belief that, if OpenAI can demonstrate that watermarking works and doesn’t impact the quality of the generated text, it has the potential to become an industry standard.

Not everyone agrees. As Devadas points out, the tool needs a key, meaning it can’t be completely open source — potentially limiting its adoption to organizations that agree to partner with OpenAI. (If the key were to be made public, anyone could deduce the pattern behind the watermarks, defeating their purpose.)

But it might not be so far-fetched. A representative for Quora said the company would be interested in using such a system, and it likely wouldn’t be the only one.

“You could worry that all this stuff about trying to be safe and responsible when scaling AI … as soon as it seriously hurts the bottom lines of Google and Meta and Alibaba and the other major players, a lot of it will go out the window,” Aaronson said. “On the other hand, we’ve seen over the past 30 years that the big Internet companies can agree on certain minimal standards, whether because of fear of getting sued, desire to be seen as a responsible player, or whatever else.”

OpenAI’s attempts to watermark AI text hit limits by Kyle Wiggers originally published on TechCrunch

Is ChatGPT a ‘virus that has been released into the wild’?

More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco soon after he’d left his role as the president of Y Combinator to become CEO of the AI company he co-founded in 2015 with Elon Musk and others, OpenAI.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so incomprehensibly enormous that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Elon Musk, a co-founder of the outfit, has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points of the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as smart as people, the tech that OpenAI has since released into the world comes close enough that some critics fear it could be our undoing (and more sophisticated tech is reportedly coming).

Indeed, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are struggling to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”

We talked with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past and this obviously is a huge leap beyond. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation to whatever was used for training it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It does feel like it could eat up the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s exactly what we not only can expect but should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech and it’s not clear who steps in to address it.

I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’

I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine-learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.

And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What’s the thinking at MIT?

Andy McAfee and his group over there are more sanguine and have a more orthodox view that anytime we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework assignments because we no longer know whether they’re cheating or not, that means that everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller and at the exact time that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service anymore.

What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m a much less strong proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.

Is ChatGPT a ‘virus that has been released into the wild’? by Connie Loizos originally published on TechCrunch

Daily Crunch: Grocery delivery app Getir bags rival Gorillas in a $1.2B acquisition

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

We’ve made it to Friday, folks. If you’re anything like me, that means finishing the workday with a well-deserved nap and reruns of “The Office.” Tweet, toot or Post at me about your favorite way to end the week.

Mark your calendar for a Twitter Space event on Tuesday, December 13 at 1 p.m. PST/4 p.m. EST featuring Builders VC investor Andrew Chen, who will speak with Walter about the role tech reporting plays in shaping ecosystems.

See you Monday! — Christine

The TechCrunch Top 3

Knock, knock, there’s a competitor at your door: Significant M&A in the food delivery space was only a matter of time, and Romain has details about a big one — Getir acquiring its competitor Gorillas in a deal that the Financial Times originally reported is valued at $1.2 billion.
Bye bye, Twitter Toolbox: Twitter tried to make it work with third-party developers, but alas, the company decided to make a clean break by shutting down some of its developer initiatives, including Toolbox. Ivan has more.
Tesla’s China boss gets a new gig… a factory: Rita is following a story about Tom Zhu, who oversaw Tesla’s China Gigafactory and has now been tapped to work his magic stateside leading Gigafactory Texas.

Startups and VC

More layoffs this week as Ingrid reports on Primer, an e-commerce infrastructure startup based in the U.K. that announced it would lay off one-third of its staff amid some restructuring to manage current and projected e-commerce market conditions.

Meanwhile, Haje believes you need the perfect summary slide for your pitch deck and has found some for you (requires a TechCrunch+ subscription).

And we have three more for you:

Going dark: Kirsten reports that executives at Brodmann17, a computer vision technology startup, made the decision to shut down after realizing it would not be able to bring its products to market.
What are your symptoms?: Japanese health tech startup Ubie secured $19 million in new funding to bring its AI-powered symptom checker technology to the U.S., Kate reports.
Making that dollar work for it: Kate also has a story on Akros Technologies, which raised $2.3 million in new capital to inject some artificial intelligence into asset management.

How to respond when a VC asks about your startup’s valuation

Image Credits: boschettophotography / Getty Images

When a VC inevitably asks about your valuation expectations, it is a trick question: If your response is too high, it’s a red flag, whereas a lowball figure undervalues the company.

“We’re letting the market price this round” is an appropriate reply, but only if you’ve already gathered substantial data points from other investors — and can fire back with a few questions of your own, says Evan Fisher, founder of Unicorn Capital.

“If that’s all you say, you’re in trouble because it can also be interpreted as ‘we don’t have a clue’ or ‘we’ll take what we’re given,’” said Fisher.

Instead of going in cold, he advises founders to pre-pitch investors from their next round and use takeaways from those conversations to shape current valuations.

In the article, Fisher includes sample questions “you will want to ask every VC you speak with,” along with other tips that will help “when they pop the valuation question.”

Three more from the TC+ team:

Taking that exit: Tim writes about Vanguard’s decision to get out of a carbon emissions initiative and what prompted the move.
SPAC is back: Getaround is now a public company after braving a chilly SPAC environment that has left other companies in the cold. Alex has more.
Bridging blockchain and the physical world: That’s what Jacquelyn writes Solana founders want to see happen as the company and others pick up the pieces and move on from the FTX collapse.

TechCrunch+ is our membership program that helps founders and startup teams get ahead of the pack. You can sign up here. Use code “DC” for a 15% discount on an annual subscription!

Big Tech Inc.

We are over here with our mouths open upon learning that crypto news publication The Block received some significant — and undisclosed — loans from former FTX CEO Sam Bankman-Fried’s company Alameda Research. As a result, CEO Michael McCaffrey is out and Bobby Moran, the company’s chief revenue officer, takes the role. But as Jacquelyn and Alex write, the conflict of interest will take some time to repair, if it can be repaired at all.

As we wait for the Federal Trade Commission to send news of Microsoft’s fate with Activision, Kyle writes that the cloud services giant acquired a different company, this time Lumenisity, a startup developing high-speed cables for transmitting data.

And three more for you:

Looking for that special gift?: Natasha L has some suggestions for your fitness-loving buddy, while Haje has some gift ideas to ensure your other friends are well caffeinated.
Meet Slack’s new CEO: When Slack announced that Lidiane Jones would be its new CEO, Ron wanted to shed some light on her career and how she got where she is today.
Exposed: Carly reports that CommonSpirit Health confirmed that data from over 620,000 patients was stolen during a ransomware attack in October.

Daily Crunch: Grocery delivery app Getir bags rival Gorillas in a $1.2B acquisition by Christine Hall originally published on TechCrunch

Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation

Twitter’s new owner, Elon Musk, is feverishly promoting his “Twitter Files”: selected internal communications from the company, laboriously tweeted out by sympathetic amanuenses. But Musk’s obvious conviction that he has released some partisan kraken is mistaken — far from conspiracy or systemic abuse, the files are a valuable peek behind the curtain of moderation at scale, hinting at the Sisyphean labors undertaken by every social media platform.

For a decade companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes equally out of reach of bad actors, regulators, and the press.

To reveal too much would be to expose the processes to abuse by spammers and scammers (who indeed take advantage of every leaked or published detail), while to reveal too little leads to damaging reports and rumors as they lose control over the narrative. Meanwhile they must be ready to justify and document their methods or risk censure and fines from government bodies.

The result is that while everyone knows a little about how exactly these companies inspect, filter, and arrange the content posted on their platforms, it’s just enough to be sure that what we’re seeing is only the tip of the iceberg.

Sometimes there are exposés of the methods we suspected — by-the-hour contractors clicking through violent and sexual imagery, an abhorrent but apparently necessary industry. Sometimes the companies overplay their hands, like repeated claims of how AI is revolutionizing moderation, and subsequent reports that AI systems for this purpose are inscrutable and unreliable.

What almost never happens — generally companies don’t do this unless they’re forced to — is that the actual tools and processes of content moderation at scale are exposed with no filter. And that’s what Musk has done, perhaps to his own peril, but surely to the great interest of anyone who ever wondered what moderators actually do, say, and click as they make decisions that may affect millions.

Pay no attention to the honest, complex conversation behind the curtain

The email chains, Slack conversations, and screenshots (or rather shots of screens) released over the last week provide a glimpse at this important and poorly understood process. What we see is a bit of the raw material, which is not the partisan illuminati some expected — though it is clear, by its highly selective presentation, that this is what we are meant to perceive.

Far from it: the people involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that the choice to limit or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

Leading up to the choice to temporarily restrict the Hunter Biden laptop story — probably at this point the most contentious moderation decision of the last few years, behind banning Trump — there is neither the partisanship nor conspiracy insinuated by the bombshell packaging of the documents.

Instead we find serious, thoughtful people attempting to reconcile conflicting and inadequate definitions and policies: What constitutes “hacked” materials? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we do, if we don’t limit? What precedents do we set or break?

The answers to these questions are not at all obvious, and are the kind of thing usually hashed out over months of research and discussion, or even in court (legal precedents affect legal language and repercussions). And they needed to be made fast, before the situation got out of control one way or the other. Dissent from within and without (from a U.S. Representative no less — ironically, doxxed in the thread along with Jack Dorsey in violation of the selfsame policy) was considered and honestly integrated.

“This is an emerging situation where the facts remain unclear,” said former Trust and Safety chief Yoel Roth. “We’re erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as they have been presented. Others say it’s not supported by their reading of the policy. One says they need to make the ad-hoc basis and extent of the action very clear since it will obviously be scrutinized as a partisan one. Deputy General Counsel Jim Baker calls for more information but says caution is warranted. There’s no clear precedent; the facts are at this point absent or unverified; some of the material is plainly non-consensual nude imagery.

“I believe Twitter itself should curtail what it recommends or puts in trending news, and your policy against QAnon groups is all good,” concedes Rep. Ro Khanna, while also arguing the action in question is a step too far. “It’s a hard balance.”

Neither the public nor the press have been privy to these conversations, and the truth is we’re as curious, and largely as in the dark, as our readers. It would be incorrect to call the published materials a complete or even accurate representation of the whole process (they are blatantly, if ineffectively, picked and chosen to fit a narrative), but even such as they are we are more informed than we were before.

Tools of the trade

Even more directly revealing was the next thread, which carried screenshots of the actual moderation tooling used by Twitter employees. While the thread disingenuously attempts to equate the use of these tools with shadow banning, the screenshots do not show nefarious activity, nor need they in order to be interesting.

Image Credits: Twitter

On the contrary, what is shown is compelling for the very reason that it is so prosaic, so blandly systematic. Here are the various techniques all social media companies have explained over and over that they use, but whereas before we had it couched in PR’s cheery diplomatic cant, now it is presented without comment: “Trends Blacklist,” “High Profile,” “DO NOT TAKE ACTION” and the rest.

Meanwhile Yoel Roth explains that the actions and policies need to be better aligned, that more research is required, that plans are underway to improve:

The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that… we’re going to need to make a more robust case to get this into our repertoire of policy remediations – especially for other policy domains.

Again the content belies the context it is presented in: these are hardly the deliberations of a secret liberal cabal lashing out at its ideological enemies with a ban hammer. It’s an enterprise-grade dashboard like you might see for lead tracking, logistics, or accounts, being discussed and iterated upon by sober-minded persons working within practical limitations and aiming to satisfy multiple stakeholders.

As it should be: Twitter has, like its fellow social media platforms, been working for years to make the process of moderation efficient and systematic enough to function at scale. Not just so the platform isn’t overrun with bots and spam, but in order to comply with legal frameworks like FTC orders and the GDPR. (Of which the “extensive, unfiltered access” outsiders were given to the pictured tool may well constitute a breach. The relevant authorities told TechCrunch they are “engaging” with Twitter on the matter.)

A handful of employees making arbitrary decisions with no rubric or oversight is no way to moderate effectively or meet such legal requirements; neither (as the resignation of several on Twitter’s Trust & Safety Council today testifies) is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that’s certainly what seems to be shown by the screenshots Musk has caused to be published.

What isn’t shown by the documents is any kind of systematic bias, which Musk’s stand-ins insinuate but don’t quite manage to substantiate. But whether or not it fits into the narrative they wish it to, what is being published is of interest to anyone who thinks these companies ought to be more forthcoming about their policies. That’s a win for transparency, even if Musk’s opaque approach accomplishes it more or less by accident.

Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation by Devin Coldewey originally published on TechCrunch
