The year customer experience died

This was a rough year for customer experience.

We’ve been hearing for years how important customer experience is to business, and a whole business technology category has been built around it, with companies like Salesforce and Adobe at the forefront. But due to the economy or a lack of employees (perhaps both?), 2022 was a year of poor customer service, which in turn created poor experiences; there’s no separating the two.

No matter how great your product or service, you will ultimately be judged by how well you do when things go wrong, and your customer service team is your direct link to buyers. If you fail them in a time of need, you can lose them for good and quickly develop a bad reputation. News can spread rapidly through social media channels. That’s not the kind of talk you want about your brand.

And make no mistake: Your customer service is inexorably linked to the perceived experience of your customer. We’re constantly being asked for feedback about how the business did, yet this thirst for information doesn’t seem to ever connect back to improving the experience.

Consider the poor folks who bought tickets for Southwest Airlines flights this week. The airline admittedly screwed up, yet one video showed a representative siccing the police on the company’s own passengers simply for being at the gate. When it comes to abusing your customers and destroying your brand goodwill, that example takes the cake.

For too long we’ve been hearing about how data will drive better experiences, but is that data ever available to the people dealing with the customers? They don’t need data — they need help and training and guidance, and there clearly wasn’t enough of that in 2022. It seemed companies cut back on customer service to the detriment of their customers’ experience and ultimately to the reputation of the brand.

The year customer experience died by Ron Miller originally published on TechCrunch

How we covered the creator economy in 2022

This summer, I went straight from VidCon — the largest creator conference — to a labor journalism seminar with the Sidney Hillman Foundation. One day, I was chatting with famous TikTokers about their financial anxieties (what if they accidentally get banned from TikTok tomorrow?), and the next, I was learning about the history of American labor organizing.

These topics are not at all unrelated: at its core, writing about the creator economy is labor journalism. The creator beat is a labor beat.

Creators are rebelling against the traditional route to making a living in artistic industries, taking control over their income to make money for themselves, rather than big media conglomerates. Consider creators like Brian David Gilbert, who built a devoted fanbase as a chaotically hilarious video producer for Polygon, the video game publication at Vox Media. Gilbert quit to work on other creative projects full time, likely because he realized that with his audience, he could make way more money independently than his media salary paid him. Then there are YouTube channels like Defunctland and Swell Entertainment, which are basically investigative journalism outlets run by individual video producers. We see chefs building their brands by going viral on TikTok, or teachers who supplement their income by sharing educational content on Instagram. In artistic industries that notoriously underpay for the expertise their laborers provide, YouTubers, Instagrammers and newsletter writers alike are proving that creativity is a monetizable skill — one that they deserve to make more than a living wage with.

This belief — that the creator economy is a labor beat — has guided my coverage of the industry this year. Below, I’ve rounded up some of our best stories about the state of the creator economy.

 

There are no laws protecting kids from being exploited on YouTube — one teen wants to change that

Like most teens, Chris McCarty spent a lot of time on YouTube, but they had a serious question. How can the children of influencers protect themselves when they’re too young to understand what it means to be a constant fixture in online videos? As part of their Girl Scouts Gold Award project, McCarty worked with Washington State Representative Emily Wicks to introduce a bill that seeks to protect and compensate children for their appearance in family vlogs.

As early as 2010, amateur YouTubers realized that “cute kid does stuff” is a genre prone to virality. David DeVore, then 7, became an internet sensation when his father posted a YouTube video of his reaction to anesthesia called “David After Dentist.” David’s father turned the public’s interest in his son into a small business, earning around $150,000 within five months through ad revenue, merch sales and a licensing deal with Vizio. He told The Wall Street Journal at the time that he would save the money for his children’s college costs, as well as charitable donations. Meanwhile, the family behind the “Charlie bit my finger” video made enough money to buy a new house.

Over a decade later, some of YouTube’s biggest stars are children who are too young to understand the life-changing responsibility of being an internet celebrity with millions of subscribers. Seven-year-old Nastya, whose parents run her YouTube channel, was the sixth-highest-earning YouTube creator in 2022, earning $28 million. Ryan Kaji, a 10-year-old who has been playing with toys on YouTube since he was 4, earned $27 million from a variety of licensing and brand deals.

 

Is MrBeast actually worth $1.5 billion?

I’m fascinated by MrBeast, but kind of in a “watching a car crash” way. MrBeast is still cruising comfortably along the highway, but I worry about the guy (… not too much. I mean. He’s doing fine). His business model just doesn’t seem sustainable to me, despite his immense riches and irreplicable success. As he attempts to raise a unicorn-sized VC round, we’ll see if he can keep escalating his stunts without becoming yet another David Dobrik.

Is going bigger always better? MrBeast’s business model is like a snake eating its own tail — no one is making money like he is, but no one is spending it like him either. He described his margins as “razor-thin” in a conversation with Logan Paul, since he reinvests most of his profits back into his content. His viewers expect that each video will be more impressive than the last, and from the outside looking in, it seems like it’s only a matter of time before MrBeast can no longer up the ante (and for other creators, this has led to disaster). So, if MrBeast’s business really is a unicorn — I’d wager it is — then he has two choices. Will he use the cushion of $150 million to make his business more sustainable, so he doesn’t have to keep burying himself alive? Or will he keep pushing for more until nothing is left?

 

Casey Neistat’s David Dobrik documentary explores what happens when creators cross the line

Speaking of David Dobrik, longtime YouTuber Casey Neistat debuted a documentary at SXSW this year about the 26-year-old creator. When Neistat started working on the documentary, he wanted to capture the phenomenon that was Dobrik and his Vlog Squad, who used to be YouTube royalty. The documentary took a turn after Insider surfaced allegations of sexual assault on Dobrik’s film set — then, Dobrik nearly killed his friend Jeff Wittek in a stunt gone horribly wrong. Neistat does a brilliant job capturing the creator’s fall from grace, plus the way in which the lack of regulations on YouTube film sets can set the stage for disaster, especially when creators are incentivized to do crazier and crazier stunts to stay relevant.

Television series like “Hype House” and “The D’Amelio Show” dedicate entire plotlines to creators’ fear of being “cancelled,” but Dobrik is still doing okay, calling into question just how far a creator has to go to lose his fans. Dobrik just opened a pizza shop in LA and has his own Discovery TV show. Wittek has had at least nine surgeries to date as a result of his accident on Dobrik’s set.

“I think that there’s always a pursuit. It’s relevant for a musician – how do you keep your music interesting?” Neistat said. “But what makes individuals like David Dobrik different is that their pursuit is not coming out with the next song or making the next movie. Their pursuit is, how can I be more sensationalist? And that is a very, very, very dangerous pursuit, because the minute you achieve something that was crazier than the last, you then have to go past that.”

 

YouTube Shorts could steal TikTok’s thunder with a better deal for creators

The biggest open secret in short-form video is that you can’t get rich on TikTok alone, because even the most viral creators earn a negligible portion of their income from the platform itself. TikTok has long been dominant in the short-form scene, but YouTube Shorts could give TikTok a run for its money next year as it becomes the first platform to share ad revenue with short-form creators. Ad revenue doesn’t seem that glamorous, but I couldn’t be more excited to see how this program will change the short-form game in 2023.

A big reason why TikTok and other short-form video apps haven’t unveiled a similar revenue-sharing program yet is because it’s trickier to figure out how to fairly split ad revenue on an algorithmically-generated feed of short videos. You can’t embed an ad in the middle of a video — imagine watching a 30-second video with an eight-second ad in the middle — but if you place ads between two videos, who would get the revenue share? The creator whose video appeared directly before or after it? Or, would a creator whose video you watched earlier in the feed deserve a cut too, because their content encouraged you to keep scrolling?
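One commonly floated answer is to sidestep per-ad attribution entirely: pool the feed’s ad revenue and split it among creators according to their share of total views, roughly the pooled approach YouTube has described for Shorts. The sketch below is purely illustrative; the function, the numbers and the 45% creator share are assumptions for the example, not any platform’s confirmed terms.

```python
# Purely illustrative: one way to allocate pooled ad revenue from a shorts feed
# by each creator's share of total views, rather than tying an ad to the single
# video shown next to it. The 45% creator share and all numbers are assumptions
# for the example, not any platform's confirmed terms.
def split_pooled_revenue(views_by_creator: dict[str, int],
                         ad_revenue_pool: float,
                         creator_share: float = 0.45) -> dict[str, float]:
    total_views = sum(views_by_creator.values())
    payout_pool = ad_revenue_pool * creator_share
    return {
        creator: payout_pool * views / total_views
        for creator, views in views_by_creator.items()
    }

# Example: $1,000 of ad revenue across a feed watched 1 million times in total.
print(split_pooled_revenue(
    {"creator_a": 600_000, "creator_b": 300_000, "creator_c": 100_000},
    ad_revenue_pool=1_000.0,
))
# {'creator_a': 270.0, 'creator_b': 135.0, 'creator_c': 45.0}
```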

 

OnlyFans CEO says adult content will still have a home on the site in 5 years

At TechCrunch Disrupt, I interviewed OnlyFans CEO Ami Gan and Chief Strategy Officer Keily Blair about the platform’s future, especially in regard to sex workers. In large part due to the success of adult creators, OnlyFans has paid out over $8 billion to creators since 2016. For comparison, the mostly safe-for-work competitor Patreon has paid out $3.5 billion since 2013. Online sex workers are some of the savviest, highest-earning creators in the business, yet they are the most vulnerable. Changing credit card company regulations and internet privacy laws can wipe out their business, and last year, that almost happened on OnlyFans. The company said it would ban adult content, then walked back that ban — but even still, adult creators have been skeptical about how long they can keep making a living on the platform. On our stage, I asked Gan if adult content will still be on OnlyFans in 5 years. She said yes.

OnlyFans has been putting a lot of effort into upcycling its image from an adult content subscription platform to a Patreon-like home for all kinds of creators, but it’s far from moving away from adult creators as users. Today, the platform’s CEO Ami Gan confirmed that adult content will still have a home on the site in five years, and that those creators can continue to make a living on it.

The confirmation, made today on stage at TechCrunch Disrupt, is notable because of the rocky relationship OnlyFans has had with adult creators. Last year, the company announced it would ban adult content on the site after pressure from card payment companies and efforts it reportedly was making to raise outside funding. Then it abruptly suspended the decision less than a week later after an outcry from users.

How we covered the creator economy in 2022 by Amanda Silberling originally published on TechCrunch

How China is building a parallel generative AI universe

The gigantic technological leap that machine learning models have shown in the last few months is getting everyone excited about the future of AI — but also nervous about its uncomfortable consequences. After text-to-image tools from Stability AI and OpenAI became the talk of the town, ChatGPT’s ability to hold intelligent conversations is the new obsession in sectors across the board.

In China, where the tech community has always watched progress in the West closely, entrepreneurs, researchers, and investors are looking for ways to make their mark in the generative AI space. Tech firms are devising tools built on open source models to attract consumer and enterprise customers. Individuals are cashing in on AI-generated content. Regulators have responded quickly to define how text, image, and video synthesis should be used. Meanwhile, U.S. tech sanctions are raising concerns about China’s ability to keep up with AI advancement.

As generative AI takes the world by storm towards the end of 2022, let’s take a look at how this explosive technology is shaking out in China.

Chinese flavors

Thanks to viral art creation platforms like Stable Diffusion and DALL-E 2, generative AI is suddenly on everyone’s lips. Halfway across the world, Chinese tech giants have also captivated the public with their equivalent products, adding a twist to suit the country’s tastes and political climate.

Baidu, which made its name in search engines and has in recent years been stepping up its game in autonomous driving, operates ERNIE-ViLG, a 10-billion parameter model trained on a data set of 145 million Chinese image-text pairs. How does it fare against its American counterpart? Below are the results from the prompt “kids eating shumai in New York Chinatown” given to Stable Diffusion, versus the same prompt in Chinese (纽约唐人街小孩吃烧卖) for ERNIE-ViLG.

Stable Diffusion

ERNIE-ViLG

As someone who grew up eating dim sum in China and Chinatowns, I’d say the results are a tie. Neither got the right shumai, which, in the dim sum context, is a succulent shrimp-and-pork dumpling in a half-open yellow wrapper. While Stable Diffusion nails the atmosphere of a Chinatown dim sum eatery, its shumai is off (but I see where the machine is going). And while ERNIE-ViLG does generate a type of shumai, it’s a variety more commonly seen in eastern China rather than the Cantonese version.

The quick test reflects the difficulty of capturing cultural nuance when the underlying data sets are inherently skewed: Stable Diffusion presumably has more data on the Chinese diaspora, while ERNIE-ViLG was likely trained on a greater variety of shumai images of the kinds rarely seen outside China.
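For anyone who wants to rerun the Stable Diffusion half of this test, here is a minimal sketch using the open source diffusers library. The model checkpoint named below is an assumption (any public Stable Diffusion release will do), and ERNIE-ViLG isn’t shown because it is only available through Baidu’s hosted demo and API.

```python
# Reproducing the Stable Diffusion side of the shumai test with Hugging Face's
# diffusers library. Requires a GPU and accepting the model license on the Hub;
# the checkpoint name is an assumption, not the one used for the article's image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "kids eating shumai in New York Chinatown"
image = pipe(prompt).images[0]  # one generated PIL image
image.save("shumai_chinatown.png")
```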

Another Chinese tool that has made noise is Tencent’s Different Dimension Me, which can turn photos of people into anime characters. The AI generator exhibits its own bias. Intended for Chinese users, it took off unexpectedly in other anime-loving regions like South America. But users soon realized the platform failed to identify black and plus-size individuals, groups that are noticeably missing in Japanese anime, leading to offensive AI-generated results.

Of course also clearly not having the model adjusted properly for darker-skinned folks, sigh

Anyway Different Dimension Me is the name, but sorry they already blocked / limit overseas users as couldn’t handle the traffic pic.twitter.com/cYi6rJwTaC

— Rui Ma 马睿 (@ruima) December 7, 2022

Aside from ERNIE-ViLG, another large-scale Chinese text-to-image model is Taiyi, a brainchild of IDEA, a research lab led by renowned computer scientist Harry Shum, who co-founded Microsoft’s largest research branch outside the U.S., Microsoft Research Asia. The open source AI model is trained on 20 million filtered Chinese image-text pairs and has one billion parameters.

Unlike Baidu and other profit-driven tech firms, IDEA is one of a handful of institutions backed by local governments in recent years to work on cutting-edge technologies. That means the center probably enjoys more research freedom without the pressure to drive commercial success. Based in the tech hub of Shenzhen and supported by one of China’s wealthiest cities, it’s an up-and-coming outfit worth watching.

Rules of AI

China’s generative AI tools aren’t just characterized by the domestic data they learn from; they are also shaped by local laws. As MIT Technology Review pointed out, Baidu’s text-to-image model filters out politically sensitive keywords. That’s expected, given censorship has long been a universal practice on the Chinese internet.

What’s more significant to the future of the fledgling field is the new set of regulatory measures targeting what the government dubs “deep synthesis tech”, which denotes “technology that uses deep learning, virtual reality, and other synthesis algorithms to generate text, images, audio, video, and virtual scenes.” As with other types of internet services in China, from games to social media, users are asked to verify their names before using generative AI apps. The fact that prompts can be traced to one’s real identity inevitably has a restrictive impact on user behavior.

But on the bright side, these rules could lead to more responsible use of generative AI, which is already being abused elsewhere to churn out NSFW and sexist content. The Chinese regulation, for example, explicitly bans people from generating and spreading AI-created fake news. How that will be implemented, though, lies with the service providers.

“It’s interesting that China is at the forefront of trying to regulate [generative AI] as a country,” said Yoav Shoham, founder of AI21 Labs, an Israel-based OpenAI rival, in an interview. “There are various companies that are putting limits to AI… Every country I know of has efforts to regulate AI or to somehow make sure that the legal system, or the social system, is keeping up with the technology, specifically about regulating the automatic generation of content.”

But there’s no consensus as to how the fast-changing field should be governed, yet. “I think it’s an area we’re all learning together,” Shoham admitted. “It has to be a collaborative effort. It has to involve technologists who actually understand the technology and what it does and what it doesn’t do, the public sector, social scientists, and people who are impacted by the technology as well as the government, including the sort of commercial and legal aspect of the regulation.”

Monetizing AI

As artists fret over being replaced by powerful AI, many in China are leveraging machine learning algorithms to make money in a plethora of ways. They aren’t from the most tech-savvy crowd. Rather, they are opportunists or stay-at-home mums looking for an extra source of income. They realize that by improving their prompts, they can trick AI into making creative emojis or stunning wallpapers, which they can post on social media to drive ad revenues or directly charge for downloads. The really skilled ones are also selling their prompts to others who want to join the money-making game — or even train them for a fee.

Others in China are using AI in their formal jobs, like the rest of the world. Writers of light fiction, a genre that is shorter than novels and often features illustrations, can cheaply churn out artwork for their works, for instance. An intriguing use case that can potentially disrupt realms of manufacturing is using AI to design T-shirts, press-on nails, and prints for other consumer goods. By generating large batches of prototypes quickly, manufacturers save on design costs and shorten their production cycle.

It’s too early to know how differently generative AI is developing in China and the West. But entrepreneurs have made decisions based on their early observations. A few founders told me that businesses and professionals are generally happy to pay for AI because they see a direct return on investment, so startups are eager to carve out industry use cases. One clever application came from Sequoia China-backed Surreal (later renamed Movio) and Hillhouse-backed ZMO.ai, which discovered during the pandemic that e-commerce sellers were struggling to find foreign models as China kept its borders shut. The solution? The two companies worked on algorithms that generated fashion models of all shapes, colors, and races.

But some entrepreneurs don’t believe their AI-powered SaaS will see the type of skyrocketing valuation and meteoric growth their Western counterparts, like Jasper and Stability AI, are enjoying. Over the years, numerous Chinese startups have told me they have the same concern: China’s enterprise customers are generally less willing to pay for SaaS than those in developed economies, which is why many of them start expanding overseas.

Competition in China’s SaaS space is also dog-eat-dog. “In the U.S., you can do fairly well by building product-led software, which doesn’t rely on human services to acquire or retain users. But in China, even if you have a great product, your rival could steal your source code overnight and hire dozens of customer support staff, which don’t cost that much, to outrace you,” said a founder of a Chinese generative AI startup, requesting anonymity.

Shi Yi, founder and CEO of sales intelligence startup FlashCloud, agreed that Chinese companies often prioritize short-term returns over long-term innovation. “In regard to talent development, Chinese tech firms tend to be more focused on getting skilled at applications and generating quick money,” he said. One Shanghai-based investor, who declined to be named, said he was “a bit disappointed that major breakthroughs in generative AI this year are all happening outside China.”

Roadblocks ahead

Even when Chinese tech firms want to invest in training large neural networks, they might lack the best tools. In September, the U.S. government slapped China with export controls on high-end AI chips. While many Chinese AI startups are focused on the application front and don’t need high-performance semiconductors that handle seas of data, for those doing basic research, using less powerful chips means computing will take longer and cost more, said an enterprise software investor at a top Chinese VC firm, requesting anonymity. The good news is, he argued, such sanctions are pushing China to invest in advanced technologies over the long run.

As a company that bills itself as a leader in China’s AI field, Baidu believes the impact of U.S. chip sanctions on its AI business is “limited” both in the short and longer term, said the firm’s executive vice president and head of AI Cloud Group, Dou Shen, on its Q3 earnings call. That’s because “a large portion” of Baidu’s AI cloud business “does not rely too much on the highly advanced chips.” And in cases where it does need high-end chips, it has “already stocked enough in hand, actually, to support our business in the near term.”

What about the future? “When we look at it at a mid- to a longer-term, we actually have our own developed AI chip, so named Kunlun,” the executive said confidently. “By using our Kunlun chips [Inaudible] in large language models, the efficiency to perform text and image recognition tasks on our AI platform has been improved by 40% and the total cost has been reduced by 20% to 30%.”

Time will tell if Kunlun and other indigenous AI chips will give China an edge in the generative AI race.

How China is building a parallel generative AI universe by Rita Liao originally published on TechCrunch

Fidelity slashes the value of its Twitter stake by over half

Fidelity, which was among the group of outside investors that helped Elon Musk finance his $44 billion takeover of Twitter, has slashed the value of its stake in Twitter by 56%. The recalculation comes as Twitter navigates a number of challenges, most of them the result of chaotic management decisions — including an exodus of advertisers from the network.

Fidelity’s Blue Chip Growth Fund stake in Twitter was valued at around $8.63 million as of November, according to a monthly disclosure and Fidelity Contrafund notice first reported today by Axios. That’s down from $19.66 million as of the end of October.
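The percentage follows directly from those two marks; a quick back-of-the-envelope check:

```python
# Quick check of the reported cut using the two disclosed marks.
october_value = 19.66e6   # stake value at the end of October, in dollars
november_value = 8.63e6   # stake value as of November, in dollars

cut = 1 - november_value / october_value
print(f"{cut:.0%}")  # ~56%
```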

Macroeconomic trends are likely to blame in part. Stripe took a 28% internal valuation cut in July, while Instacart this week reportedly suffered a 75% cut to its valuation.

But Twitter’s wishy-washy policies post-Musk clearly haven’t helped matters.

The network’s become less stable at a technical level as of late, on Wednesday suffering outages after Musk made “significant” backend server architecture changes. Twitter recently laid off employees in its public policy and engineering department, dissolving the group responsible for weighing in on content moderation and human rights-related issues such as suicide prevention. And the company’s raised the ire of regulators after banning — and then quickly reinstating — accounts belonging to prominent journalists.

Then again — as Axios business editor Dan Primack pointed out, appropriately in a tweet — Fidelity seems to rely heavily on public market performance where it concerns valuations. It’s quite possible that the firm doesn’t have any inside info on Twitter’s financial performance.

Cutbacks at Twitter abound as the company approaches $1 billion in interest payments due on $13 billion in debt, all while revenue dips. A November report from Media Matters for America estimated that half of Twitter’s top 100 advertisers, which spent almost $750 million on Twitter ads this year combined, appear to no longer be advertising on the website. Twitter’s heavily pushing its Twitter Blue plan, aiming to make it a larger profit driver. But third-party tracking data suggest it’s been slow to take off.

Some Twitter employees are bringing their own toilet paper to work after the company cut back on janitorial services, the New York Times recently reported, and Twitter has stopped paying rent for several of its offices including its San Francisco headquarters.

Musk has attempted to save around $500 million in costs unrelated to labor, according to the aforementioned Times report, over the past few weeks shutting down a data center and launching a fire sale, putting office items up for auction in a bid to recoup costs.

Separately, Musk’s team has reached out to investors for potential fresh investment for Twitter at the same price as the original $44 billion acquisition, according to The Wall Street Journal.

A poll put up by Musk asking if he should step down as head of the company closed December 19 with users voting resoundingly in favor of him leaving. Musk responded several days afterward, saying he’d resign as CEO “as soon as [he found] someone foolish enough to take the job” and after that “just run the software and servers teams.”

Fidelity slashes the value of its Twitter stake by over half by Kyle Wiggers originally published on TechCrunch

Daily Crunch: To take the friction out of consumer messaging, more companies are entering the Matrix

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

Welcome back to your daily digest of TechCrunch goodness. It is my last day with you (you’re welcome!), so Christine will be back in the Daily Crunch seat on Tuesday. Haje will not be back just yet because he is heading to Vegas as part of the team covering CES. Speaking of CES, Brian raised the curtain on what we can expect from its first full-fledged production since before COVID.

Bye for now, folks. Safe and Happy New Year to you all. — Henry

At the top

Into the Matrix: No, not that Matrix. We’re talking about the open standards-based comms protocol called Matrix that Paul went deep on. Its network doubled thanks in part to increased use by enterprises and government. Reddit is also having a go, experimenting with it for its chat feature.
For the fusion: Tim took a look at five startups primed to benefit from the recent breakthroughs in fusion. [TC+]
Alt-ChatGPT: In the wake of the response to OpenAI’s ChatGPT comes an open source equivalent. It’s called PaLM + RLHF (rolls right off the tongue, eh?), but Kyle writes that it isn’t pre-trained, which means good luck running it.
The Meta eyes have it: Amanda writes that Meta is getting into the eyewear business with its purchase of the Netherlands-based smart eyewear company Luxexcel.
Book tracking: Aisha rounded up a list of five apps that you can use to track all that reading you’re planning to do once the clock strikes 2023.
Netflix vs. Hulu: Perhaps you’ve decided to cut a streaming service or two from your lineup in light of their continued price hikes. Lauren took a look at the features of Netflix and Hulu to help you make a decision.

What to look for in a term sheet as a first-time founder

Image Credits: syolacan / Getty Images

Silicon Valley reporter Connie Loizos interviewed three seasoned VCs to get their best advice for novice entrepreneurs. She asked them:

Why should you know what’s going to be in a term sheet before you see it?
Which mechanism is best to use at the outset?
How much equity is distributed at each level of early-stage fundraising?
What’s a red flag in a term sheet?
How should founders think about valuation when it comes to that first term sheet?

TechCrunch+ is our membership program that helps founders and startup teams get ahead of the pack. You can sign up here. Use code “DC” for a 15% discount on an annual subscription!

Looking back and looking ahead

We rounded up TC+ venture capital stories from a year that unfortunately saw a lot of downs. And here are a few more favorites for good measure:

Some new venture firms are going really, really (really) niche, by Connie
A love letter to micro funds, the backbone and future of venture capital, by Rebecca
Every startup wants an extension round, but there aren’t enough to go around, also by Rebecca

Zack and Carly took a look back at how law enforcement cracked down on cybercriminals this year, examining both the year’s big breaches and the efforts to bring the perpetrators to justice.

Indian startups were flush with cash amid record investments. Now, Manish writes, the ecosystem is struggling with tightening funding purses, layoffs and disappointing public debuts.

Daily Crunch: To take the friction out of consumer messaging, more companies are entering the Matrix by Henry Pickavet originally published on TechCrunch

QuickVid uses AI to generate short-form videos, complete with voiceovers

Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos. Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2, and adds a synthetic voiceover and background music from YouTube’s royalty-free music library.

QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.

“By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”

But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.

Going after video

QuickVid launched on December 27. Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure, built it in a matter of weeks. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.

It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
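QuickVid’s own code isn’t public, but the workflow described above maps onto a handful of widely used APIs. Here is a rough sketch of how such a pipeline could be wired together, assuming OpenAI, Pexels and Google Cloud credentials; the helper functions are hypothetical, and this is not QuickVid’s actual implementation.

```python
# Hypothetical sketch of a QuickVid-style pipeline: script -> stock footage ->
# image overlay -> voiceover. Not QuickVid's code; API keys are assumed to be
# set in the environment (OPENAI_API_KEY, PEXELS_API_KEY, Google credentials).
import os
import openai
import requests
from google.cloud import texttospeech

openai.api_key = os.environ["OPENAI_API_KEY"]

def write_script(topic: str) -> str:
    # GPT-3 turns a one-word prompt into a short narration script.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a 60-second narration script for a short video about {topic}.",
        max_tokens=300,
    )
    return resp["choices"][0]["text"].strip()

def pick_background(keyword: str) -> str:
    # Pexels' video search returns royalty-free stock clips for a keyword.
    r = requests.get(
        "https://api.pexels.com/videos/search",
        headers={"Authorization": os.environ["PEXELS_API_KEY"]},
        params={"query": keyword, "per_page": 1},
    )
    return r.json()["videos"][0]["video_files"][0]["link"]

def overlay_image(prompt: str) -> str:
    # DALL-E 2 generates an overlay image from the same prompt.
    resp = openai.Image.create(prompt=prompt, n=1, size="512x512")
    return resp["data"][0]["url"]

def voiceover(script: str, out_path: str = "voiceover.mp3") -> str:
    # Google Cloud text-to-speech renders the script as narration audio.
    client = texttospeech.TextToSpeechClient()
    audio = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=script),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as f:
        f.write(audio.audio_content)
    return out_path

if __name__ == "__main__":
    topic = "cats"
    script = write_script(topic)
    assets = {
        "background": pick_background(topic),
        "overlay": overlay_image(topic),
        "audio": voiceover(script),
    }
    # A video editor (e.g. ffmpeg or moviepy) would composite these into the final clip.
    print(script, assets)
```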

Image Credits: QuickVid

See this video made with the prompt “Cats”:

Or this one:

QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of b-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.

“Successful creators have an extremely high quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”

That supposedly being the case, in terms of quality, QuickVid’s videos are generally a mixed bag. The background videos tend to be a bit random or only tangentially related to the topic, which isn’t surprising given QuickVid’s currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and off proportions.

In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”

Copyright issues

According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status of AI-generated content is… nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.

When asked how that decision might affect QuickVid, Habib said he believes it only pertains to the “patentability” of AI-generated products and not the rights of creators to use and monetize their content. Creators, he pointed out, aren’t often submitting patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.

“Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.

Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which similarly have been found to copy and paste images from the data sets on which they were trained.

Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.

Moderation and spam

Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems, despite the filters and techniques OpenAI has implemented to curb them. GPT-3 spouts misinformation, particularly about recent events, which are beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.

That’s worrisome particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than me, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.

Habib says that he’s relying on OpenAI’s filters to do most of the moderation work, and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”

“As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.

That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best-positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.

“If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”

But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”

In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.

Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where it concerns search rankings, but it takes action against content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.

In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way, but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there’s no stopping the generative AI revolution.”

QuickVid uses AI to generate short-form videos, complete with voiceovers by Kyle Wiggers originally published on TechCrunch

There’s now an open source alternative to ChatGPT, but good luck running it

The first open-source equivalent of OpenAI’s ChatGPT has arrived, but good luck running it on your laptop — or at all.

This week, Philip Wang, the developer responsible for reverse-engineering closed-source AI systems including Meta’s Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT. The system combines PaLM, a large language model from Google, and a technique called Reinforcement Learning from Human Feedback — RLHF, for short — to create a system that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code.

But PaLM + RLHF isn’t pretrained. That is to say, the system hasn’t been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won’t magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.

Like ChatGPT, PaLM + RLHF is essentially a statistical tool to predict words. When fed an enormous number of examples from training data — e.g. posts from Reddit, news articles and ebooks — PaLM + RLHF learns how likely words are to occur based on patterns like the semantic context of surrounding text.

ChatGPT and PaLM + RLHF share a special sauce in Reinforcement Learning from Human Feedback, a technique that aims to better align language models with what users wish them to accomplish. RLHF involves training a language model — in PaLM + RLHF’s case, PaLM — and fine-tuning it on a data set that includes prompts (e.g. “Explain machine learning to a six-year-old”) paired with what human volunteers expect the model to say (e.g. “Machine learning is a form of AI…”). The aforementioned prompts are then fed to the fine-tuned model, which generates several responses, and the volunteers rank all the responses from best to worst. Finally, the rankings are used to train a “reward model” that takes the original model’s responses and sorts them in order of preference, filtering for the top answers to a given prompt.
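To make the ranking step concrete, here is a minimal, self-contained toy sketch of the reward-model idea, using the pairwise ranking loss popularized by InstructGPT-style RLHF; the tiny model below stands in for a real language-model backbone and is not taken from the PaLM + RLHF repository.

```python
# Toy illustration of the RLHF reward-model step: given a preferred ("chosen")
# and a less-preferred ("rejected") response to the same prompt, train a model
# to score the chosen one higher. A real setup scores token sequences with a
# language-model backbone; here a small MLP over fixed-size embeddings stands in.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(response_embedding).squeeze(-1)  # scalar reward per response

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: embeddings of (chosen, rejected) response pairs from human rankings.
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)

for step in range(100):
    # Pairwise ranking loss: -log sigmoid(r_chosen - r_rejected).
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model then scores candidate responses so a reinforcement
# learning step (e.g. PPO) can fine-tune the language model toward higher rewards.
```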

It’s an expensive process, collecting the training data. And training itself isn’t cheap. PaLM is 540 billion parameters in size, “parameters” referring to the parts of the language model learned from the training data. A 2020 study pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. And to train the open source model Bloom, which has 176 billion parameters, it took three months using 384 Nvidia A100 GPUs; a single A100 costs thousands of dollars.

Running a trained model of PaLM + RLHF’s size isn’t trivial, either. Bloom requires a dedicated PC with around eight A100 GPUs. Cloud alternatives are pricey, with back-of-the-envelope math finding the cost of running OpenAI’s text-generating GPT-3 — which has around 175 billion parameters — on a single Amazon Web Services instance to be around $87,000 per year.
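For a sense of where an estimate like that comes from, here is the back-of-the-envelope arithmetic with an assumed all-in hourly rate; the roughly $10-per-hour figure is an illustrative assumption, not a published AWS price.

```python
# Rough, illustrative inference-cost arithmetic. The hourly rate is an assumed
# blended figure for a multi-GPU cloud instance, not an official AWS price.
hours_per_year = 24 * 365            # 8,760 hours
assumed_hourly_rate_usd = 10.0       # assumption: ~$10/hour all-in

annual_cost = hours_per_year * assumed_hourly_rate_usd
print(f"${annual_cost:,.0f} per year")  # ~$87,600, in line with the ~$87,000 estimate
```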

Sebastian Raschka, an AI researcher, points out in a LinkedIn post about PaLM + RLHF that scaling up the necessary dev workflows could prove to be a challenge as well. “Even if someone provides you with 500 GPUs to train this model, you still need to have to deal with infrastructure and have a software framework that can handle that,” he said. “It’s obviously possible, but it’s a big effort at the moment (of course, we are developing frameworks to make that simpler, but it’s still not trivial, yet).”

That’s all to say that PaLM + RLHF isn’t going to replace ChatGPT today — unless a well-funded venture (or person) goes to the trouble of training and making it available publicly.

In better news, several other efforts to replicate ChatGPT are progressing at a fast clip, including one led by a research group called CarperAI. In partnership with the open AI research organization EleutherAI and startups Scale AI and Hugging Face, CarperAI plans to release the first ready-to-run, ChatGPT-like AI model trained with human feedback.

LAION, the nonprofit that supplied the initial data set used to train Stable Diffusion, is also spearheading a project to replicate ChatGPT using the newest machine learning techniques. Ambitiously, LAION aims to build an “assistant of the future” — one that not only writes emails and cover letters but “does meaningful work, uses APIs, dynamically researches information, and much more.” It’s in the early stages. But a GitHub page with resources for the project went live a few weeks ago.

There’s now an open source alternative to ChatGPT, but good luck running it by Kyle Wiggers originally published on TechCrunch

Meta acquires Luxexcel, a smart eyewear company

As Meta faces antitrust scrutiny over its acquisition of VR fitness developer Within, the tech giant is making another acquisition. Meta confirmed to TechCrunch that it is purchasing Luxexcel, a smart eyewear company headquartered in the Netherlands. The terms of the deal, which was first reported in the Belgian paper De Tijd, have not been disclosed.

Founded in 2009, Luxexcel uses 3D printing to make prescription lenses for glasses. More recently, the company has focused its efforts on smart lenses, which can be printed with integrated technology like LCD displays and holographic film.

“We’re excited that the Luxexcel team has joined Meta, deepening the existing partnership between the two companies,” a Meta spokesperson told TechCrunch. It’s rumored that Meta and Luxexcel had already worked together on Project Aria, the company’s augmented reality (AR) research initiative.

In September 2021, Meta unveiled the Ray-Ban Stories, a pair of smart glasses that can take photos and videos, or make hands-free, voice-controlled calls using Meta platforms like WhatsApp and Facebook. By absorbing Luxexcel, Meta will likely leverage the company’s technology to produce prescription AR glasses, a product that has long been anticipated to come out of Meta’s billions of dollars of investment into its Reality Labs. However, a report this summer stated that Meta was scaling back its plans for consumer-grade AR glasses, which were initially slated for 2024. Meta did not comment on these rumors at the time.

When building its AR and VR products, Meta’s corporate strategy has been to acquire smaller companies that are building top technology in the field. Even Meta’s flagship headset, the Quest, comes from its acquisition of Oculus in 2014. Given the FTC’s attempts to block Meta’s purchase of Within, it’s possible that the purchase of Luxexcel could spark the same scrutiny.

Meta acquires Luxexcel, a smart eyewear company by Amanda Silberling originally published on TechCrunch

What to look for in a term sheet as a first-time founder

Securing funding is a stressful endeavor, but it doesn’t have to be. We recently sat down with three VCs to figure out the best way to go about spinning up an investing network from scratch and negotiating the first term sheet.

Earlier this week, we featured the first part of that conversation with James Norman of Black Operator Ventures, Mandela Schumacher-Hodge Dixon of AllRaise, and Kevin Liu of both Techstars and Uncharted Ventures.

In part two, the investors cover more specifics about what to ask for in a term sheet and red flags you should look out for.

(Editor’s note: This interview has been edited lightly for length and clarity.)

Why should you know what’s going to be in a term sheet before you see it?

Mandela Schumacher-Hodge Dixon: Do not wait until you get a term sheet to start going back and forth. The term sheet should be a reflection of what was already verbally agreed upon, including the valuation. Don’t wait until you get that legal agreement in your inbox to begin pushing back, because it’s really annoying, and it starts to affect how they feel about you.

I’ve even seen investors pull the term sheet. No one is bulletproof, but you really want to be as bulletproof as possible in every stage of this. That requires preparation and clear communication.

James Norman: As you plan out your whole fundraising process, lean into it and start to see what the market is thinking. You want to have a bottom line in terms of what you’re willing to accept. At some point, you may need to capitulate, but be convinced about [that bottom line] and have a reasoning for it.

VCs are trying to invest in leaders, so they know there’s going to be a power dynamic here. How you manage that and move things forward [impacts] how they think you’re going to do other things like hire employees and land customers.

Which mechanism is best to use at the outset?

Norman: Once you get the term sheet, the game has really begun.

Regarding terms, you want to make sure that you’re getting an agreement that is at parity with the level you’re at with your company. You don’t want to end up with an angel investor trying to give you some Series A Preferred docs or anything of that nature.

If you have a pre-seed or seed-stage startup, 99% of the time, you should be using a SAFE (the Simple Agreement for Future Equity that Y Combinator devised in 2013). It’s got all the standard language that you need; no one can argue with it. [If they do], be like, “Go talk to Y Combinator about that.”
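For a concrete sense of how a SAFE eventually turns into equity, here is a simplified illustration of the post-money valuation cap math; the numbers are invented, and real conversions also involve discounts, option pools and the rest of the cap table.

```python
# Simplified illustration of SAFE conversion math (post-money valuation cap).
# Numbers are made up; real conversions also involve discounts, option pools
# and the rest of the cap table, so treat this as a sketch, not legal advice.
def safe_ownership(investment: float, post_money_cap: float) -> float:
    """Ownership the SAFE converts into at the next priced round."""
    return investment / post_money_cap

# A $250k check on a SAFE with a $5M post-money valuation cap:
print(f"{safe_ownership(250_000, 5_000_000):.1%}")  # 5.0%
```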

What to look for in a term sheet as a first-time founder by Connie Loizos originally published on TechCrunch

Despite myriad flaws, US remains top spot for Black startup founders seeking VC dollars

Despite, well, everything, the U.S. is still the best place in the world for Black startup founders to raise money. The check sizes are bigger, the market more mature, the ambition oversized. There are more funds, more options, more opportunities, more, more, more.

It’s quite easy to harp on the dismal funding and often discriminatory treatment that Black founders receive in the U.S. Through the haze, though, the reality is that the heart of the American Dream is still beating.

For example, Lotanna Ezeike, a serial founder, said he’s looking to fundraise for his new startup in the U.S., despite raising more than $1 million for his U.K.-based fintech, XPO.

“Across the pond in the U.K., thinking tends to be very limited, especially around the seed stage,” he said, adding that a seed in the U.K. is a pre-seed or family round in the U.S.

“I think this is because of how small the U.K. is compared to other regions, so the mind can only dream so big. It’s a spiral really — less wealth, less capital, fewer ideas that become unicorns.”

Cephas Ndubueze, who is from Germany, echoed similar sentiments. He said he still looks to the U.S. for venture funds for his startup because there are more success stories of Black founders in the U.S. than in Europe, meaning he has a better chance of finding his own path there than in Germany.

“I can definitely say the U.S. is a better environment for Black founders,” he told TechCrunch. “Why? More diverse investors in the U.S. More investors are investing in nontraditional businesses. More institutional investors are providing ticket sizes from $100,000 to $500,000 in the idea stage, more opportunities to build a founder network, and more investors that have already invested in Black founders in the past.”

While the reception of Black founders may appear warmer in the U.S., the numbers show more of the same. (France and Germany do not track race data, though founders and venture capitalists interviewed by TechCrunch revealed anecdotal evidence of persistent racism in both markets.) As an ironic result, founders look to the U.S. for networking opportunities.

Despite myriad flaws, US remains top spot for Black startup founders seeking VC dollars by Dominic-Madori Davis originally published on TechCrunch
