FPGA startup Rapid Silicon lands $15M to bring its first chip to market

Field programmable gate arrays (FPGAs) — off-the-shelf integrated circuits that can be reprogrammed after manufacturing — are a hot topic in tech. Because they’re relatively affordable and can be configured for a range of use cases, they’ve caught on particularly in the AI and machine learning space, where they’ve been used to accelerate the training of AI systems.

The global FPGA market size could reach $14 billion by 2028, according to one estimate, up from $6 billion in 2021.
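Those two figures imply a compound annual growth rate of roughly 13% per year. A quick back-of-the-envelope check, using only the numbers cited above:

```python
# CAGR implied by the market estimate above: $6B (2021) -> $14B (2028).
start, end, years = 6.0, 14.0, 2028 - 2021

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 13% per year
```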

One startup looking to get in on the ground floor is Rapid Silicon, which this week announced that it raised $15 million in a Series A round led by Cambium Capital. Launched in 2021, Rapid Silicon aims to promote, adopt and implement open source tech to address the low- to mid-range FPGA market, according to CEO and co-founder Naveed Sherwani.

“Rapid Silicon’s … software leads the programmable revolution as the industry’s first and only commercial open source FPGA design suite,” Sherwani, previously a GM at Intel and the former CEO of semiconductor startup SiFive, said in an email interview. “The latest round of funding will be used to further invest in Rapid Silicon’s product portfolio, support the launch of its premier low-end FPGA product … and to build on the company’s momentum in leading the adoption of open source software for commercial applications.”

Rapid Silicon is developing two products at present: Raptor and Gemini. Raptor is electronic design automation software with an interface for FPGA application design, while Gemini is a 16-nanometer FPGA whose hardware includes a dual-core Arm processor, an external memory controller and Ethernet connectivity. Sherwani emphasized that Raptor is based on open source software — another industry first, according to him — and designed to meet the needs of FPGA developers tackling challenges in the healthcare, automotive and industrial sectors.

“Customers are looking for innovative ways to program FPGAs, reduce support load by leveraging the open source ecosystem of active expertise and development engineers, and shorten time-to-market,” Sherwani added. “With open source software, Rapid Silicon is removing the barriers and providing its customers with a robust end-to-end FPGA design workflow. The open source software enables users to design complex applications quickly and efficiently on our FPGA devices.”

Gemini isn’t commercially available, but Sherwani says he expects the FPGA will come to market by the end of Q1. In the meantime, Rapid Silicon is generating revenue — between $2 million and $3 million a year — from licensing its IP.

The FPGA space has formidable competitors, including Intel, which acquired Altera in 2015 and U.K.-based Omnitek in 2019 to double down on FPGA-based solutions for video and AI applications. But Landon Downs, the managing partner at Cambium Capital, said that he sees “immense” potential in Rapid Silicon’s tooling and hardware strategy. While that might sound like hyperbole coming from a VC, Rapid Silicon evidently has investors intrigued; the company expects to close a $15 million extension of its Series A within the next few months at an $80 million pre-money valuation.

“Driven by its purpose and world-class talent, we believe Rapid Silicon is ready to revolutionize design-to-silicon turnaround time and provide solutions that meet and exceed the robust performance, power, area and time-to-market requirements for next-generation applications,” he said in a press release. “We see immense potential in the company’s AI-enhanced EDA tools, and we believe this team has the experience needed to bring these solutions to the global market.”

FPGA startup Rapid Silicon lands $15M to bring its first chip to market by Kyle Wiggers originally published on TechCrunch

VALL-E’s quickie voice deepfakes should worry you, if you weren’t worried already

The emergence in the last week of a particularly effective voice synthesis machine learning model called VALL-E has prompted a new wave of concern over the possibility of deepfake voices made quick and easy — quickfakes, if you will. But VALL-E is more iterative than breakthrough, and the capabilities aren’t so new as you might think. Whether that means you should be more or less worried is up to you.

Voice replication has been a subject of intense research for years, and the results have been good enough to power plenty of startups, like WellSaid, Papercup, and Respeecher. The latter is even being used to create authorized voice reproductions of actors like James Earl Jones. Yes: from now on Darth Vader will be AI generated.

VALL-E, posted on GitHub by its creators at Microsoft last week, is a “neural codec language model” that uses a different approach to rendering voices than many before it. Its larger training corpus and some new methods allow it to create “high quality personalized speech” using just 3 seconds of audio from a target speaker.

That is to say, all you need is an extremely short clip like the following (all clips from Microsoft’s paper):


https://techcrunch.com/wp-content/uploads/2023/01/in1.wav

https://techcrunch.com/wp-content/uploads/2023/01/in2.wav

To produce a synthetic voice that sounds remarkably similar:

https://techcrunch.com/wp-content/uploads/2023/01/outcome1.wav

https://techcrunch.com/wp-content/uploads/2023/01/outcome2.wav

As you can hear, it maintains tone, timbre, a semblance of accent, and even the “acoustic environment,” for instance a voice compressed into a cell phone call. I didn’t bother labeling them because you can easily tell which of the above is which. It’s quite impressive!

So impressive, in fact, that this particular model seems to have pierced the hide of the research community and “gone mainstream.” As I got a drink at my local last night, the bartender emphatically described the new AI menace of voice synthesis. That’s how I know I misjudged the zeitgeist.

But if you look back a bit: as early as 2017, all you needed was a minute of voice to produce a fake version convincing enough that it would pass in casual use. And that was far from the only project.

The improvement we’ve seen in image-generating models like DALL-E 2 and Stable Diffusion, or in language ones like ChatGPT, has been a transformative, qualitative one: a year or two ago this level of detailed, convincing AI-generated content was impossible. The worry (and panic) around these models is understandable and justified.

Contrariwise, the improvement offered by VALL-E is quantitative, not qualitative. Bad actors interested in proliferating fake voice content could have done so long ago, just at greater computational cost; compute is not particularly difficult to find these days. State-sponsored actors in particular would have plenty of resources at hand for the kind of compute jobs needed to, say, create a fake audio clip of the President saying something damaging on a hot mic.

I chatted with James Betker, an engineer who worked for a while on another text-to-speech system, called Tortoise-TTS.

Betker said that VALL-E is indeed iterative, and like other popular models these days gets its strength from its size.

“It’s a large model, like ChatGPT or Stable Diffusion; it has some inherent understanding of how speech is formed by humans. You can then fine tune Tortoise and other models on specific speakers, and it makes them really, really good. Not ‘kind of sounds like,’ good,” he explained.

When you “fine tune” Stable Diffusion on a particular artist’s work, you’re not retraining the whole enormous model (that takes a lot more power), but you can still vastly improve its capability of replicating that content.
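That idea can be sketched in miniature: freeze a large weight matrix and train only a small low-rank adapter on top of it, so the base model itself never changes. (This is an illustrative NumPy toy, not the actual Stable Diffusion or Tortoise fine-tuning code; all shapes and the training step are made up for the example.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "base model": one large frozen weight matrix.
W_base = rng.normal(size=(512, 512))
W_before = W_base.copy()  # snapshot, to show the base never changes

# Small low-rank adapter (the only trainable part). B starts at zero,
# so the adapted model is initially identical to the base model.
rank = 4
A = rng.normal(size=(512, rank)) * 0.01
B = np.zeros((rank, 512))

def forward(x):
    # Adapted layer: base output plus a low-rank correction.
    return x @ W_base + x @ A @ B

# One toy gradient step on B only (squared-error loss); W_base is untouched.
x = rng.normal(size=(1, 512))
target = rng.normal(size=(1, 512))
lr = 1e-3
error = forward(x) - target   # (1, 512)
grad_B = (x @ A).T @ error    # d(loss)/dB, shape (rank, 512)
B -= lr * grad_B
```

The frozen base weights stay byte-for-byte identical; only the tiny adapter moves, which is why this kind of fine-tuning is so much cheaper than retraining the whole model.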

But just because it’s familiar doesn’t mean it should be dismissed, Betker clarified.

“I’m glad it’s getting some traction because I really want people to be talking about this. I actually feel that speech is somewhat sacred, the way our culture thinks about it,” he said; he actually stopped working on his own model as a result of these concerns. A fake Dalí created by DALL-E 2 doesn’t have the same visceral effect on people as hearing something in their own voice, that of a loved one, or of someone admired.

VALL-E moves us one step closer to ubiquity, and although it is not the type of model you run on your phone or home computer, that isn’t too far off, Betker speculated. A few years, perhaps, until you can run something like it yourself. As an example, he sent this clip of Samuel L. Jackson he’d generated on his own PC using Tortoise-TTS, based on Jackson’s audiobook readings:

https://techcrunch.com/wp-content/uploads/2023/01/samuel_jackson.mp3

Good, right? And a few years ago you might have been able to accomplish something similar, albeit with greater effort.

This is all just to say that while VALL-E and the 3-second quickfake are definitely notable, they’re a single step on a long road researchers have been walking for over a decade.

The threat has existed for years and if anyone cared to replicate your voice, they could easily have done so long ago. That doesn’t make it any less disturbing to think about, and there’s nothing wrong with being creeped out by it. I am too!

But the benefits to malicious actors are dubious. Petty scams that use a passable quickfake based on a wrong-number call, for instance, are already easy to pull off because security practices at many companies are lax. Identity theft doesn’t need to rely on voice replication because there are so many easier paths to money and access.

Meanwhile the benefits are potentially huge — think about people who lose the ability to speak due to an illness or accident. These things happen quickly enough that they don’t have time to record an hour of speech to train a model on (not that this capability is widely available, though it could have been years ago). But with something like VALL-E, all you’d need is a couple clips off someone’s phone of them making a toast at dinner or talking with a friend.

There’s always opportunity for scams and impersonation and all that — though more people are parted from their money and identities via far more prosaic means, like a simple phone or phishing scam. The potential of this technology is huge, but we should also listen to our collective gut telling us there’s something dangerous here. Just don’t panic — yet.

VALL-E’s quickie voice deepfakes should worry you, if you weren’t worried already by Devin Coldewey originally published on TechCrunch

Robot or fauxbot?

This is my week of debriefs. CES has a way of hurling you into the new year, kicking and screaming, and it can be hard to find your bearings as you emerge on the other side. As the dust has cleared, one thing remains very clear: No two people have the same notion of what does and doesn’t constitute a robot.

That’s not a problem, per se. Language evolves and so does technology. I was on the Equity Podcast this week to talk CES with Haje, one of our hardware reporters. He posited that much — or even most — technology is essentially a robot at this point. One can fairly credibly make the argument that “robot” as a term is much broader than how we tend to deploy it. Certainly, machine learning is becoming pervasive in ways few imagined.

The upside of casting as wide a net as possible is that one can also make the case that ubiquitous robotics isn’t some vision of the future. It’s very much already here, and maybe that’s heartening. A stricter definition — and one I’ve tended to prefer — involves some level of autonomy and perception. You know, taking in information from external sources and making decisions accordingly, much like we do.

I’m not wedded to this definition to the point that it precludes me from covering other stories in the space that don’t precisely line up. Nor is everything that qualifies necessarily in my purview. For one thing, we have a couple of automotive reporters who are very good at their jobs, so they take first crack at all of the self-driving car stuff. Truth of the matter is, our definition of what does and doesn’t qualify as a robot is also governed by some editorial decision making. It can also be porous.

This could easily become a newsletter about the eight billion lidar startups out there, but I don’t really want that, and I suspect most of you don’t, either. For me, at least, there’s not that much value in strict adherence to pedantry or orthodoxy when I determine what does and doesn’t make sense on these pages. But it is helpful to have some guardrails.

Seeing a “smart” washing machine in this newsletter might be novel the first time, but you would (rightfully) get annoyed with me if I suddenly started forcing them on you every week. Robotic vacuums, on the other hand, do tick off the robot requirements for many — or most — people, myself included. I absolutely cover them here, but I’ve also been doing this long enough to know that covering every single robot vacuum on the market is a good way to hemorrhage readers.

Image Credits: Robosen

All of this stuff bleeds together at a show like CES. I spent time with Robosen’s extremely neat Transformer robots, for example. But I think it does a disservice to both parties if we attempt to compare those toys to, say, some industrial fulfillment robot exhibiting at the show. I — somewhat cheekily — suggested that robots are “cool technology used for uncool things.” This is obviously not a guiding principle, but it does point to something worth discussing here.

Weird and wacky robot toys are going to grab the headlines at a show like CES. We get that. They’re good for traffic and they’re fun. I’ve written about plenty of them and will likely continue to do so in the future when genuinely interesting ones surface. Another reason people are drawn to them is that they more closely resemble a kind of platonic robot ideal. Robot toys look how we think a robot should look. Whatever you ultimately think about science fiction’s impact on public perception, it’s important to realize that it will never go away.

As robots become a bigger fixture in our daily lives, however, the impact will increasingly be a dialogue. I paid some ungodly sum to see the new “Avatar” in the theater with all of the trappings. One thing that struck me — and likely you, too — was how much the U.S. military robots on Pandora resemble robots that exist in the world today. Want to annoy your date? Point out where each robot in a sci-fi movie gets its inspiration.

Predictably, the robotics writing I’ve done over the past week has largely revolved around CES because, frankly, most everything I’ve done over the past week has revolved around CES. No one ever said the life of a hardware editor would be an easy one, friends.

But two and a half newsletters is more than ample coverage for the show, so let’s round up a couple of the top non-CES stories, shall we?

Image Credits: Mineral

Another company graduated from X, Alphabet’s “moonshot” lab, this week. Two years after exiting stealth, Mineral is now its very own Alphabet company. The robot aspect here largely revolves around data collection: monitoring crops to give farmers deep, rich and actionable information for growing more sustainably and efficiently.

“After five years incubating our technology at X, Alphabet’s moonshot factory, Mineral is now an Alphabet company,” said CEO Elliott Grant. “Our mission is to help scale sustainable agriculture. We’re doing this by developing a platform and tools that help gather, organize and understand never-before known or understood information about the plant world — and make it useful and actionable.”

Image Credits: iRobot

File this one under: not the best look. Home robots and privacy are coming to an inevitable head. MIT Technology Review rounded up images captured by Roombas that eventually made their way onto social media; iRobot responded that the shots were all taken with user consent. This is an increasingly important discussion as we bring robots and camera-sporting tech into our homes. And it will likely be pointed to as a compelling argument against Amazon’s acquisition of the firm.

One user called it “a clear breach of the agreement on their side […and] also a violation of trust.” The big question that immediately springs to mind here is what is the reasonable expectation of privacy when inviting this sort of technology into your home?

Image Credits: Bryce Durbin/TechCrunch

That’s it for this week. Bit of a short one, but I’m gonna do my best to sleep through the weekend and be back with you this time next week. Meantime, please subscribe to Actuator here, if you haven’t already.

Robot or fauxbot? by Brian Heater originally published on TechCrunch

Career Karma’s latest layoff underscores edtech’s new challenge

Learning navigation platform Career Karma has laid off another 22 people across its global and domestic workforce, less than five months after it cut 60 staff members, according to sources. CEO and co-founder Ruben Harris confirmed the workforce reduction to TechCrunch.

The cut shows that even as many edtech companies attempt to right-size their staffs, there’s more work to do. Harris’ email to remaining staff underscores the tension of the moment: once-eager enterprise customers are still making up their minds on whether to sign up for new tools, leading to extended sales cycles and uncertainty.

“Last year, we made the decision to right size the company so that we can orient Career Karma towards working with employers and now that we have started to sign customers it’s clear that we made the right decision,” Harris wrote in the email. “What’s unclear is how Fortune 1000 companies will be responding to the macroeconomic environment and it’s important for us to give ourselves time to work with them to figure that out.” As the market evolves, Career Karma’s service of matching employees and professionals to tech bootcamps is being put in a difficult spot. Just last month, BloomTech, a coding bootcamp previously known as Lambda School, cut half of its staff in pursuit of profitability.

During Career Karma’s last cut, Harris emphasized that the layoff and its previously closed $40 million Series B would extend the startup’s runway to three years. After laying off staff this week, Career Karma now has five years of runway.

As TechCrunch has discussed in the past, the strategy of “extending your runway” always comes into vogue whenever investors slow down investing. Career Karma’s shift from the basic three-year rule of thumb to five years shows how that rule may become even more conservative as the downturn continues. Over email, Harris tells TechCrunch that he “always [wants] to have the option to raise, I just don’t want to be forced to raise.”

With 80 staff now remaining at Career Karma, Harris confirmed that no C-suite executives were impacted by the layoff. Those impacted were offered two months of severance as well as extended benefits. The career navigation platform also, fittingly, offered career navigation support to its new alumni.

Current and former Career Karma employees can reach out to Natasha Mascarenhas on Signal, a secure encrypted messaging app, at 925 271 0912. You can also DM her on Twitter, @nmasc_.

Career Karma’s latest layoff underscores edtech’s new challenge by Natasha Mascarenhas originally published on TechCrunch

Web3 could help fashion become more sustainable

Actually, there is one industry that could use web3, and that industry is fashion.

Hear us out. Fashion is one of the most polluting sectors in the world and, according to the United Nations Environment Programme, is responsible for up to 10% of the world’s carbon dioxide output, more than the international flights and maritime shipping industries combined. Eighty-five percent of clothes in the United States alone end up in landfills, and at least 20% of all water pollution results from textile dyeing. Fast fashion shoppers’ ravenous appetite isn’t abating anytime soon, and fashion’s supply chain remains hard on the environment.

A possible step toward finding the multiple solutions needed to fix this damaging sector is, well, embracing more web3.

Web3 could help fashion become more sustainable by Dominic-Madori Davis originally published on TechCrunch

TikTok fined in France for manipulative cookie consent flow

TikTok is the latest tech giant to be schooled by France’s data protection watchdog for breaking rules on cookie consent.

The €5 million penalty announced today by the CNIL relates to a cookie consent flow TikTok used on its website (tiktok.com) until early last year. The regulator found it was not as easy for users to refuse cookies as to accept them, meaning TikTok was essentially manipulating consent by making it easier for site visitors to accept its tracking than to opt out.

This was the case when the watchdog checked in on TikTok’s process, in June 2021, until the implementation of a “Refuse all” button on the site in February 2022 — which appears to have resolved the matter. (And may explain the relatively small fine levied in this case, along with the number of users and minors affected — as well as the enforcement relating only to its website, not its mobile app.)

Tracking cookies are typically used to serve behavioral advertising but can also be used for other site activity, such as analytics.

“During the check carried out in June 2021, the CNIL noted that while the companies TikTok United Kingdom and TikTok Ireland did offer a button allowing cookies to be accepted immediately, they did not put in place an equivalent solution (button or other) to allow the Internet user to refuse their deposit just as easily. Several clicks were necessary to refuse all cookies, against only one to accept them,” the watchdog notes in a press release [translated from French with machine translation].

“The Restricted Committee considered that making the refusal mechanism more complex actually amounts to discouraging users from refusing cookies and encouraging them to favor the ease of the “Accept all” button,” it added, saying it found TikTok had therefore breached a legal requirement for freedom of consent — a violation of Article 82 of the French Data Protection Act “since it was not as simple to refuse cookies as to accept them”.
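The standard the CNIL is applying boils down to a symmetry test: refusing cookies must not take more interactions than accepting them. As a rough illustration (the helper function and click counts below are hypothetical, not the CNIL’s actual methodology):

```python
# Toy model of a cookie banner's interaction cost, illustrating the
# "as easy to refuse as to accept" principle. Click counts are made up.
def is_consent_flow_balanced(clicks_to_accept: int, clicks_to_refuse: int) -> bool:
    """Refusing cookies must not require more clicks than accepting them."""
    return clicks_to_refuse <= clicks_to_accept

# TikTok's pre-2022 flow as described: one click to accept, several to refuse.
print(is_consent_flow_balanced(clicks_to_accept=1, clicks_to_refuse=3))  # False

# After adding a "Refuse all" button alongside "Accept all".
print(is_consent_flow_balanced(clicks_to_accept=1, clicks_to_refuse=1))  # True
```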

In addition, the CNIL found that TikTok had not informed users “in a sufficiently precise manner” of the purposes of the cookies — both on the information banner presented at the first level of the cookie consent and within the framework of the “choice interface” that was accessible after clicking on a link presented in the banner. Hence finding several breaches of Article 82.

The French enforcement has been taken under the European Union’s ePrivacy Directive — which, unlike the EU’s General Data Protection Regulation (GDPR), does not require complaints that affect users across the bloc to be referred back to a lead data supervisor in an EU country of main establishment (if a company claims that status — as TikTok does with Ireland for the GDPR).

This has enabled the French regulator to issue a series of enforcements over Big Tech cookie infringements in recent years — hitting the likes of Amazon, Google, Facebook and Microsoft with some hefty fines (and correction orders) since 2020, following a 2019 update to its guidance on the ePrivacy Directive which stipulated that consent is necessary for ad tracking.

France’s activity to clean up cookie consent looks like an important adjunct to slower paced cross-border GDPR enforcement — which is only just starting to have an impact on ad-based business models centred on consent-less tracking, such as the final decisions against Facebook and Instagram issued by the Irish Data Protection Commission earlier this month.

If tracking-and-profiling ad giants are forced to rely on gaining user consent to run behavioral advertising it’s critical that the quality of consent gathered is free and fair — not manipulated by deploying deceptive design tricks, as has typically been the case — so the CNIL’s ePrivacy cookie enforcements look important.

Only last summer, for instance, TikTok was prevented from switching away from relying on user consent as its legal basis for processing people’s data to run ‘personalized’ ads to a claim of legitimate interest as the legal basis (implying it intended to stop asking users for their consent) after intervention by EU data protection authorities who warned it such a move would be incompatible with the ePrivacy Directive (and likely breach the GDPR too).

While enforcements under ePrivacy only apply in the regulator’s own market (France, in this case), the impact of these decisions may be wider. Google, for example, followed a sanction from the CNIL by revising how it gathers consent to cookies across the EU. Not every company may respond that way, but there is likely to be a cost associated with applying different compliance configurations for different EU markets versus just applying one (high) standard in all of them. So ePrivacy enforcement may help set the EU bar.

TikTok was contacted for comment on the CNIL’s sanction. A spokesperson for the company sent us this statement:

These findings relate to past practices that we addressed last year, including making it easier to reject non-essential cookies and providing additional information about the purposes of certain cookies. The CNIL itself highlighted our cooperation during the course of the investigation and user privacy remains a top priority for TikTok.

TikTok fined in France for manipulative cookie consent flow by Natasha Lomas originally published on TechCrunch

HBO Max’s ad-free monthly subscription is increasing by $1

HBO Max is raising the price of its ad-free monthly subscription in the U.S. from $14.99 to $15.99 plus applicable taxes, effective immediately for new subscribers. The change marks the first time that HBO Max has increased the price of its service since launching in May 2020.

“Existing subscribers who are currently paying $14.99/month will see their monthly rate increase to $15.99 effective their next billing cycle on or after Saturday, February 11, 2023,” the company said in a statement. “This price increase of one dollar will allow us to continue to invest in providing even more culture-defining programming and improving our customer experience for all users.”

The cost of HBO Max’s ad-supported tier will remain unchanged at $9.99 per month.

The price hike comes a few days before the debut of HBO Max’s highly anticipated “The Last of Us” TV adaptation on January 15. The launch of the TV show is seen as a way for HBO Max to convince fans of the popular game to subscribe to the streaming service.

It’s an odd time for HBO Max to introduce a price hike, given that it has been removing several titles from its service over the past few months. Last month, the company confirmed that it will be moving more than 10 HBO Max original series to third-party free ad-supported streaming TV (FAST) services. These titles include “Westworld,” “The Nevers,” “Raised by Wolves,” “FBOY Island,” “Legendary,” “Finding Magic Mike,” “Head of the Class,” “The Time Traveler’s Wife,” “Gordita Chronicles,” “Love Life,” “Made for Love,” “The Garcias” and “Minx.”

Warner Bros. Discovery CEO David Zaslav recently said that it will be hard to meet the company’s 2023 earnings forecast of $12 billion. The price increase announced today could be a way for the company to lessen the blow.

HBO Max’s ad-free monthly subscription is increasing by $1 by Aisha Malik originally published on TechCrunch

Virgin Orbit says issue with rocket’s second stage led to mission failure

Virgin Orbit, the unconventional rocket company founded by billionaire Sir Richard Branson, said its mission failure earlier this week was due to an anomaly with the rocket’s second stage.

Although the LauncherOne rocket managed to reach space and achieve stage separation, the anomaly prematurely terminated the first burn of the upper stage’s engines, at an altitude of around 180 kilometers, Virgin said in a statement. Due to this engine anomaly, both the rocket components and payload fell back to Earth and were destroyed upon atmospheric reentry.

The mission payload consisted of nine small satellites, including two CubeSats for the United Kingdom’s Ministry of Defence, a first test satellite from Welsh in-space manufacturing startup Space Forge and what would’ve been Oman’s first Earth observation satellite.

Virgin Orbit engineers and board members have already begun an analysis of mission telemetry data to identify the cause of the anomaly. The company added that a formal investigation into the source of the failure will be led by Jim Sponnick, former VP for the Atlas and Delta launch system programs at United Launch Alliance, and Virgin Orbit’s chief engineer, Chad Foerster.

The company said the investigation will be complete, and corrective measures implemented, before LauncherOne’s next flight from California’s Mojave Air and Space Port. But how long that will take, and when we’ll next see Virgin’s Boeing 747 and rocket system take to the air again, is far from clear. Virgin said it was in talks with the U.K. government to conduct another launch from the country’s new spaceport in Cornwall “as soon as later this year.”

That degree of uncertainty is never good for a public company, but it’s likely especially straining for Virgin Orbit, which is facing dwindling cash reserves and a pressing need to ramp up launch cadence to boost revenues. As of September 30, the company had $71 million in cash on hand; by the end of the year, Virgin got an injection of $25 million from Richard Branson’s Virgin Group and $20 million from Virgin Investments Ltd. But these funds will do little but delay the inevitable if Virgin doesn’t return to launch soon.

Virgin Orbit says issue with rocket’s second stage led to mission failure by Aria Alamalhodaei originally published on TechCrunch

The Logic School wants to teach tech workers activism

Product folks and engineers know what they are doing, and by and large, they — and the companies they work for — have a disproportionate amount of power over how the world is shaped. Through a 13-week course (delivered free, with support from the Omidyar Network), Logic School aims to teach tech workers to organize to help identify and rectify structural inequities.

Ultimately, the goal of the school is laudable: offering the kind of people who are likely to throw their hands up and say ‘if only I could do something’ the means and techniques to do just that, whether that is through advocacy, identification or ideas for how to speak out in situations where that’s needed.

The school works through a range of writing and current research on tech and the broader industry, covering topics such as critical race theory, economics and sociology.

In addition to the learning element, the school builds a cohort of colleagues banded together around a common goal: working toward a more equitable tech industry.

Lecturers include folks like Clarissa Redwine from the Kickstarter Union Oral History, Alex Hanna and Timnit Gebru from Distributed AI Research Institute (DAIR), Ari Melenciano from Afrotectopia/NYU/Google Creative Lab, Blunt from Hacking//Hustling, Erin McElroy, Assistant Professor of American Studies at UT Austin, Anti-Eviction Mapping Project and Shazeda Ahmed, Princeton University, Center for Information Technology Policy.

If this sounds exciting to you, apply quickly — applications close tomorrow.

The Logic School wants to teach tech workers activism by Haje Jan Kamps originally published on TechCrunch

TikTok launches a Talent Manager Portal so managers can negotiate brand deals for clients

TikTok is making it easier for brands to work with its “megastar” creators with an update to its Creator Marketplace that now invites talent managers to oversee, execute and analyze the brand opportunities and campaigns being presented to their clients. This week, the video entertainment platform introduced a new Talent Manager Portal as a part of the TikTok Creator Marketplace — its platform that allows brands and agencies to connect with 800,000 qualified creators around the world.

The new service allows talent managers, with creator authorization, to log into the Creator Marketplace to manage deal flow, negotiate contracts on behalf of their talent, handle creative feedback and review reports and metrics about a campaign’s performance. The expansion allows TikTok to serve not only creators with tens or hundreds of thousands of followers but “celebrity-level” creators as well.

For example, TikTok stars like the D’Amelio sisters began working with the agency UTA in 2020 as their online fame led them into new areas, like podcasts, books, TV, licensing, tours and other endorsements. It would make sense that they’d want their UTA reps to review brand inquiries and negotiate deals on their behalf through such a portal, rather than doing it themselves.

TikTok confirmed the Talent Manager Portal is in alpha testing right now. Several agencies have already signed up for the free service, but TikTok says it isn’t able to share the names of testers at this time.

In addition, TikTok notes that the talent managers will have access only to their client’s Marketplace accounts, not the creators’ actual TikTok accounts.

The system aims to complement the Creator Marketplace’s existing offerings, targeted toward brands that want to capitalize on the performance of creator-led advertising, which TikTok says delivered higher ad recall for 71% of brands surveyed.

First launched in 2019, the TikTok Creator Marketplace plays a key role in the growing creator monetization ecosystem, joining similar platforms offered by Facebook, Instagram, Snap and YouTube that aid creators in developing relationships within the influencer marketing space. Beyond being a destination itself, the Creator Marketplace also introduced an API in 2021 that allows marketing companies like Captiv8 and Influential to tap into its first-party data within their own systems.

Before such marketplaces existed, brands looking to work with top creators had to do more manual labor: they’d have to scroll the app or use search terms to discover creators, and they couldn’t filter their searches by specific parameters. The TikTok Creator Marketplace puts more tools at their fingertips, allowing brands to curate creators by keywords, the content being posted, and filters around metrics like audience size and makeup.


Brands can choose to work with talent by reaching out directly (a “direct invitation”) or through “application campaigns,” where they create a brief and creators pitch themselves for the opportunity. The marketplace’s match tool also uses AI and natural language processing to map creators to the brief based on the content they’re posting, helping to further automate the process.

Now leading the team behind the Creator Marketplace is Adrienne Lahens, the global head of operations for TikTok’s Creator Marketing Solutions, previously COO at Influential. In her current role, which she’s held for around a year and a half, Lahens is focused on helping TikTok’s creators make a living through branded content and brand and creator collaborations.

TikTok says brands that work with creators see a 26% lift in brand favorability and a 22% lift in brand recommendations. In addition, 71% of TikTok users say that a creator’s authenticity is what motivates them to make a purchase from a brand.


Overcoming challenges around creator monetization is key to retaining top talent on TikTok’s app, especially in light of heavy competition from other tech giants, including Meta, Snap and YouTube — the latter of which just announced it will begin sharing ad revenue with its Shorts (short-form video) creators as of February 1. (Though TikTok announced a revenue-share program of its own last year, it hasn’t yet scaled.)

With brand campaigns, some top TikTok creators are earning tens of thousands of dollars — and, in select cases, hundreds of thousands — through the Creator Marketplace. Other campaigns may be smaller in scale, offering only gifting, for instance, instead of payments.

TikTok did not say how long its new Talent Manager Portal would remain in alpha testing before launching more publicly.

TikTok launches a Talent Manager Portal so managers can negotiate brand deals for clients by Sarah Perez originally published on TechCrunch