Banish vanity metrics from your startup’s pitch deck

Oh man, you got 300 email sign-ups, awesome! Goodness, your web traffic spiked by 200%! Yesssss! Holy god, you got a feature article on TechCrunch — well done! You won an award from the regional chamber of commerce! Break open the champagne, right?

Not so fast. These moments of excitement are, in fact, your body lying to you. The little hits of dopamine feel so good. You want more.

You know who doesn’t care? Your would-be investors.

At the earliest stages of raising money, before you have any real traction, it can be tempting to take anything that looks like traction and shout it from the rooftops. The truth is that investors know what real traction looks like, and none of the above qualify. And yet, I’ve seen all of them in pitch decks. Trust me: At best, your investor doesn’t care. At worst, it shows that you are a founder who doesn’t know what’s important when you’re building your business, which is a huge red flag for investors.

The goal of a startup is to stop being a startup

I subscribe to Steve Blank’s definition of a startup: A “temporary organization in search of a repeatable, sustainable business model.”

Banish vanity metrics from your startup’s pitch deck by Haje Jan Kamps originally published on TechCrunch

We had thoughts in 2022. Here are the top takes from the TechCrunch+ team

In 2022, uncertainty continued: Major acquisitions took place, layoffs swept the tech industry and Elon Musk bought Twitter.

While that last one may not have been on your 2022 bingo card, it certainly caused quite a bit of commotion here at TechCrunch — and got us talking. This year, a big trend for us was doing “three views” and other collaboration pieces. It’s a fun way for us to work with our colleagues while offering differing opinions about trending topics in the tech space. Here are some of our favorites:

3 views on Amazon’s $3.9B acquisition of One Medical

Earlier this year, Amazon acquired One Medical for $3.9 billion (yes, that’s billion with a B). We gathered some TechCrunch+ staff to get their thoughts on the purchase. Alex Wilhelm was skeptical because, frankly, he doesn’t want Amazon as his health provider. Miranda Halpern (me, hello, hi) felt that the acquisition followed a logical progression for Amazon, which entered the healthcare space in 2018. Walter Thompson saw it as a chance for Amazon to accrete additional mass.

Should Oracle or Alphabet buy VMware instead of Broadcom?

VMware was freed from Dell in April 2022, and Alex and Ron Miller wrote about who they thought might buy it. In May, the Broadcom-VMware deal was a go, but Alex and Ron found themselves on opposite sides of a hypothetical: Would a higher price or another bidder make sense? Alex didn’t feel that VMware deserved a higher price, while Ron thought the company’s value was higher than its financial results at the time suggested.

3 views: Thoughts on Flow

We’ve all heard by now that the reason millennials won’t be able to purchase a home is that they keep buying avocado toast. As a zillennial, I disagree with the avocado toast sentiment, partially because I’m allergic to avocados but mainly because homes are no longer affordable. Adam Neumann’s latest startup, Flow, backed by Andreessen Horowitz (a16z), aims to revolutionize the rental industry. Tim De Chant, Dominic-Madori Davis and Amanda Silberling shared their thoughts on whether Flow will make a difference — and whether Neumann even deserved the VC funding at all.

3 views: Pay attention to these startup theses in 2022

In 2022, Alex predicted that open source would become the de facto startup model. Natasha Mascarenhas posited that everything would be hybridized. Anna Heim, meanwhile, suggested that a majority of SaaS companies would adopt usage-based pricing. Some of their predictions for 2022 came true; others fell short. In true TechCrunch fashion, they followed up that article by predicting 2023’s key startup themes. We’ll check back in a year to see how well those hold up.

TechCrunch staff on what we lose if we lose Twitter

Where would you scream into the void if Twitter were to disappear (read: die)? While that may be the question for some people, others would miss it for more important reasons. Dominic would miss the community aspect of Twitter, specifically Black Twitter. “The memes are endless, as is the support — and the heat — we give and place onto people and topics. It was a place to find community in a world so unkind to us. It really does feel like its own universe sometimes,” she wrote. Check out the full article to see what Ron, Amanda, Christine Hall, Paul Sawers, Natasha, Ivan Mehta and Alex worry about losing if Twitter goes belly up.

We had thoughts in 2022. Here are the top takes from the TechCrunch+ team by Miranda Halpern originally published on TechCrunch

User Interviews, which helps companies recruit survey participants, raises $27.5M

Most companies agree that user experience is important. In a 2019 report from UserZoom, 70% of enterprise CEOs said that they see user and customer experience as a competitive differentiator. But figuring out what exactly users want — and what frustrates them — can prove to be a challenge. Customer satisfaction and market research surveys have response rates that hover around 10% on the low end, and many user experience researchers say they don’t have enough time to analyze the results.

The demand for a solution has led to a wellspring of software-based user research tools, like UserLeap, Airkit and UserZoom. Platforms such as Great Question and Ribbon seek to simplify the process of interviewing customers about product ideas and strategy, while services like Sprig and Maze let product teams observe how users interact with a product and generate reports.

Another player in the highly competitive market is User Interviews, which focuses on the problem of user research recruiting. Co-founded by Dennis Meng, Bob Saris and Basel Fakhoury, the idea for User Interviews arose from a mobile travel app that wasn’t getting a lot of traction.

“As we tried to pivot and find a new idea, we began to do a lot of user research to validate our hypotheses,” Fakhoury, who serves as User Interviews’ CEO, told TechCrunch in an email interview. “The more research we did, the more passionate we were about how valuable research could be and realized there was a huge pain point around finding participants for studies. We then did more research to validate this opportunity and were blown away by how strong the signal was: participant recruiting is the most painful part of user experience research by a mile.”

And the stakes of letting customer research efforts fall through, whether because of recruitment-related reasons or otherwise, can be high. According to an Adobe study, 38% of people will stop engaging with a website if images won’t load or take too long. Clicktale reports that 73% of brands can’t provide a consistent experience across their different digital channels, hurting customers’ impressions of the brands.

User Interviews — which today closed a $27.5 million Series B round that brings the company’s total raised to around $45 million — offers two products aimed at addressing this pain point. One, called Recruit, is designed to help user experience researchers source study participants across different demographics and behavioral criteria. The other, Research Hub, serves as a customer relationship management tool for research teams, allowing them to build user panels for research while streamlining the logistics of getting customers into studies.

Image Credits: User Interviews

Anyone can sign up to participate in a User Interviews-facilitated survey; more than 2.4 million have signed up to date. Once a user creates a profile, they can apply to a study, after which a researcher will approve or deny their admission. Surveyors can choose to “double screen” participants, which might involve contacting users to have them sign an NDA or consent form, and they can opt to reward participants with gift cards and other forms of monetary compensation (usually amounting to between $50 and $200).
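The participant flow described above — create a profile, apply to a study, get approved or denied, optionally pass a double screen, then receive compensation — can be sketched as a small state machine. This is purely an illustrative model, not User Interviews’ actual implementation; every state and method name here is hypothetical.

```python
from enum import Enum, auto

class Stage(Enum):
    PROFILE_CREATED = auto()
    APPLIED = auto()
    APPROVED = auto()
    DENIED = auto()
    DOUBLE_SCREENED = auto()
    COMPENSATED = auto()

# Allowed transitions in this hypothetical participant lifecycle.
TRANSITIONS = {
    Stage.PROFILE_CREATED: {Stage.APPLIED},
    Stage.APPLIED: {Stage.APPROVED, Stage.DENIED},
    Stage.APPROVED: {Stage.DOUBLE_SCREENED, Stage.COMPENSATED},
    Stage.DOUBLE_SCREENED: {Stage.COMPENSATED},
}

class Participant:
    def __init__(self):
        self.stage = Stage.PROFILE_CREATED

    def advance(self, target: Stage) -> None:
        """Move to the next stage, rejecting transitions the flow doesn't allow."""
        if target not in TRANSITIONS.get(self.stage, set()):
            raise ValueError(f"cannot move from {self.stage.name} to {target.name}")
        self.stage = target

p = Participant()
p.advance(Stage.APPLIED)
p.advance(Stage.APPROVED)
p.advance(Stage.COMPENSATED)  # double screening is optional
print(p.stage.name)
```

Modeling the flow this way makes the optional double-screen step explicit: it is simply one of two legal paths out of the approved state.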

That pay range is on the higher end for customer survey portals, but some recent participant reviews of the User Interviews experience on Trustpilot aren’t especially positive. We’ve reached out to the company for more information about why that might be.

According to Fakhoury, User Interviews uses machine learning models to prevent and identify survey fraud. In a support page on its website, the company says that of the roughly 50,000 participants active on its platform each month, around 0.3% — ~150 — are flagged as suspicious.

“With Recruit, Research Hub and a growing suite of integrations, User Interviews is differentiated as a complete solution for participant recruitment and management that plays nicely with any tools researchers like to use for their testing and insights management needs,” Fakhoury said. “We are faster, cheaper and more flexible than established recruiting agencies and our speed, cost and intuitive user experience have opened quality research recruiting to new audiences, like product managers and user experience designers, who previously would try to ‘DIY’ their research recruiting with poor results.”

Fakhoury didn’t reveal revenue figures when asked. But he said that User Interviews currently counts “thousands” of brands in its customer base, including Adobe, CNN, Amazon, Intuit, the Mayo Clinic, Spotify, Pinterest and Citibank.

Sageview Capital led User Interviews’ Series B with participation from Teamworthy, Accomplice, Las Olas VC, Trestle Ventures, ValueStream, ERA’s Remarkable Ventures and FJ Labs. Fakhoury says that the investment will “fuel growth” and help to “further build” the company’s core products.

User Interviews, which helps companies recruit survey participants, raises $27.5M by Kyle Wiggers originally published on TechCrunch

Porsche pumps first synthetic fuel as Chilean plant finally starts producing

After years of promises and millions in investments, Porsche today pumped the first gallons of its fully synthetic fuel into a car. That car? A 911, of course.

Porsche has been talking about eFuels since 2020, when it made a €20 million investment in a project with Siemens Energy to create a pilot plant in Punta Arenas, Chile. The house that Ferdinand built then backed that up with a further $75 million investment earlier this year, taking a 12.5% stake in HIF Global, the holding company for these eFuel production efforts.

eFuels are meant to be carbon-neutral alternatives, allowing legacy vehicles to continue operating in the face of growing restrictions on carbon output from passenger vehicles. However, it’s all theory at this point. While bans on the sale of internal combustion vehicles are already on the books in many places, starting in 2035 in California and the EU, no exemptions have yet been granted for eFuels anywhere. The EU plans to draft a proposal covering “CO2 neutral fuels” and whether they may prove exempt, but that may apply only to commercial vehicles.

Michael Steiner, Member of the Executive Board at Porsche, hopes such an exemption would cover eFuels use in his company’s cars: “This is still in progress, but at least our expectation is that we could use such eFuel also in passenger cars, especially Porsche cars. This is expectation, but this is not finalized today.”

Image Credits: Porsche

For now, Porsche’s eFuels will exclusively be used off-road, powering the company’s global Porsche SuperCup series. With Porsche strongly rumored to be entering into Formula One soon, and with that series set to switch to carbon-neutral fuels by 2026, it’s not hard to see potential there, too.

Why Chile? eFuels are heavily dependent on splitting water into its component elements: hydrogen and oxygen. To be done effectively, this electrolysis requires a lot of cheap electricity, provided in Chile by the constant, high winds. Punta Arenas is said to be the windiest area in South America, a force converted into electricity by Siemens Gamesa wind turbines.

The hydrogen from that process is then mixed with CO2 extracted from the air to create a form of methanol. This raw material can then be further refined for a variety of products, including the eFuels that Porsche will use to power its race cars today and hopes will keep its historic vehicles on the road well into the future.
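The two steps described above correspond to standard textbook reactions (the plant’s exact process chemistry hasn’t been published, so this is the generic stoichiometry, not Porsche’s or HIF’s specific recipe):

```latex
\begin{align}
2\,\mathrm{H_2O} &\xrightarrow{\text{electrolysis}} 2\,\mathrm{H_2} + \mathrm{O_2} \\
\mathrm{CO_2} + 3\,\mathrm{H_2} &\rightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \quad \text{(methanol synthesis)}
\end{align}
```

The methanol output of the second reaction is the raw material that can then be refined into gasoline-compatible eFuel.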

Porsche’s initial plans called for 130,000 liters of the stuff by the end of 2022. Given the date, and the size of that 911’s tank (67 liters at most), it seems clear that goal will be met later than planned. Porsche’s next target is 55 million liters per year within the next three years. At that volume, Porsche’s Michael Steiner says the production cost will drop to roughly $2 per liter.

Right now, average fuel prices in Germany are approximately $1.75 per liter, but that’s at the pump. Transportation, taxes, and other fees will mean eFuels will continue to be significantly more expensive than traditional fuels for some time to come, but their carbon-neutral nature may still make them appealing options for commercial applications in particular.

“There are several initiatives all around the world,” Steiner told me. “Some regions look for tax benefits, some look for blending quotas for different sectors. So this is still open which markets might be most favorable for eFuels.”

One thing is clear: Regardless of the success of eFuels, and of any exemptions for carbon-neutral internal combustion, Porsche is sticking to its goal of 80% EV sales by 2030.

“We have a clear strategy,” Steiner said. “The main focus is e-mobility, but in addition we take care of our ICE cars.” Porsche is, of course, a brand with a strong history. The 911 fueled up today was just one of the more than one million Porsche has produced since 1963. Keeping them running is clearly a strong incentive.

Porsche pumps first synthetic fuel as Chilean plant finally starts producing by Tim Stevens originally published on TechCrunch

TikTok’s new feature will tell you why a particular video appeared in your For You feed

TikTok is launching a new feature that lets users see why a particular video was recommended to them in their For You feed, the company announced on Tuesday. The feature is designed to bring more context to recommended content, TikTok says.

To understand why a particular video was recommended in your For You feed, you can now open the share panel and select the question mark icon labeled “Why this video.” From there, you can see the reasons the video was recommended to you.

Image Credits: TikTok

You may be informed that you saw a particular video because of your interactions, such as content you watch, like or share, comments you post, or searches. Or, you may be told that you have been shown the video because of accounts you follow. TikTok says you may also be informed that you were shown a particular video because it was posted recently in your region or that the content is popular in your region.
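TikTok hasn’t published how these explanations are generated. As a purely hypothetical sketch, the reason categories described above could be modeled as a lookup from recommendation signals to user-facing strings; every name and string below is illustrative, not TikTok’s actual API or copy.

```python
# Hypothetical mapping from recommendation signals to displayed explanations.
REASONS = {
    "interaction": "Based on content you watched, liked or shared",
    "comment": "Based on comments you posted",
    "search": "Based on your recent searches",
    "following": "Posted by an account you follow",
    "recent_regional": "Recently posted in your region",
    "popular_regional": "Popular in your region",
}

def explain(signals: list[str]) -> list[str]:
    """Return user-facing explanations for the signals behind a recommendation."""
    return [REASONS[s] for s in signals if s in REASONS]

print(explain(["search", "following"]))
```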

“This feature is one of many ways we’re working to bring meaningful transparency to the people who use our platform, and builds on a number of steps we’ve taken towards that goal,” the company said in a blog post. “Looking ahead, we’ll continue to expand this feature to bring more granularity and transparency to content recommendations.”

TikTok’s personalized For You page algorithm is largely behind the app’s success, thanks to its ability to show users content they will likely find interesting. But the system isn’t perfect, and you may sometimes come across a video that doesn’t appeal to you. In cases like these, you can now learn more about why the video appeared on your For You page. Although TikTok has already explained how its recommendations work in general, the feature launching today gives users specific context about why a particular video was shown to them.

TikTok’s new feature will tell you why a particular video appeared in your For You feed by Aisha Malik originally published on TechCrunch

Simplify debugging to reduce the complexity of embedded system development

The complexity associated with the development of embedded systems is increasing rapidly. For instance, it is estimated that the average complexity of software projects in the automotive industry has increased by 300% over the past decade.

Today, every piece of hardware is driven by software and most hardware is composed of multiple electronic boards running synchronized applications. Devices have more and more features, but adding features means increasing development and debugging complexity. A University of Cambridge report found that developers spend up to 50% of their programming time debugging.

Thankfully, there are practical ways to reduce the complexity of debugging embedded systems. Let’s take a look.

Earlier is better

Debugging will only be efficient if you have the right information.

Bugs will pop up during the entirety of a product’s lifetime: in development, testing and in the field. Resolving a bug later down the road can increase costs by as much as 15 times and lead to user frustration, in addition to creating challenges associated with updating embedded devices that are in production.

However, identifying bugs at the early stages of your product’s development will allow you to track them while prioritizing them by severity. This lets you debug before other dependencies and variables are introduced later in the lifecycle, which makes bugs easier and cheaper to resolve.

Manage versioning

To properly replicate a bug, you must be able to have a device in the exact same state it was when the bug first appeared. With embedded devices, there are three distinct variables to look at when issues crop up:

The software version. This is the version of each feature. This applies to the code you build as well as to potential dependencies, such as imported libraries.
The board version. Specifically, the design of the board. Board design changes constantly as components are added/removed or moved around.
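One practical way to act on the variables above is to capture them together as a single fingerprint attached to every bug report, so a device can later be restored to the exact state in which the bug appeared. A minimal sketch follows; the field names and structure are illustrative, not taken from any particular toolchain.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class DeviceState:
    software_version: str       # the build of your own code
    dependency_versions: dict   # imported libraries, e.g. {"rtos": "10.4.6"}
    board_revision: str         # the hardware design revision

    def fingerprint(self) -> str:
        """Stable hash so two reports from identical setups match exactly."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

state = DeviceState("2.1.0", {"rtos": "10.4.6"}, "rev-C")
print(state.fingerprint())
```

Because the hash covers software, dependencies and board revision together, any change to one of them produces a different fingerprint, flagging that a bug may not reproduce on the new setup.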

Simplify debugging to reduce the complexity of embedded system development by Ram Iyer originally published on TechCrunch

Early-stage Mexico fintech Aviva is making loans as easy as a video call

Some 40 million Mexicans are excluded from certain financial products because banks don’t think they are a segment worth going after. Filiberto Castro does.

The former banking executive worked at banks including Citi and Scotiabank for nearly a decade before moving into the fintech space to be chief of growth at Konfio. That’s where Castro said he saw how well technology could help people access financial services that were previously out of reach.

He met his co-founders David Hernandez and Amran Frey at Konfio, and, along with Israel Garcia, started Aviva, a Mexico-based fintech startup focused on bringing working capital to unserved communities.

Aviva’s approach uses artificial intelligence and natural language processing to match customers’ spoken word to the fields of a real-time credit application. Within minutes, customers can qualify for a nano-business or house improvement loan of up to $1,000.
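Aviva hasn’t detailed its models, but the core idea — mapping spoken answers onto structured application fields — can be illustrated with a naive keyword matcher over a transcript. A real system would combine speech recognition with trained NLP models; everything below, from the field names to the patterns, is a hypothetical sketch.

```python
import re

# Hypothetical patterns for extracting application fields from a transcript.
FIELD_PATTERNS = {
    "monthly_income": re.compile(r"(?:earn|make|income of)\s+\$?(\d[\d,]*)"),
    "loan_purpose": re.compile(r"(?:for|to fix|to buy)\s+(?:my\s+)?([a-z ]+?)(?:\.|,|$)"),
}

def fill_application(transcript: str) -> dict:
    """Populate credit-application fields from a spoken-word transcript."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(transcript.lower())
        if match:
            fields[name] = match.group(1).strip()
    return fields

print(fill_application("I make $450 a month and need the loan to fix my roof."))
```

Even this toy version shows the shape of the problem: free-form speech goes in, a structured record a credit model can score comes out.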

Aviva co-founders, from left, Amran Frey, David Hernández and Filiberto Castro. Image Credits: Aviva

Unlike other fintechs that concentrate on large urban areas, Aviva is focusing on smaller communities, where the company can address the lack of trust in banks and predatory interest rates, and help users who may lack the technical means, like a smartphone, to purchase financial products directly.

Now buoyed by $2.2 million in pre-seed funding, the company is rolling out a network of physical and digital onboarding kiosks. The five-minute “video call booths” use biometrics and biosignals to determine the client’s risk and willingness to pay in order to underwrite the loans.

“No one has done anything for this segment in the last 25 years,” Castro told TechCrunch. “Much has been done in big cities, but by creating deep tech, AI and the video calls, we can establish elements to examine credit and lower interest rates. This has the potential to create a new middle class in Mexico and later across Latin America.”

The company makes money from the interest on its loans but is able to charge less than existing banks. Average interest rates in Mexico can reach triple digits; Aviva charges around 80%, which is still high but lower, Castro added.

Aviva is still very much in its early stages. It launched its product in November with 10 employees and has three kiosk locations, through which more than 500 customers have passed since launch. The kiosks are located in Chalco de Díaz, Ixtapaluca and Texcoco, towns about an hour’s drive from Mexico City. The company is also seeing a lower percentage of loan delinquencies than it initially expected, Castro said.

The pre-seed was led by Wollef Ventures, which was joined by Newtopia VC, Seedstars International Ventures, 500 Startups, Magna Capital VC, Xtraordinary VP and a group of angel investors.

With that new capital, Aviva is going to invest in building out its credit and underwriting system, preparing to launch the company’s own credit card and expanding its kiosks. In the future, Castro also sees the company providing a full banking offer to its customers.

“The credit card will give us a way to deposit loans if customers don’t have a bank account,” he said. “That is great for us because it shows we are tackling the right segment — people who don’t have any relationship with a bank.”

Early-stage Mexico fintech Aviva is making loans as easy as a video call by Christine Hall originally published on TechCrunch

Petals is creating a free, distributed network for running text-generating AI

BigScience, a community project backed by startup Hugging Face with the goal of making text-generating AI widely available, is developing a system called Petals that can run AI like ChatGPT by joining resources from people across the internet. With Petals, the code for which was released publicly last month, volunteers can donate their hardware power to tackle a portion of a text-generating workload and team up with others to complete larger tasks, similar to Folding@home and other distributed compute setups.

“Petals is an ongoing collaborative project from researchers at Hugging Face, Yandex Research and the University of Washington,” Alexander Borzunov, the lead developer of Petals and a research engineer at Yandex, told TechCrunch in an email interview. “Unlike … APIs that are typically less flexible, Petals is entirely open source, so researchers may integrate latest text generation and system adaptation methods not yet available in APIs or access the system’s internal states to study its features.”

Open source, but not free

For all its faults, text-generating AI such as ChatGPT can be quite useful — at least if the viral demos on social media are anything to go by. ChatGPT and its kin promise to automate some of the mundane work that typically bogs down programmers, writers and even data scientists by generating human-like code, text and formulas at scale.

But they’re expensive to run. According to one estimate, ChatGPT is costing its developer, OpenAI, $100,000 per day, which works out to an eye-watering $3 million per month.

The costs involved with running cutting-edge text-generating AI have kept it relegated to startups and AI labs with substantial financial backing. It’s no coincidence that the companies offering some of the more capable text-generating systems, including AI21 Labs, Cohere and the aforementioned OpenAI, have raised hundreds of millions of dollars in capital from VCs.

But Petals democratizes things — in theory. Inspired by Borzunov’s earlier work focused on training AI systems over the internet, Petals aims to drastically bring down the costs of running text-generating AI.

“Petals is a first step towards enabling truly collaborative and continual improvement of machine learning models,” Colin Raffel, a faculty researcher at Hugging Face, told TechCrunch via email. “It … marks an ongoing shift from large models mostly confined to supercomputers to something more broadly accessible.”

Raffel made reference to the gold rush, of sorts, that’s occurred over the past year in the open source text generation community. Thanks to volunteer efforts and the generosity of tech giants’ research labs, the type of bleeding-edge text-generating AI that was once beyond the reach of small-time developers suddenly became available, trained and ready to deploy.

BigScience debuted Bloom, a language model in many ways on par with OpenAI’s GPT-3 (the progenitor of ChatGPT), while Meta open sourced a comparably powerful AI system called OPT. Meanwhile, Microsoft and Nvidia partnered to make available one of the largest language systems ever developed, MT-NLG.

But all these systems require powerful hardware to use. For example, running Bloom on a local machine requires a GPU retailing in the hundreds to thousands of dollars. Enter the Petals network, which Borzunov claims will be powerful enough to run and fine-tune AI systems for chatbots and other “interactive” apps once it reaches sufficient capacity. To use Petals, users install an open source library and visit a website that provides instructions to connect to the Petals network. After they’re connected, they can generate text from Bloom running on Petals, or create a Petals server to contribute compute back to the network.

The more servers, the more robust the network. If one server goes down, Petals attempts to find a replacement automatically. While servers disconnect after around 1.5 seconds of inactivity to save on resources, Borzunov says that Petals is smart enough to quickly resume sessions, leading to only a slight delay for end-users.
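Petals’ actual routing lives in its open source library; as a simplified, hypothetical illustration of the failover behavior described above — try a server, and on failure pick a replacement automatically — the core idea looks something like this (all names and structures are invented for the sketch):

```python
class ServerDown(Exception):
    """Raised when a volunteer server is unreachable."""

def run_layer(server: dict, tokens: str) -> str:
    """Pretend to run one slice of the model on a volunteer server."""
    if not server["alive"]:
        raise ServerDown(server["name"])
    return tokens + f"->{server['name']}"

def generate(servers: list[dict], tokens: str) -> str:
    """Route the request through any live server, retrying on failure."""
    candidates = list(servers)
    while candidates:
        server = candidates.pop(0)
        try:
            return run_layer(server, tokens)
        except ServerDown:
            continue  # automatic failover to the next candidate
    raise RuntimeError("no live servers in the swarm")

swarm = [{"name": "a", "alive": False}, {"name": "b", "alive": True}]
print(generate(swarm, "prompt"))
```

The request only fails outright when every candidate server is down, which is why the network gets more robust as more volunteers join.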

Testing the Bloom text-generating AI system running on the Petals network. Image Credits: Kyle Wiggers / TechCrunch

In my tests, generating text using Petals took anywhere between a couple of seconds for basic prompts (e.g. “Translate the word ‘cat’ to Spanish”) to well over 20 seconds for more complex requests (e.g. “Write an essay in the style of Diderot about the nature of the universe”). One prompt (“Explain the meaning of life”) took close to three minutes, but to be fair, I instructed the system to respond with a wordier answer (around 75 words) than the previous few.

Image Credits: Kyle Wiggers / TechCrunch

That’s noticeably slower than ChatGPT — but also free. While ChatGPT doesn’t cost anything today, there’s no guarantee that that’ll be true in the future.

Borzunov wouldn’t reveal how large the Petals network is currently, save that “multiple” users with “GPUs of different capacity” have joined it since its launch in early December. The goal is to eventually introduce a rewards system to incentivize people to donate their compute; donors will receive “Bloom points” that they can spend on “higher priority or increased security guarantees” or potentially exchange for other rewards, Borzunov said.

Limitations of distributed compute

Petals promises to provide a low-cost, if not completely free, alternative to the paid text-generating services offered by vendors like OpenAI. But major technical kinks have yet to be ironed out.

Most concerning are the security flaws. The GitHub page for the Petals project notes that, because of the way Petals works, it’s possible for servers to recover input text — including text meant to be private — and record and modify it in a malicious way. That might entail sharing sensitive data with other users in the network, like names and phone numbers, or tweaking generated code so that it’s intentionally broken.

Petals also doesn’t address any of the flaws inherent in today’s leading text-generating systems, like their tendency to generate toxic and biased text (see the “Limitations” section in the Bloom entry on Hugging Face’s repository). In an email interview, Max Ryabinin, the senior research scientist at Yandex Research, made it clear that Petals is intended for research and academic use — at least at present.

“Petals sends intermediate … data through the public network, so we ask not to use it for sensitive data because other peers may (in theory) recover them from the intermediate representations,” Ryabinin said. “We suggest people who’d like to use Petals for sensitive data to set up their own private swarm hosted by orgs and people they trust who are authorized to process this data. For example, several small startups and labs may collaborate and set up a private swarm to protect their data from others while still getting benefits of using Petals.”

As with any distributed system, Petals could also be abused by end-users, either by bad actors looking to generate toxic text (e.g. hate speech) or developers with particularly resource-intensive apps. Raffel acknowledges that Petals will inevitably “face some issues” at the start. But he believes that the mission — lowering the barrier to running text-generating systems — will be well worth the initial bumps in the road.

“Given the recent success of many community-organized efforts in machine learning, we believe that it is important to continue developing these tools and hope that Petals will inspire other decentralized deep learning projects,” Raffel said.

Petals is creating a free, distributed network for running text-generating AI by Kyle Wiggers originally published on TechCrunch
