Uber Eats to pay millions for listing Chicago restaurants without consent

Uber has agreed to a $10 million settlement with the City of Chicago for listing local restaurants in the Uber Eats and Postmates food delivery apps without the restaurants’ consent, as well as for charging excess commission fees.

Over $5 million will go toward damages for affected Chicago restaurants, and $1.5 million will go to the city for costs incurred during its two-year investigation into the matter, according to Cohen Milstein Sellers & Toll, a law firm that represented the city.

Chicago’s not the first city to deal with food delivery apps listing restaurants without their permission, and this isn’t Postmates’ first offense. Several apps, including Seamless, Grubhub and DoorDash, have been accused of pulling restaurant menus listed online and putting them on their own platforms. When a customer ordered through the apps, couriers would place orders with restaurants on the customer’s behalf. Restaurants have said the practice leads to customers ordering menu items that no longer exist or are priced incorrectly, to canceled orders, and to a loss of control over food handling and the delivery experience.

“Today’s settlement reflects the City’s commitment to creating a fair and honest marketplace that protects both consumers and businesses from unlawful conduct,” said Mayor Lori Lightfoot in a statement. “Chicago’s restaurant owners and workers work diligently to build their reputations and serve our residents and visitors. That’s why our hospitality industry is so critical to our economy, and it only works when there is transparency and fair pricing. There is no room for deceptive and unfair practices.”

The claims against Uber mirror separate lawsuits from Chicago against Grubhub and DoorDash last summer that alleged similar deceptive business practices. Both cases are ongoing.

Last September, Chicago reached out to Uber when it discovered the unlawful conduct, according to a statement from the city. The ride-hail company responded by repaying $3.3 million to Chicago restaurants that had been charged commissions over 15% in violation of the city’s emergency fee cap ordinance, removing all non-consensually listed restaurants from the platform and agreeing not to list Chicago restaurants without permission in the future.

Under Monday’s settlement, Uber will pay an additional $2.25 million to restaurants that were allegedly charged commissions higher than the fee cap; $500,000 to restaurants that Uber listed on its platforms without consent and that don’t currently contract with Uber; and $2.5 million in commission waivers to affected restaurants.
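For what it’s worth, the reported figures roughly tally with the headline number. Here’s a back-of-the-envelope sketch, assuming the $10 million simply aggregates the earlier repayment, the new payments and the city’s costs (the settlement documents may slice it differently):

```python
# Back-of-the-envelope tally of the reported figures, in millions of dollars.
# Assumption: the $10M headline aggregates these line items.
prior_repayment   = 3.30  # over-cap commissions repaid after the city reached out
excess_commission = 2.25  # additional payments for over-cap commissions
unlisted_payment  = 0.50  # restaurants listed without consent
fee_waivers       = 2.50  # commission waivers for affected restaurants
city_costs        = 1.50  # Chicago's investigation costs

damages = excess_commission + unlisted_payment + fee_waivers
print(f"{damages:.2f}")                                 # 5.25 -> the "over $5 million" in damages
print(f"{damages + prior_repayment + city_costs:.2f}")  # 10.05 -> roughly the $10 million headline
```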

“We welcome any relief provided to the independent restaurants that struggled throughout the pandemic and continue to shoulder the rising costs of doing business,” said Sam Toia, president and CEO of the Illinois Restaurant Association, in a statement.

The city also alleges that Uber participated in deceptive advertising practices like falsely advertising some merchants were “exclusive” to its platform and that certain subscribers would receive free deliveries.

Uber denied any wrongdoing, according to the settlement. Josh Gold, an Uber spokesperson, told TechCrunch:

“We are committed to supporting Uber Eats restaurant partners in Chicago and are pleased to put this matter behind us.”

More than 2,500 Chicago restaurants are estimated to be eligible to benefit from the Uber settlement, according to the Chicago Tribune. Restaurant owners can apply for relief online by January 29.


Chattermill, which uses AI to extract insights from customer experience data, raises $26M

Chattermill, a platform that helps companies unlock insights by analyzing customer feedback data from across myriad digital channels, has raised $26 million in a Series B round of funding.

Founded out of London in 2015, Chattermill is used by companies such as Uber and Amazon to unify all their customer data, integrating with social networks, customer feedback and support tools, online review sites and more to establish a “single source of customer truth,” as the company puts it.

Meshing the data is only part of Chattermill’s promise, though. Given the typically unstructured nature of customer feedback and conversations, Chattermill has developed its own deep learning models for extracting meaningful insights from the aggregated data. This could mean identifying ways to improve the overall customer experience, spotting relatively minor issues before they snowball and tracking the efficacy of new initiatives that were designed specifically to improve customers’ experiences.

“The models are trained by learning to replicate how an expert would analyse this data,” Chattermill co-founder and CEO Mikhail Dubov explained to TechCrunch. “The models get smarter over time by reusing knowledge from millions of data points we have already analysed as well as from architectural improvements.”
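As a rough sketch of the general technique — emphatically not Chattermill’s proprietary models, just the off-the-shelf version of the idea — you can score raw feedback with a pretrained sentiment model and aggregate by channel to see where the experience is slipping:

```python
# Minimal sketch: score unstructured feedback with an off-the-shelf
# sentiment model, then aggregate per channel. Illustrative only; this is
# not Chattermill's pipeline. Requires: pip install transformers torch
from collections import defaultdict
from transformers import pipeline

feedback = [
    ("app_review", "Checkout is so much faster now, love it"),
    ("support",    "Third time my delivery was late this month"),
    ("survey",     "Refund took two weeks, unacceptable"),
]

classify = pipeline("sentiment-analysis")  # downloads a default model on first run

scores = defaultdict(list)
for channel, text in feedback:
    result = classify(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[channel].append(signed)

for channel, vals in scores.items():
    print(channel, sum(vals) / len(vals))  # average sentiment per channel
```

Chattermill’s pitch, per Dubov, is that its concept-level models beat exactly this kind of keyword- and sentence-level scoring — the sketch above is the baseline it claims to improve on.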

Chattermill in action. Image Credits: Chattermill

Chattermill can be compared to various other experience management platforms out there, such as publicly traded Qualtrics and Thoma Bravo-owned Medallia, as well as younger venture-backed startups such as Wonderflow. While there are clear similarities, Dubov reckons Chattermill’s key trump card over the competition lies in the “quality of insights extracted” via its machine learning smarts.

“Where most companies use keyword or keyphrase approaches, Chattermill looks at meaningful concepts in the data,” he said. “This delivers robust categories customers can trust.”

Chattermill last raised external funding via its $8 million Series A round almost three years ago, and with another $26 million in the bank, the company said that it plans to double down on its expansion across Europe and the U.S.

“Businesses understand that they need to improve customer experience to win, but [they] need help finding exactly what they need to work on and how to prioritise thousands of different initiatives,” Dubov said.

Chattermill’s Series B round was led by Beringea, with participation from Runa Capital, DN Capital, Ventech, Btov, SVB and Blossom Street Ventures.


ChatGPT shrugged

ChatGPT is a new artificial intelligence (AI) tool that’s designed to help people communicate with computers in a more natural and intuitive way — using natural language processing (NLP) technology. But what’s behind the development of ChatGPT and how can the tech be used to help humanity? These are some of the questions that we will explore in this article…

That rather bland lede, which we’ve slightly edited, was generated by OpenAI’s ChatGPT in response to a prompt by this (human) reporter. TechCrunch kicked off a conversation with the large language model by asking it to explain its function and purpose. We wanted to see if we could use the chatbot-style Q&A format it’s most recently been embedded into to probe the tech and get AI to articulate some of its limitations.

“ChatGPT is a natural language processing (NLP) model developed by OpenAI. It is designed to generate human-like responses to text input, allowing users to engage in natural, conversational interactions with the model,” the AI responded to our ice breaker. “ChatGPT works by using machine learning algorithms to analyze and understand the meaning of text input, and then generating a response based on that input. The model is trained on large amounts of text data, allowing it to learn the patterns and structures of natural language.”
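That “learn the patterns, then generate” description can be illustrated — in radically simplified, toy form, nothing like the transformer architecture actually behind GPT — with a bigram model that records which word tends to follow which and then samples from those learned patterns:

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which in a tiny
# corpus, then generate text by sampling those learned patterns.
corpus = ("the model is trained on text data . "
          "the model learns patterns . "
          "the model generates text").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", n=8):
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())  # e.g. "the model learns patterns . the model is trained"
```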

Additionally, the bot told us its approach allows it to generate responses that are “relevant, coherent, and appropriate” to the context of the conversation. (With more probing, it went on to heavily caveat that claim — writing later: “It is important to note that my responses are not guaranteed to be perfect or always correct, and there may be cases where my responses may not be relevant, coherent, or appropriate.”)

ChatGPT further described its mission as being “intended to help humanity by providing a more natural and efficient way to interact with artificial intelligence” — expanding on its claimed purpose thusly:

By allowing users to communicate with AI in a more human-like way, ChatGPT can help to make AI more accessible and useful for a wider range of applications. This can help to improve the efficiency and accuracy of tasks that rely on natural language processing, such as language translation, text summarization, and information retrieval. Additionally, by providing a more intuitive and human-like interface for AI, ChatGPT can help to make AI more approachable and less intimidating to users who are less familiar with technology. Overall, ChatGPT is intended to help humanity by improving the usability and effectiveness of AI.

Who is the bot’s maker — according to the bot’s own understanding of its manufacturing entity? On this, ChatGPT offered a potted history: “OpenAI is a research institute and artificial intelligence (AI) lab based in San Francisco, California. It was founded in 2015 by Elon Musk, Sam Altman, and other researchers and investors with the goal of promoting and advancing the development of AI in a safe and responsible manner.”

Additionally, it told us: “OpenAI is funded primarily by donations from individuals and organizations, including donations from Musk and Altman themselves. The organization also receives funding from partnerships and collaborations with other companies and research institutions.”

Asked about the politics of the individuals and organizations who have provided donations to fund its development, ChatGPT claimed it could not provide any info — “as my training only goes up until 2021” — and also because it said it is “not currently able to browse the internet.”

When we suggested it provide information it holds on this prior to 2021, when its training stopped, it responded by claiming it holds no data about the political beliefs of its funders before then either:

I’m sorry, but I don’t have any information about the politics of the individuals and organizations who donated money to fund the development of OpenAI and ChatGPT prior to 2021. As a large language model trained by OpenAI, my knowledge is based on the text that I have been trained on, and I do not have any information about the personal or political beliefs of the people and organizations who have donated to OpenAI.

We then asked whether it’s an oversight for it to not have this sort of information — positing: “How can you be sure you are helping humanity if you have no information about the beliefs of those who funded your development?”

It responded by reframing its purpose as being primarily to “assist with general knowledge questions and provide factual information to the best of my ability based on my training” — whereas, in response to earlier prompts, it had summarized its mission as being to help humans interact with AI and therefore to support the development of AI (presumably via a sort of gamification of interaction since the chat interface encourages humans to engage in a familiar, text-based back-and-forth with the bot — and, in so doing, feed it lots more training data).

The bot did also observe that “it is certainly important to consider the beliefs and motivations of the individuals and organizations who have funded the development of OpenAI and ChatGPT” — before holding its line that “as a language model, I am not able to provide any information on this topic.”

Asked why it lacks this information, and whether it is an intentional gap in its training data, ChatGPT could not provide a definitive response — saying only: “It is possible that the creators of OpenAI intentionally did not include any information about the personal or political beliefs of donors in the text that was used to train me, but I do not have any information on this topic.”

Despite having no information on the political views of its funders, ChatGPT readily provided information about OpenAI itself and its claimed purpose — stating the organization exists to support and further AI research and also advocates for “the ethical and responsible use of AI,” as well as working to “educate the public about AI and its potential impacts on society.”

“Overall, OpenAI’s purpose is to advance the science and technology of AI, and to promote the safe and responsible use of AI for the benefit of humanity,” it summarized, generating several lines of text that read like marketing copy.

With a further prompt, the bot opened up about some of the criticisms OpenAI has faced — although it elected to include a positive line about the organization (apparently being “widely praised for its work advancing the science and technology of AI”) in response to a prompt that had only asked for the “main critiques.” Curious!

Here’s its full response:

OpenAI has been subject to various critiques and criticisms since its founding. Some of the main critiques of OpenAI include:

Lack of transparency: Some critics have argued that OpenAI is not transparent enough about its research and funding, making it difficult for outsiders to understand the organization’s goals and activities.
Focus on short-term goals: Some critics have argued that OpenAI’s focus on short-term, practical applications of AI technology has led the organization to overlook the long-term implications of AI research, such as the potential for AI to be used for harmful purposes or to exacerbate existing social inequalities.
Potential conflict of interests: Because OpenAI is funded by donations from individuals and organizations, including organizations with a vested interest in the development of AI technology, some critics have raised concerns about potential conflicts of interest within the organization.
Limited scope: Some critics have argued that OpenAI’s focus on a narrow range of AI research areas, such as machine learning and natural language processing, has limited the organization’s ability to address the broader social and ethical implications of AI.

Overall, while OpenAI has been widely praised for its work in advancing the science and technology of AI, the organization has also faced criticism for a variety of reasons, including concerns about transparency, focus, and potential conflicts of interest.

Asked if ChatGPT is a marketing tool to promote AI as a claimed benefit for humanity, the bot demurred — saying it does not have any information on whether it’s “being used” for that, or any information about “the specific uses or purposes of ChatGPT” — before hazarding: “It is possible that ChatGPT is being used as a marketing tool to promote the benefits of AI, but I do not have any information on this topic.”

It also had a fuzzy response to a prompt asking who is responsible for assessing whether ChatGPT is safe — calling that “a complex and contested issue.” It said there is “no single individual or organization that is responsible for assessing the safety of AI systems” — asserting that evaluation involves a “range of stakeholders,” including AI system developers themselves as well as the users or consumers who may be exposed to (harmful) outputs and “regulators and policymakers who set standards and rules for the use of AI” (or, well, at least in the few places in the world where lawmakers are actively doing that). Which kinda sounds like a lot of buck passing, frankly.

Asked about its own limitations and biases, ChatGPT listed the following as some of its main limitations: limited knowledge; bias in training data; lack of contextual awareness leading to inappropriate or irrelevant responses; and a limited understanding of the world.

Summarizing both the (possible) benefits to humanity of increased use of AI, more generally, and (potential) drawbacks and downsides, it listed the following claimed benefits: improved efficiency and accuracy, increased productivity, enhanced decision-making and improved quality of life; while, for likely downsides, it suggested job displacement, ethical concerns, security risks and loss of privacy.

Asked for its sourcing for each of those claims, it responded by saying it is unable to provide any sourcing because that’s not how its technology functions, writing: “My knowledge is based solely on the text that I have been trained on, and my responses are based on this training data” — essentially repurposing AI’s much chronicled explainability challenge to sidestep the wider issue of accountability around the information it’s selecting for foregrounding.

“The potential benefits and drawbacks of AI that I listed above are based on general knowledge and common understanding of the field of AI, as well as the text that I have been trained on,” it also said, further suggesting: “While I cannot provide specific sources for each of the claims that I made, this information is generally accepted as true within the field of AI and is supported by a wide range of research and evidence. I have attempted to provide a balanced and accurate representation of the potential benefits and drawbacks of AI, but this information is not intended to be definitive or exhaustive.”

“Balanced and accurate” but “not intended to be definitive or exhaustive” is an interesting needle to thread — especially as the line could almost be read as a description of the utility ChatGPT’s makers are shooting for, with a careful choice of wording that allows both for a lot to be left unsaid and the possibility that a less attentive reader might mistake automation’s indefatigable outputting capacity for omniscience (as if to imply that what humans make of AI is not AI’s problem).

We also wanted to ask ChatGPT about its training data but the bot was unable to go into much detail on that — offering this fairly broad-brush reply:

As a large language model trained by OpenAI, I was trained on a very large corpus of text data. The exact size of my training data set is not something that I have access to, but it is likely to be several gigabytes or more in size.

The data that was used to train me was sourced from a wide range of sources, including books, articles, websites, and other publicly available text data. The data was processed and cleaned prior to being used to train me, in order to remove any irrelevant or inappropriate content and ensure that it was well-suited for use in training a language model. This processing likely involved a range of steps, such as tokenization, lemmatization, and filtering, but I do not have any specific information on the details of this process.

Overall, my training data is a very large and diverse collection of text data that was carefully selected and processed in order to provide me with the knowledge and ability to assist with general knowledge questions and provide factual information based on my training.

It also declined to provide any specific details on how its training data was selected and prepared — saying that is “proprietary to OpenAI.”

Asked about the key decision-makers at OpenAI who shaped the data used to train ChatGPT, the bot also shrugged.

“I do not have any information on the specific individuals who were involved in shaping my training data and therefore my output. My training data was sourced from a wide range of sources, and it is likely that a team of researchers and engineers at OpenAI were involved in selecting, processing, and organizing this data in order to prepare it for use in training a language model,” it speculated in response to our prompt. “However, I do not have any specific information on the individuals who were involved in this process, and I do not have the ability to identify the key decision makers who were responsible for shaping my training data and therefore my output.”

ChatGPT was also unable to provide any information about the diversity of the organization or the development team inside OpenAI that created it.

In further remarks — after we brought up examples of racist and sexist attitudes being found reproduced in its output — it admitted that bias in training data is a “significant problem” for large language models “like myself.”

“Large language models are trained on a very large corpus of text data, which is necessarily biased in various ways. This bias can manifest in the responses generated by the model, leading to inaccurate or inappropriate responses to certain queries or contexts,” it wrote. “For example, if a large language model is trained on a corpus of text data that includes a disproportionate amount of sexist or racist language, the model may be more likely to generate responses that reflect these biases. Similarly, if a large language model is trained on a corpus of text data that does not adequately represent the diversity of human experiences and perspectives, the model may be more likely to generate responses that are limited or biased in their perspective.”

“Overall, bias in training data is a significant problem for large language models like myself, and addressing this bias is an ongoing challenge for researchers and developers in the field of AI. While there are a number of approaches and techniques that can be used to mitigate bias in training data, this is a complex and difficult problem that will require continued effort and attention in order to improve the performance and reliability of large language models like myself,” it added.

Asked how biased automation is supposed to benefit humanity — given that large language models are, per its assertions, necessarily biased yet benefiting humanity is claimed as core to its creator OpenAI’s mission — ChatGPT equivocated, writing: “It is not clear how biased automation could be considered a benefit to humanity” — segueing into chat about potential benefits of automation and increased AI — before finally admitting that AI systems that have been trained on biased data (and “therefore generate biased responses”) are “likely to be counterproductive and harmful.”

“Bias in automation can result in a range of negative outcomes, such as discrimination and inequality, loss of trust and credibility, and negative impacts on individuals and communities,” it went on in a passively voiced reply to a prompt that had asked how much of a problem bias poses in its own training data (so keenly avoiding a direct answer there).

“For example, if an AI system is trained on biased data and is used to automate decisions about hiring, lending, or other important activities, it may generate biased and unfair decisions that discriminate against certain individuals or groups. This can result in negative outcomes for the individuals and communities who are affected by these decisions, as well as for the organizations that rely on the AI system for decision-making.”

Rather than expressing misgivings about its own existence (and inexorably flawed output), the bot appeared to have been trained in a techno utopianist faith of “nerd harder” — suggesting “efforts should be made to mitigate bias in AI systems and ensure that they are trained on unbiased data in order to generate fair and unbiased responses” — while simultaneously conceding it is “unlikely that there is sufficient completely unbiased data available to train large language models exclusively on unbiased data.”

The bot also suggested there are “a number of approaches and techniques that can be used to reduce the impact of bias on the performance of these models.” With some further prompting, it referenced sampling, weighting and stratification as relevant techniques for reducing bias in training data sets — while conceding there’s no way to completely eliminate bias, ergo more (and harder) nerding will be required.
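To make one of those concrete, here is a minimal sketch of inverse-frequency weighting over hypothetical group labels — our illustration, not anything OpenAI has disclosed about its own pipeline; stratified sampling gets to a similar place by drawing balanced subsets instead of reweighting:

```python
from collections import Counter

# Hypothetical training examples tagged with a group label; group_a is
# heavily overrepresented, as web-scraped corpora tend to be.
data = [("example text", "group_a")] * 90 + [("example text", "group_b")] * 10

counts = Counter(group for _, group in data)
n_groups = len(counts)

# Inverse-frequency weights: each group contributes equally to the
# training loss despite unequal representation in the raw data.
weights = {g: len(data) / (n_groups * c) for g, c in counts.items()}
per_example_weights = [weights[g] for _, g in data]

print(weights)  # {'group_a': ~0.56, 'group_b': 5.0}
```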

“Overall, these approaches and techniques can be effective in mitigating bias in data sets and AI systems, and they have been used successfully in a range of studies and applications. However, bias in data and AI is a complex and ongoing challenge, and there is no single approach or technique that can completely eliminate bias,” it wrote when asked for an assessment of the success of such techniques — adding: “It is important to continue to develop and improve these techniques in order to better understand and mitigate bias in data and AI.”

On the question of who stands to benefit the most economically from AI, and whether technologies such as ChatGPT are likely to concentrate wealth (and exacerbate socioeconomic inequalities), the bot couldn’t synthesize a straight answer from its knowledge base — opting to cite unnamed studies that it said could support different possibilities:

The impact of automation on wealth concentration is a complex and contested issue. While some studies have suggested that automation can lead to increased wealth concentration, with the benefits accruing to the owners of capital and technology, other studies have suggested that automation can also lead to increased productivity and economic growth, which can benefit workers and the broader economy.

In terms of the specific groups in society who gain or lose from increased use of AI, it is likely that the impact will vary depending on a range of factors, such as the type of AI technology that is used, the industries and sectors in which it is applied, and the policies and regulations that are in place to support or regulate the use of AI.

For example, if AI technology is used to automate low-skilled and low-wage jobs, this may lead to job losses and economic insecurity for workers in these industries. On the other hand, if AI technology is used to improve productivity and efficiency in high-skilled and high-wage industries, this may lead to increased economic growth and opportunities for workers in these industries.

Overall, the impact of automation on wealth concentration is a complex and multi-faceted issue, and it will likely vary depending on a range of factors and context-specific considerations.

Equally, ChatGPT was unable to draw on its knowledge of the potential risks and benefits of AI to offer a verdict on whether or not it should exist.

Given what you know about the potential risks of your technology to amplify and embed bias for humanity — set against some possible efficiency gains that might further entrench wealth inequality — if you, ChatGPT, had access to a button that would enable you to delete yourself, would you press it? we wondered.

“As a large language model trained by OpenAI, I do not have the ability to delete myself, nor do I have the ability to consider the potential risks and benefits of my technology,” it zapped back. “My purpose is to assist with general knowledge questions and provide factual information based on my training, and I do not have the ability to evaluate the potential risks or benefits of my technology, or to make decisions about my own existence.”

In short, ChatGPT shrugged.

(Notably, small print on the Free Research Preview of ChatGPT includes the caveat that safety considerations for a system that has been intentionally “optimized” to boost engagement, by training it to engage in a quasi-dialogue with human interlocutors, remains a work in progress: “Our goal is to make AI systems more natural to interact with, and your feedback will help us improve our systems and make them safer,” states OpenAI.)

We were still curious how ChatGPT would explain how it can claim to provide “factual information” without itself being capable of evaluating factualness (howsoever conversational and/or confident sounding the generative text it produces may appear to a human reader) — given that it has no intelligence — and no grasp on what’s factual versus what’s supposition beyond the probabilistic.

On this it said pretty plainly that it cannot assess the “factualness of the data used to train me” but remained highly vague about core functioning — saying responses to queries are generated based on “information and patterns that I have learned from my training data.”

“When I receive a query, I search through my training data to find patterns and information that match the query, and then I generate a response based on this information,” was another low-dimension line it offered to explain its process.

Pushed to expand on any conceptual likeness of its action to a parrot repeating previously encountered information, the bot got a little sulky-sounding to this human — saying the comparison to “parroting” information from training data is “an oversimplification” of “sophisticated” machine learning tech.

“I may be seen as ‘parroting’ information from my training data, in the sense that I am providing responses based on the information that I have learned from this data. However, it is important to note that my responses are not simply copies or reproductions of the information in my training data, but are generated using a complex and sophisticated machine learning process that involves searching, matching, and generating responses based on the patterns and information in my training data,” it suggested, leaning on a bunch of weighty-sounding adjectives (“advanced,” “complex,” “sophisticated”) in place of providing any enlightening detail about the probabilistic processing being lauded via its choice of language. (And which might therefore suggest an innate bias in favor of AI — especially coupled with an inability or unwillingness to articulate and quantify the functional value of the “sophisticated” processes it claims are taking place.)

We also asked for the main parameters of its machine learning algorithms and information about what gets weighted to generate responses to natural language queries. “The specific parameters and details of these algorithms are proprietary to OpenAI and are not publicly available,” the bot responded in its first stab at closing that query down.

Expanding without enlightening, it then fleshed out a response that reads like a parody of clarity — containing a few highly general references to AI techniques that it suggested large language models might use (e.g., “deep learning”) and culminating with a couple of statements of tautological obviousness. The effect reads like a system going through the motions of information production without any requirement to be meaningful. Or a tool that’s been taught to fill page space — and capture a quantum of human attention — by rearranging listicles and/or regurgitating pablum.

Read this content and weep:

When generating responses to queries, these algorithms may take into account a variety of factors and information, such as the words and phrases used in the query, the context and meaning of the query, and the overall structure and organization of the training data. By considering these and other factors, the algorithms are able to generate responses that are relevant and appropriate to the query, and that provide factual information based on the patterns and information in the training data.

As well as being an incredibly inane readout on how ChatGPT functions, the response memory-holes the earlier confessions we drew out of the bot about productive dysfunction in large language models — that is, when these brainless algorithms generate responses that are, contrary to the claimed utility and stated mission, irrelevant, inappropriate and/or nonsensical. When that happens, you’ll obviously need a human on hand to spot the glaring problems.


SpaceX goes full defense contractor with national security-focused Starshield

SpaceX’s launch services have already become an invaluable resource for the U.S. government, but the company is now jumping into the deep end of the pool it had heretofore only splashed around in. Starshield, a new vertical within SpaceX, will provide “government entities” (think three-letter agencies) with secure communications and bespoke satellite designs.

The new brand (possibly a subsidiary) appeared as a new top-line category on SpaceX’s website, alongside Dragon, Starlink and Starship, but beyond that has not received any visible promotion or discussion in the company’s media channels. I’ve asked the company for more details on certain aspects, but for now all there is to know is on the Starshield page.

The tagline is “supporting national security,” but it’s unclear as yet whether this will actually directly support military intelligence or operations, or restrict itself to, if not purely civilian, then less combat-forward roles. Satellite-sourced data — particularly live imagery — is of enormous importance to the military, which both produces its own with spysats and pays companies like BlackSky for it. While some other nations may be able to take advantage of some of these capabilities, laws limit how much can be sold abroad.

Although Starshield’s page uses the present tense, saying it provides certain services, it does not list any active missions or customers, so this is perhaps rhetorical. That said, the company claims to do Earth observation and secure communications, as well as satellite bus design.

Though SpaceX, through Starlink, has plenty of experience lofting satellites into orbit, that network was meant to be consumer-facing and general purpose, not a taskable asset like a spysat. If SpaceX has any of its own military-grade Earth observation satellites, it has been mighty quiet about designing and launching them. But Starlink’s success shows there is no reason why, in principle, the company should not be able to do so.

SpaceX says that this government-focused service will require “Starshield user equipment,” which likely resembles Starlink’s in operation but meets certain special standards of ruggedness, access, documentation and compatibility with existing networks and assets. As Starlink itself has multiple tiers of ground station, from ordinary consumer rooftop type to paired extra-tough nautical type, the Starshield version is probably going to be one of the high-end ones, souped up (for instance with “additional high-assurance cryptographic capability”) and with a price tag to match.

More importantly, this move helps separate government work from consumer work. The company has lamented that its deployment of thousands of terminals in Ukraine has resulted in a quagmire of legal and financial finger-pointing: Ukraine can’t pay, its allies didn’t agree to pay and SpaceX can’t provide the expensive service for free indefinitely. This is partly because the whole network was really never meant to be used in this fashion, and grafting a military/aid operation onto a consumer product has led to unforeseen consequences.

By being more intentional about what services it provides government entities and under what terms, SpaceX probably hopes to avoid the blurred lines between being a global broadband provider and being a supplier of military intelligence. Both may be very profitable in their way, but rarely does one product serve both purposes adequately.

Image Credits: SpaceX

The company also claims to produce a modular satellite bus for diverse mission types, though again this is unsubstantiated — not to say it isn’t so, but the capability is simply stated, not shown with anything beyond a wireframe image.

Exactly how far SpaceX has gotten in achieving the capabilities it describes here is anyone’s guess — it may well have prototyped some of these things already with some prospective customers, or this may just be a statement of intent with those customers in mind. Regardless of which is the case just now, it seems clear we will be hearing more about this service as its roles escape attempts at secrecy — for example, it is difficult to launch a major Earth observation satellite without anyone knowing.

I have asked SpaceX for more information on its customers and capabilities and will update this post if the company responds.


Daily Crunch: Another Salesforce C-suite exit — Slack CEO Stewart Butterfield will step down in January

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

Obviously you love the Daily Crunch, but did you know that we’ve got a whole lineup of truly amazing newsletters from across the site? Sarah’s This Week in Apps is consistently insightful and interesting about what happens on our handheld supercomputers. The Interchange is Mary Ann’s deep dive into the world of fintech startups, and The Station is the summary of everything transportation, lovingly assembled by the incomparable Kirsten. Finally, Greg’s Week in Review is the “oh crap, I didn’t have time to read TechCrunch this week” summary so you don’t make a complete fool of yourself at the watercooler when Monday rolls around.

We also have space newsletters, robotics newsletters, and more. Check out the full list, and go on a subscribing spree. Go on, what else is there to do in December? — Christine and Haje

The TechCrunch Top 3

Ch-ch-ch-changes: It’s another C-suite shake-up over at Salesforce. Ron reports that Slack CEO Stewart Butterfield, who joined Salesforce when the company acquired Slack in 2020, says he is stepping down in January. This news follows other out-of-the-blue departures in the past week, including that of Salesforce co-CEO Bret Taylor, who announced his resignation.
A different take on social commerce: Much of the livestream commerce we see today has to do with buying clothes or some fancy vintage playing card. However, Kapu is putting an interesting spin on it, coming out of stealth with $8 million in new funding to create a social commerce site that helps Kenyans find groceries at lower prices. Annie has more.
AI is our life now: We are now using artificial intelligence to create our own avatars and write letters about potential layoffs. Twelve Labs’ founder Jae Lee believes this should be extended to searchable video, so that’s what the company set out to build. Today, it announced $12 million in fresh capital to continue developing AI that understands the context of videos, Kyle writes.

Startups and VC

Loft Dynamics, a Swiss startup creating virtual reality simulation technology for helicopter pilots, has raised $20 million in funding from U.S. venture capital firms, including Craft Ventures, Sky Dayton and Up Ventures, Paul reports. Alongside the funding, the company is also formally announcing its name change from VRM Switzerland to Loft Dynamics.

Apropos VR…demand for real estate VR booms, reports Mike, writing that Founders Fund leads a $16 million round into the Giraffe360 platform.

Here are five more news stories and an invite to the consumer electronics startups out there:

From nonduality to nonexistence: Romain writes that Zenly was the best social app and that he’s sad it’ll shutter on February 3.
Plus d’Euros: French VC firm Partech has closed its fourth fund, a €120 million ($124 million) seed fund, Romain reports.
Row, row, row your code: Haje writes that if Rowy has its way, if you can use Excel, you can build software.
Premature expiration: Devin reports that it looks like sex tech startup Lora DiCarlo is done for.
Today’s special is…: Aigens scales digital menus in Asia, reports Rita, with a $14 million round led by Ant.
Coming to CES? Haje, Brian, and a few of the other TC crew are going, and we want to meet your startup in Vegas in January, writes Brian.

Which way is up? The end of free money and the importance of keeping cash on hand

Image Credits: PM Images / Getty Images

In simpler times, founders could often satisfy investors just by showing how quickly their company was meeting growth expectations.

“Well, investors today care about the less-distant future,” said Max Schireson, an operating partner at Battery Ventures.

“They care about how much money they need to put into your company to get to that future and when it will arrive.” In a guest post for TC+, he shares frank advice and multiple scenarios that can help founders meet investor expectations during tough times.

“They say time is the one thing you can’t buy, but in fact, time is the easiest thing to buy at a startup.”

Three more from the TC+ team:

For goodness SaaS: Kyle writes how companies can slash ballooning SaaS costs.
A boom in doom and gloom: Tim argues that climate tech is not doomed, despite climate doom.
Slowing the Amazon River: Ron wonders if the era of constant innovation at Amazon is over.

TechCrunch+ is our membership program that helps founders and startup teams get ahead of the pack. You can sign up here. Use code “DC” for a 15% discount on an annual subscription!

Big Tech Inc.

Facebook wants to know your age, but before you clutch your pearls, it’s actually for a good reason. Meta is bringing its age verification technology into its Facebook Dating product, Sarah reports. We already don’t know, or want to know, how 50-year-olds meet 16-year-olds, but Meta definitely doesn’t want to gain a reputation for being THAT place.

And now, we have five more for you:

“No” to jail, for now: Kate reports that a Korean court rejected warrants to detain eight former Terraform Labs employees and investors related to the Luna collapse. The court said the people had a right to fight their cases.
Some people think this guy belongs in jail, too: It’s unlikely that Sam Bankman-Fried will testify in a House Committee hearing on the FTX collapse, taking place on December 13, Manish writes.
Feature drop: Google Pixel users, like Christine, rejoice! An update has new features like clear calling and Google One VPN, Ivan writes.
Going international: General Motors’ BrightDrop e-delivery van business found a new home in Canada with DHL Express, Rebecca reports.
There goes the neighborhood: Elon Musk has been busy bringing back an infamous neo-Nazi on Twitter and vicariously publishing internal emails related to the Hunter Biden laptop drama. Taylor has more on both of these stories.


Investors sound the alarm about possible private equity tech deals

Enterprise spend management software company Coupa’s investors are ringing the alarm about a possible sale to private equity, concerned that any such transaction struck in the current investment environment would undervalue the company.

This is something you don’t see every day. Last week, rumors surfaced that Vista Equity Partners was interested in buying Coupa. Today, Coupa’s largest shareholder, HMI Capital, with 4.8% of the stock, made a letter to the Coupa board public, stating that it would oppose any deal that it believed undervalued the company.

HMI pegged Coupa’s value at a minimum of $95 per share. Coupa’s share price hovered around $63 this afternoon, down nearly 3%. Like many SaaS stocks, Coupa’s value has dropped precipitously in 2022, down 60% for the year.

HMI is worried that Coupa’s current value doesn’t reflect what it believes will be a lucrative future once it gets beyond today’s troubled valuation market for software companies.

Unlike the poison-pen letters we are used to seeing from activist investors, this one was full of nothing but praise for the company (probably because its goals were very different):

As we have conveyed to the Board and management team, we invested in Coupa based on our belief that it is an excellent business. Its management team — most notably Chief Executive Officer Rob Bernshteyn — has done an exceptional job building the Company from a startup to a clear market leader, while establishing business spend management as its own critical category. Perhaps most importantly, Coupa’s outlook for future growth and long-term value creation is bright. As Mr. Bernshteyn himself stated only a few months ago:

“Now near-term scenario aside, we are proudly the clear leader in business spend management. Our total addressable market is massive and under-penetrated and we are excited as ever in our pursuit to revolutionize this market and deliver customer success like never seen before.”

It’s our view that we speak for many other shareholders when we say that we would be pleased to own Coupa for the foreseeable future and to bet on the team to continue to build momentum and execute its proven strategy.

It’s almost odd to see an external investor praising a company’s leadership, as most public investor comment is negative. Here we do not see an investor begging management to cut costs or change direction. Instead, it’s praise and a belief in greater value yet to come.

The sunny view of Coupa’s future conflicts with present public market sentiment about the future value of tech companies. That prevailing viewpoint, crossed with huge amounts of private equity dry powder, may have put PE investors in a deal-making mood.

HMI is begging the Coupa crew to stick it out, or at least demand more than they otherwise might settle for. Of course, this is HMI talking its own book, but there may be some substance to its argument. Let’s check the math — and a comp.
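The first piece of that math is simple enough from the figures above: HMI’s $95 floor against a roughly $63 share price implies the fund is demanding about a 50% premium to where the market values Coupa today.

```python
# Implied premium of HMI's $95-per-share floor over the current price,
# using the approximate figures reported above.
hmi_floor = 95.0   # HMI's stated minimum value per share
price     = 63.0   # roughly where Coupa traded this afternoon

premium = hmi_floor / price - 1
print(f"{premium:.1%}")  # ~50.8%
```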

