Automotus raises $9M to scale automated curb management tech

The new mobility landscape has made curb space in cities a hot commodity. No longer are curbs just for buses, taxis, deliveries and parking. Now those traditional use cases have to contend with bike lanes, ride-hail, same-day deliveries, dockless vehicles and more. As a result, cities and investors are starting to prioritize software that helps manage curb space.

Enter Automotus, a four-year-old startup that has just closed a $9 million seed round to advance its automated curb management solution. The company says its tech can reduce congestion and emissions by up to 10%; reduce double-parking hazards by 64%; increase parking turnover by 26%; and increase parking revenue for cities by over 500%.

Automotus works with cities like Santa Monica, Los Angeles, Pittsburgh, Omaha and Bethlehem to automate payments for vehicle unloading and parking, enforce curb violations and manage preferred loading zones and discounted rates for commercial EVs, the startup said.

“We also integrate with other mobility services providers to help cities get a more comprehensive view of how the public right of way is being used and by which modes for planning, policy, and pricing efforts,” Jordan Justus, CEO of Automotus, told TechCrunch.

In March 2021, Automotus raised $1.2 million in seed funding, so the company has managed to tack on an additional $7.8 million in the intervening year and a half. The most recent funds came from City Rock Ventures, Quake Capital, Bridge Investments, Unbridled Ventures, Keiki Capital, NY Angels, Irish Angels, SUM Ventures, and LA’s Cleantech Incubator Impact Fund.

“The bulk of the funding will be used to execute and support deployments in at least 15 new cities coming online in 2023,” said Justus. “We have a big year of launches ahead of us and are laser-focused on delivering the best possible solutions for our clients and continuing to scale up previous pilots.”

While Automotus largely offers a software-as-a-service product, installing the right hardware is an important element of collecting data. In its partner cities, the startup deploys cellular-enabled cameras equipped with Automotus’s proprietary computer vision technology. The cameras are mounted on traffic and street lights in areas with heavy loading and unloading activity or in zero-emission delivery zones.

With Automotus’s tech, there’s no need to download mobile apps or use meters. The cameras capture images of license plates and automatically collect data, issue invoices for parking or send out citations if a vehicle has violated the city’s regulations. The technology blurs any faces and de-identifies data to ensure the privacy of street users.

Automotus raises $9M to scale automated curb management tech by Rebecca Bellan originally published on TechCrunch

Avoid 3 common sales mistakes startups make during a downturn

More than 150,000 workers have lost their jobs in the layoffs that have swept across the tech landscape since June. Constant news cycles have analyzed every aspect of these staff reductions for meaning and lessons. How did we get here? How are companies managing employees? Are there more layoffs on the way?

And, critically, what’s next for tech? Investors are now demanding profitability over growth, and this abrupt shift in expectations has left companies facing difficult decisions with no playbook. Without the liberty a low-cost capital environment affords, investors now treat new ventures that promise uncertain returns as a thing of the past, or at least a much smaller focus.

What every company needs now is efficient sales.

But there is a big difference between knowing that you need efficient revenue and knowing how to get it. Leaner teams, fewer resources, and a tough macro environment mean that CROs are forced to make big changes to budgets, staffing and how they market and sell.

But maintaining revenue while the CFO is cutting costs by 5%-20% is not an easy task for anyone — and doing more of the same won’t get you there.

The unfortunate truth is that unless you move beyond the same old buying group, you won’t move the needle.

The biggest mistakes to avoid

Preliminary data from Databook shows that an unusually high percentage of companies globally are in the midst of shifting their strategic priorities. Since these are typically multiyear commitments, this unprecedented shift dramatically changes the sales landscape for tech startups.

Holding tight to traditional sales incentives and levers won’t yield the step change that is needed to win.

Don’t raise pricing

Most startups are reliant on VC funding, and in today’s market, VCs are looking for a clear path to profitability. One seemingly “easy” way to improve margins is to increase pricing.

This is a fix you can only try once; you don’t want to keep raising prices in a competitive market. At best it’s a temporary workaround, and it can easily backfire: higher prices during a downturn can erode customer trust over the long run and result in fewer renewals when there is less budget available.

Avoid 3 common sales mistakes startups make during a downturn by Ram Iyer originally published on TechCrunch

A brief history of diffusion, the tech at the heart of modern image-generating AI

Text-to-image AI exploded this year as technical advances greatly enhanced the fidelity of art that AI systems could create. Controversial as systems like Stable Diffusion and OpenAI’s DALL-E 2 are, platforms including DeviantArt and Canva have adopted them to power creative tools, personalize branding and even ideate new products.

But the tech at the heart of these systems is capable of far more than generating art. Called diffusion, it’s being used by some intrepid research groups to produce music, synthesize DNA sequences and even discover new drugs.

So what is diffusion, exactly, and why is it such a massive leap over the previous state of the art? As the year winds down, it’s worth taking a look at diffusion’s origins and how it advanced over time to become the influential force that it is today. Diffusion’s story isn’t over — refinements on the techniques arrive with each passing month — but the last year or two especially brought remarkable progress.

The birth of diffusion

You might recall the wave of deepfake apps from several years ago, apps that inserted people’s portraits into existing images and videos to replace the original subjects with realistic-looking substitutes. Using AI, the apps would “insert” a person’s face, or in some cases their whole body, into a scene, often convincingly enough to fool someone at first glance.

Most of these apps relied on an AI technology called generative adversarial networks, or GANs for short. GANs consist of two parts: a generator that produces synthetic examples (e.g. images) from random data and a discriminator that attempts to distinguish between the synthetic examples and real examples from a training dataset. (Typical GAN training datasets consist of hundreds to millions of examples of things the GAN is expected to eventually capture.) Both the generator and discriminator improve in their respective abilities until the discriminator is unable to tell the real examples from the synthesized examples with better than the 50% accuracy expected of chance.
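To make the adversarial setup concrete, here is a minimal GAN training step sketched in PyTorch. The network sizes, latent dimension and data shapes are illustrative placeholders rather than details of any system mentioned above; the point is only the alternating generator/discriminator updates the paragraph describes.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed shapes)

# Generator: maps random noise to a synthetic example.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
# Discriminator: outputs a real-vs-fake logit for each example.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch = real_batch.size(0)

    # 1) Discriminator update: label real examples 1, synthetic examples 0.
    noise = torch.randn(batch, latent_dim)
    fake = generator(noise).detach()  # don't backprop into the generator here
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator update: try to make the discriminator call fakes "real".
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Training ends, in the ideal case, when the discriminator’s accuracy falls to the 50% coin-flip level; the instability the next paragraphs describe comes from these two updates pulling against each other.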

Sand sculptures of Harry Potter and Hogwarts, generated by Stable Diffusion. Image Credits: Stability AI

Top-performing GANs can create, for example, snapshots of fictional apartment buildings. StyleGAN, a system Nvidia developed a few years back, can generate high-resolution head shots of fictional people by learning attributes like facial pose, freckles and hair. Beyond image generation, GANs have been applied to the 3D modeling space and vector sketches, showing an aptitude for outputting video clips as well as speech and even looping instrument samples in songs.

In practice, though, GANs suffered from a number of shortcomings owing to their architecture. The simultaneous training of generator and discriminator models was inherently unstable; sometimes the generator “collapsed” and outputted lots of similar-seeming samples. GANs also needed lots of data and compute power to run and train, which made them tough to scale.

Enter diffusion.

How diffusion works

Diffusion takes its inspiration, and its name, from physics, where diffusion is the process by which something moves from a region of higher concentration to one of lower concentration, like a sugar cube dissolving in coffee. Sugar granules in coffee are initially concentrated at the top of the liquid but gradually become distributed throughout.

Diffusion systems borrow from diffusion in non-equilibrium thermodynamics specifically, where the process increases the entropy — or randomness — of the system over time. Consider a gas — it’ll eventually spread out to fill an entire space evenly through random motion. Similarly, data like images can be transformed into a uniform distribution by randomly adding noise.

Diffusion systems slowly destroy the structure of data by adding noise until there’s nothing left but noise.

In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in machine learning aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.
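As a rough illustration of those two processes, here is a minimal DDPM-style sketch in PyTorch. The noise schedule, the flat (batch, dim) data layout and the `model` that predicts noise are all hypothetical placeholders, not the internals of any particular system; the sketch only shows structure being destroyed by noise and a network being trained to undo it.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # assumed noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

def add_noise(x0, t):
    """Forward diffusion: mix Gaussian noise into clean data x0 at step t."""
    eps = torch.randn_like(x0)
    a = alphas_bar[t].sqrt().view(-1, 1)          # how much signal survives
    s = (1.0 - alphas_bar[t]).sqrt().view(-1, 1)  # how much noise is added
    return a * x0 + s * eps, eps

def training_loss(model, x0):
    """Train the reverse process: predict the noise that was added."""
    t = torch.randint(0, T, (x0.size(0),))
    xt, eps = add_noise(x0, t)
    eps_pred = model(xt, t)  # the network that learns "reverse diffusion"
    return torch.mean((eps_pred - eps) ** 2)
```

At step T the sample is effectively pure noise; a model that can predict the added noise at every step can then be run backward, step by step, to recover data from noise.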

Image Credits: OpenBioML

Diffusion systems have been around for nearly a decade. But a relatively recent innovation from OpenAI called CLIP (short for “Contrastive Language-Image Pre-Training”) made them much more practical in everyday applications. CLIP classifies data — for example, images — to “score” each step of the diffusion process based on how likely it is to be classified under a given text prompt (e.g. “a sketch of a dog in a flowery lawn”).

At the start, the data has a very low CLIP-given score, because it’s mostly noise. But as the diffusion system reconstructs data from the noise, it slowly comes closer to matching the prompt. A useful analogy is uncarved marble — like a master sculptor telling a novice where to carve, CLIP guides the diffusion system toward an image that gives a higher score.
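That scoring-and-nudging loop can be sketched schematically. The snippet below assumes a CLIP-style model exposing `encode_image` and `encode_text` (as in OpenAI’s open source release) and a hypothetical `denoise_step` standing in for one step of a diffusion sampler; real guided samplers fold the gradient into the sampler’s math more carefully, so treat this purely as an illustration of CLIP guidance.

```python
import torch

def clip_score(clip_model, image, text_tokens):
    # Cosine similarity between image and text embeddings: higher means
    # the image better matches the prompt.
    img = clip_model.encode_image(image)
    txt = clip_model.encode_text(text_tokens)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img * txt).sum()

def guided_step(x, t, denoise_step, clip_model, text_tokens, scale=100.0):
    # One reverse-diffusion step, nudged in the direction that raises
    # the CLIP score of the current sample.
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(clip_score(clip_model, x, text_tokens), x)[0]
    return (denoise_step(x, t) + scale * grad).detach()
```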

OpenAI introduced CLIP alongside the image-generating system DALL-E. Since then, it’s made its way into DALL-E’s successor, DALL-E 2, as well as open source alternatives like Stable Diffusion.

What can diffusion do?

So what can CLIP-guided diffusion models do? Well, as alluded to earlier, they’re quite good at generating art — from photorealistic art to sketches, drawings and paintings in the style of practically any artist. In fact, there’s evidence suggesting that they problematically regurgitate some of their training data.

But the models’ talent — controversial as it might be — doesn’t end there.

Researchers have also experimented with using guided diffusion models to compose new music. Harmonai, an organization with financial backing from Stability AI, the London-based startup behind Stable Diffusion, released a diffusion-based model that can output clips of music by training on hundreds of hours of existing songs. More recently, developers Seth Forsgren and Hayk Martiros created a hobby project dubbed Riffusion that uses a diffusion model cleverly trained on spectrograms — visual representations — of audio to generate ditties.

Beyond the music realm, several labs are attempting to apply diffusion tech to biomedicine in the hopes of uncovering novel disease treatments. Startup Generate Biomedicines and a University of Washington team trained diffusion-based models to produce designs for proteins with specific properties and functions, as MIT Tech Review reported earlier this month.

The models work in different ways. Generate Biomedicines’ model adds noise by unraveling the amino acid chains that make up a protein, then assembles random chains into a new protein guided by constraints specified by the researchers. The University of Washington model, on the other hand, starts with a scrambled structure and uses information, provided by a separate AI system trained to predict protein structure, about how the pieces of a protein should fit together.

Image Credits: PASIEKA/SCIENCE PHOTO LIBRARY/Getty Images

They’ve already achieved some success. The model designed by the University of Washington group was able to find a protein that can attach to the parathyroid hormone — the hormone that controls calcium levels in the blood — better than existing drugs.

Meanwhile, over at OpenBioML, a Stability AI-backed effort to bring machine learning-based approaches to biochemistry, researchers have developed a system called DNA-Diffusion to generate cell-type-specific regulatory DNA sequences — segments of nucleic acid molecules that influence the expression of specific genes within an organism. DNA-Diffusion will — if all goes according to plan — generate regulatory DNA sequences from text instructions like “A sequence that will activate a gene to its maximum expression level in cell type X” and “A sequence that activates a gene in liver and heart, but not in brain.”

What might the future hold for diffusion models? The sky may well be the limit. Already, researchers have applied diffusion to generating videos, compressing images and synthesizing speech. That’s not to suggest diffusion won’t eventually be replaced by a more efficient, more performant machine learning technique, just as GANs were replaced by diffusion. But it’s the architecture du jour for a reason; diffusion is nothing if not versatile.

A brief history of diffusion, the tech at the heart of modern image-generating AI by Kyle Wiggers originally published on TechCrunch

FTX co-founder Gary Wang and Alameda’s Caroline Ellison plead guilty to criminal charges

The FTX/Alameda saga continues, with news late Wednesday that two key Sam Bankman-Fried associates have been charged with federal criminal offenses in the U.S.: Former Alameda CEO Caroline Ellison and FTX co-founder Gary Wang both pleaded guilty to multiple charges and accepted plea agreements that offer reduced sentences in exchange for “substantial” assistance in ongoing investigations into wrongdoing at FTX/Alameda.

Meanwhile, Sam Bankman-Fried was also extradited to the U.S. from the Bahamas on Wednesday, facing fraud suits from the SEC and CFTC as well as federal criminal charges. When Damian Williams, U.S. Attorney for the Southern District of New York, announced the charges at a press event last week, he noted that his office was “not done” levying additional charges, and now we know Ellison and Wang were at least some of the individuals he was referring to at the time.

Ellison and Wang are likely to be key witnesses for the feds in the SBF case, given that they are best positioned to know whether SBF was aware that FTX customer funds were being used to cover Alameda’s risky crypto trading bets.

This might not be the end of the charges for individuals at FTX and Alameda, either: Williams reiterated at the press conference announcing the charges against Ellison and Wang that if anyone else is considering coming forward to assist authorities in their prosecution of the case in exchange for possible leniency, now is the time.

Both Ellison and Wang also face civil penalties from the SEC and CFTC, announced alongside the criminal charges.

FTX co-founder Gary Wang and Alameda’s Caroline Ellison plead guilty to criminal charges by Darrell Etherington originally published on TechCrunch
