Zipline is now the national drone service provider for Rwanda

Zipline got its start six years ago using its autonomous electric drones to deliver blood in Rwanda. Now, the logistics and drone delivery startup is expanding its Rwandan government partnership with a lofty aim: complete nearly 2 million instant deliveries and fly more than 200 million autonomous kilometers in the country by 2029.

The government of Rwanda and Zipline announced Thursday an expanded partnership that will add new delivery sites in rural and urban locations throughout the country — a move that is expected to triple its delivery volume.

The idea is to use Zipline to shore up Rwanda’s healthcare supply chain, address malnutrition and support the country’s eco-tourism industry, according to Rwanda Development Board CEO Clare Akamanzi, who touted this as a “national drone service.”

Deliveries will now include medicine, medical supplies, nutrition and animal health products. The deal also gives any government agency access to Zipline’s services, including the Ministry of Agriculture and Animal Resources, the Ministry of Information Communication Technology, the Rwanda Development Board, Rwanda Medical Supply and the National Child Development Agency.

The deal is a validation for Zipline and could help it convince other countries to strike similar nationwide partnerships. Zipline operates in Ghana, the U.S., Nigeria and Japan, and will launch in Côte d’Ivoire and Kenya soon, according to the company. This is the first time a government has tapped Zipline to provide a national drone service.

Zipline, which was founded in 2014, developed the entire ecosystem, from the drones and logistics software to the launch and landing systems. Its operations were limited in the beginning, starting in Rwanda with a focus on blood and vaccines and then expanding into Ghana. In the past year, the company has expanded its operational footprint and delivery volume — growth that was powered in part by $250 million in venture capital. (The company has raised $486 million to date.)

Zipline has delivered more than 450,000 packages to date, with 215,000 deliveries occurring this year alone. The company has also snagged a number of partnerships in the past two years that signal aspirations to expand within and beyond healthcare. Zipline has partnerships with Toyota Group and UPS; it delivers medical equipment and personal protective gear for Novant Health in North Carolina and health and wellness products for Walmart.

Zipline is now the national drone service provider for Rwanda by Kirsten Korosec originally published on TechCrunch

AWS, Meta, Microsoft and TomTom launch the Overture Maps Foundation

The Linux Foundation today announced the launch of the Overture Maps Foundation, a new nonprofit organization that aims to enable developers to build new mapping products thanks to its interoperable map data and new tooling. The new organization was founded by AWS, Meta, Microsoft and TomTom, but the organization stresses that it is “open to all communities with a common interest in building open map data.”

When you think of open-source mapping, chances are that OpenStreetMap will be top of mind. After all, that project, too, is shepherded by a not-for-profit organization, and Microsoft and Meta are among its sponsors. Overture notes that it plans to integrate with existing projects like OpenStreetMap but also to create new map data based on computer vision and other AI/ML techniques.

But it’s important to stress that the goal here is not to create a new map to compete with the likes of Google Maps or OpenStreetMap. It’s about the layer above that: enriching existing base maps and allowing developers to build new products on top of them.

You’re not going to launch Overture Maps on your phone to get walking directions. Instead, the organization, for example, plans to define a common data schema and entity reference system for describing this data and run a quality assurance process over it to ensure that it can detect errors and potential data vandalism.
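To make the quality-assurance idea concrete, here is a purely illustrative sketch (Overture’s actual schema was still being defined at the time, so the `Place` entity and its fields below are hypothetical): a validator checks each map entity against basic structural rules before the data reaches downstream consumers, which is one way errors and vandalism could be caught automatically.

```python
from dataclasses import dataclass


@dataclass
class Place:
    """A hypothetical map entity -- not Overture's real schema."""
    name: str
    lon: float
    lat: float


def validate(place: Place) -> list:
    """Return a list of schema violations for a single map entity."""
    errors = []
    if not place.name.strip():
        errors.append("empty name")
    # Longitude and latitude must fall in their valid geographic ranges.
    if not -180.0 <= place.lon <= 180.0:
        errors.append("longitude out of range")
    if not -90.0 <= place.lat <= 90.0:
        errors.append("latitude out of range")
    return errors


# A feature with a corrupted longitude -- the kind of error (or vandalism)
# a QA pass would flag.
print(validate(Place(name="Cafe", lon=200.0, lat=48.2)))
```

A real pipeline would of course cover far more (geometry validity, entity cross-references, change-rate anomalies), but the shape is the same: mechanical checks applied uniformly across contributed data.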

As Jim Zemlin, the Linux Foundation’s executive director, told me, Overture fits into a larger trend of organizations launching open data initiatives to bring the same kind of open-source innovation they have seen in software to data.

“We’ve had several other open data initiatives, but this is one of the larger and I would say most ambitious open data initiatives that we’re bringing to the [Linux] Foundation,” said Zemlin. “We were approached by the founding companies of this initiative — so Meta, AWS, Microsoft and TomTom — who have an ambition to really create a set of open data for mapping that can allow for greater innovation in what they do, which is providing map services and geolocation data and so forth.”

It’s still very early days for this project. The Overture members are currently in the process of setting up the governance structure and the various committees that will make up their project. That also means that the group is only starting to define some of the data schemas and the kind of tooling it wants to develop.

“Microsoft is committed to closing the data divide and helping organizations of all sizes to realize the benefits of data as well as the new technologies it powers, including geospatial data,” said Russell Dicker, Corporate Vice President, Product, Maps and Local at Microsoft. “Current and next-generation map products require open map data built using AI that’s reliable, easy-to-use and interoperable.”

Microsoft, of course, has long offered Bing Maps (which mostly uses data from TomTom and OpenStreetMap for its base map), which you probably never use. The company’s most interesting use of Bing Maps is likely its Flight Simulator, though, which takes this base data and adds photogrammetry based on satellite imagery and procedurally generated buildings on top of the base map.

It’s also worth noting that TomTom, which you probably mostly remember from its stand-alone GPS hardware back in the day when that was still a thing, recently announced a new initiative to build a new map and development platform.

AWS may seem like the odd one out here, but the cloud giant has long had an interest in geospatial data, though it never offered its own maps and relied on partners like Esri and HERE Technologies. As Zemlin noted, the organization is already talking to a number of other potential partners but decided to launch with this initial group — in part, because at some point, you do have to make a project like this public to bring on additional members and build an ecosystem.

AWS, Meta, Microsoft and TomTom launch the Overture Maps Foundation by Frederic Lardinois originally published on TechCrunch

Artifact wants to record your family history in podcast-like audio recordings

After Ross Chanin’s grandfather died, Chanin mourned not only him, but the fact that he’d never gotten a chance to hear more about his grandfather’s life. Over a conversation with a journalist friend, George Quraishi, it became clear to Chanin that Quraishi’s skill set — interviewing and audio editing — could be conducive to capturing a family’s history.

Chanin and Quraishi started conducting interviews for friends and family and recruited software engineers Martin Gouy and Moncef Biaz to build apps to make it easier to record remote interviews and play them back on the web. Convinced that they had the seeds of a business, Chanin and Quraishi decided to apply to Y Combinator and were accepted into the Summer 2020 batch.

Today, their startup — Artifact — has over 10,000 customers across 15 English-, Spanish- and French-speaking countries. It’s raised $5 million inclusive of a seed round led by GV, which had participation from Atento Capital, Goodwater and Offline Ventures and notable angels such as Y Combinator CEO Michael Seibel, Twitch CEO Emmett Shear and former Blizzard CEO Michael Morhaime.

“Interviews are incredible storytelling spaces, but they’re generally reserved for the rich and powerful and are not about our parents, grandparents and children,” Chanin told TechCrunch in an email interview. “Our dream is that Artifact will become the place where families the world over tell and experience their stories.”

Artifact charges customers $149 to have an interviewer (mostly moonlighting journalists, according to Chanin) conduct an interview with a family member. Packages include one interview and an edit with a custom introduction, sound mixing by an audio engineer and a web page for listening and adding photos.

It’s a four-step process. First, Artifact customers tell the interviewer who they’ll be interviewing and what they’ll discuss. Then, Artifact invites the interviewee to choose a day and time for the interview, which happens via phone or videoconferencing. The resulting recording — usually 30 minutes in length, give or take 15 minutes — is edited down to a 20-minute “episode,” which can be shared via the web with loved ones or publicly.

Artifact aims to turn around episodes within five business days of an interview. Up to two guests are included in the price of a single interview, with a $35-per-guest charge for additional interviewees.

“The people in your life may not be natural storytellers, but when they’re guided by professional interviewers, their stories become heirloom-quality episodes that live in your family’s private account,” Chanin said. “Once there, it’s easy to add photos and then securely share your Artifacts with the people you love.”


That’s a lot of sensitive info to upload to the cloud. But Chanin was adamant that Artifact doesn’t share personal data with third parties without “explicit and affirmative” consent from users. The platform stores data for as long as a person maintains an account, although users can delete recordings, notes and photos at any time.

Artifact is one startup among many delivering professional interviewing and audio biography services tailored to families. For instance, Vita’s app lets family members record audio stories, transcribe the text and even hand-select accompanying family photos, recipes and other media content for posterity if they so choose. Tales and Origin offer packages in the same vein, while StoryWorth and StoryCorps are more self-service in nature, providing users with the tools to conduct interviews themselves, including suggested lines of questioning.

So what sets Artifact apart? Chanin argues it “fills a need that remains unaddressed.”

“The large genealogy platforms do incredible work, helping families trace their lineage and build family trees. Cloud photo and print apps make experiencing family photos easy and fun. [But] while it’s one thing to trace your family’s history, it’s another entirely to record it in the voices of the people themselves,” Chanin said. “It’s the conversation spaces with our professional interviewers that create the magic — the intonations of voice, laughter and emotion that can make it feel like the person we’re listening to is sitting right there in the room with us.”

Artifact is also unique in that it operates on a marketplace model, connecting customers with freelance interviewers, audio editors and sound engineers who piece together each audio biography. The compensation structure for contractors wasn’t immediately clear in our interview with Chanin; we’ve asked Artifact for clarification.

In another differentiator, Artifact has dipped a toe into the corporate market, offering custom podcast creation services to companies, academic institutions and nonprofits. As with its biography business, Artifact’s enterprise-focused offering pairs customers with an interviewer who they instruct to talk to people about certain subjects, with Artifact handling all the scheduling, remote interviewing and editing.

To date, Artifact has produced podcasts for Clipboard Health, Onfleet, Yale, the University of Chicago and the Muscular Dystrophy Association, Chanin claims.

In a bid to remain ahead of rivals, Artifact aims to embrace emerging AI technologies to further personalize the experience for its family biography customers. As customers upload photos and videos to their accounts, Artifact will soon begin marrying the images and videos to what’s being spoken about in an interview, Chanin says — no curation required.

“So, this is taking different types of media — image, video and text — and finding connections between them, then surfacing the result to the customer. We are calling this the ‘Sitback Experience,’ where users will simply click play, sit back and listen to people you love telling stories while relevant imagery and video play across the screen. It’ll be like a movie or a Ken Burns documentary about your family.”

Beyond the Sitback Experience, Artifact plans to launch Family Spaces, a dashboard where account holders will be able to add family members in a way that makes it clear which stories the platform’s recorded for individual people.

The swift development roadmap will keep Artifact a step beyond rivals, Chanin asserts, while delivering top-requested improvements to the user base. That’ll be key. Aside from the nascent enterprise venture, Artifact’s growth will depend on convincing existing customers to buy additional packages and new users to join in the first place.

“The pandemic reminded all of us that life is precious and that the people we love must never be taken for granted. In that way, Artifact provided a vehicle for many of our early adopters to act on those feelings and record family stories,” Chanin added. “From day one, we’ve built Artifact lean and as a service that provides immediate value to our customers — that our customers pay for. So in many respects, we launched the company the old fashioned way: introducing a new solution to a universal problem, learning from our customers and not focusing on growth at all costs.”

Chanin wouldn’t disclose Artifact’s burn rate. But he claimed that the company is “well capitalized,” with cash on hand for years. Artifact currently has a 14-person team (excepting the hundreds of freelancers in its marketplace) based in San Francisco and expects to “at least” double headcount over the next 12 months.

Artifact wants to record your family history in podcast-like audio recordings by Kyle Wiggers originally published on TechCrunch

Protect AI lands a $13.5M investment to harden AI projects from attack

Seeking to bring greater security to AI systems, Protect AI today raised $13.5 million in a seed-funding round co-led by Acrew Capital and Boldstart Ventures with participation from Knollwood Capital, Pelion Ventures and Aviso Ventures. Ian Swanson, the co-founder and CEO, said that the capital will be put toward product development and customer outreach as Protect AI emerges from stealth.

Protect AI claims to be one of the few security companies focused entirely on developing tools to defend AI systems and machine learning models from exploits. Its product suite aims to help developers identify and fix AI and machine learning security vulnerabilities at various stages of the machine learning life cycle, Swanson explains, including vulnerabilities that could expose sensitive data.

“As machine learning models usage grows exponentially in production use cases, we see AI builders needing products and solutions to make AI systems more secure, while recognizing the unique needs and threats surrounding machine learning code,” Swanson told TechCrunch in an email interview. “We have researched and uncovered unique exploits and provide tools to reduce risk inherent in [machine learning] pipelines.”

Swanson co-launched Protect AI with Daryan Dehghanpisheh and Badar Ahmed roughly a year ago. Swanson and Dehghanpisheh previously worked together at Amazon Web Services (AWS) on the AI and machine learning side of the business; Swanson was the worldwide leader at AWS’s AI customer solutions team and Dehghanpisheh was the global leader for machine learning solution architects. Ahmed became acquainted with Swanson while working at Swanson’s last startup, DataScience.com, which was acquired by Oracle in 2017. Ahmed and Swanson worked together at Oracle as well, where Swanson was the VP of AI and machine learning.

Protect AI’s first product, NB Defense, is designed to work within Jupyter Notebook, a digital notebook tool popular among data scientists within the AI community. (A 2018 GitHub analysis found that there were more than 2.5 million public Jupyter Notebooks in use at the time of the report’s publication, a number that’s almost certainly climbed since then.) NB Defense scans Jupyter notebooks for AI projects — which usually contain all the code, libraries and frameworks needed to train, run and test an AI system — for security risks and provides remediation suggestions.

What sort of problematic elements might an AI project notebook contain? Swanson suggests internal-use authentication tokens and other credentials, for one. NB Defense also looks for personally identifiable information (e.g., names and phone numbers) and open source code with a “nonpermissive” license that might prohibit it from being used in a commercial system.

Jupyter Notebooks are typically used as scratchpads rather than production environments, and most are locked safely away from prying eyes. According to an analysis by Dark Reading, fewer than 1% of the approximately 10,000 instances of Jupyter Notebook on the public web are configured for open access. But the exploits aren’t merely theoretical. Last December, security firm Lightspin uncovered a method that could allow an attacker to run any code on a victim’s notebook across accounts on AWS SageMaker, Amazon’s fully managed machine learning service.

Other research firms, including Aqua Security, have found that improperly secured Jupyter Notebooks are vulnerable to Python-based ransomware and cryptocurrency mining attacks. In a 2020 Microsoft survey of businesses using AI, the majority said that they don’t have the right tools in place to secure their machine learning models.

It might be premature to sound the alarm bells. There’s no evidence that attacks are happening at scale, despite a Gartner report predicting an increase in AI cyberattacks through the end of this year. But Swanson makes the case that prevention is key.

“[Many] existing security code scanning solutions are not compatible with Jupyter notebooks. These vulnerabilities, and many more, are due to a lack of focus and innovation from current cybersecurity solution providers, and is the largest differentiation for Protect AI: Real threats and vulnerabilities that exist in AI systems, today,” Swanson said.

Beyond Jupyter Notebooks, Protect AI will work with common AI development tools, including Amazon SageMaker, Azure ML and Google Vertex AI Workbench, Swanson says. It’s available for free to start, with paid options to be introduced in the future.

“Machine learning is … complex and the pipelines delivering machine learning at scale create and multiply cybersecurity blind spots that evade current cybersecurity offerings, preventing important risks from being adequately understood and mitigated. Additionally, emerging compliance and regulatory frameworks continue to advance the need to harden AI systems’ data sources, models, and software supply chain to meet increased governance, risk management and compliance requirements,” Swanson continued. “Protect AI’s unique capabilities and deep expertise in the machine learning lifecycle for enterprises and AI at scale helps enterprises of all sizes meet today’s and tomorrow’s unique, emerging and increasing requirements for a safer, more secure AI powered digital experience.”

That’s promising a lot. But Protect AI has the advantage of entering a market with relatively few direct competitors. Perhaps the closest is Resistant AI, which is developing AI systems to protect algorithms from automated attacks.

Protect AI, which is pre-revenue, isn’t revealing how many customers it has today. But Swanson claims that the company has secured “enterprises in the Fortune 500” across verticals, including finance, healthcare and life sciences, as well as energy, gaming, digital businesses and fintech.

“As we grow our customers, build partners and value chain participants we will use our funding to add additional team members in software development, engineering, security and go-to-market roles throughout 2023,” Swanson said, adding that Protect AI’s headcount stands at 15. “We have several years of cash runway available to continue to advance this field.”

Protect AI lands a $13.5M investment to harden AI projects from attack by Kyle Wiggers originally published on TechCrunch

PayPal and MetaMask team up to make it easier to buy crypto

PayPal is primarily known as an online payment method. But the company wants to become an easy way to get started with cryptocurrencies. In that regard, ConsenSys, the company behind MetaMask, announced that it would add an integration in its crypto wallet so that users can buy cryptocurrencies using their PayPal account.

MetaMask is one of the most popular non-custodial crypto wallets out there. It lets you store crypto assets and interact with web3 products, as you can use your wallet as your authentication method.

But you can’t do much if you have an empty MetaMask wallet. That’s why users rely on centralized cryptocurrency exchanges like Coinbase, Kraken and FTX to buy cryptocurrencies and transfer them to their MetaMask wallet. MetaMask also has its own on-ramp features in its mobile app so that you don’t have to switch to another service and go through many intermediate steps. On-ramp partners include MoonPay, Wyre and Transak.

If you buy crypto with one of those partners, you will have to go through a KYC process (“know your customer”). It means that you will have to enter a bunch of personal information and verify your identity with some form of ID.

The partnership between MetaMask and PayPal will benefit both companies. On MetaMask’s side, chances are the conversion rate with existing on-ramp solutions isn’t great. KYC processes can be intimidating.

There are already 430 million PayPal accounts in the world according to the company’s most recent earnings report. If MetaMask users see a big button that says you can buy cryptocurrencies with a PayPal account, it will sound easy and familiar. As for PayPal, more activity means more revenue.

At first, MetaMask users will only be able to buy Ethereum (ETH) with PayPal as the payment method. The feature will be available to select users in the U.S. before rolling out to everyone in the country.

If you already have ETH in your PayPal account, you can use those ETH to fund your MetaMask wallet. If that’s not the case, PayPal will help you buy ETH with your PayPal balance or other payment methods.

And that is going to generate some revenue for PayPal as the company charges fees to buy cryptocurrencies. This is PayPal’s first integration as an on-ramp provider for a web3 wallet. But I wouldn’t be surprised if we see more PayPal buttons in crypto wallets going forward.

Earlier this year, PayPal also added support for crypto transfers. PayPal users in the U.S. can get wallet addresses to fund their PayPal account with crypto assets. Similarly, PayPal users can send funds to a third-party crypto wallet.

Because many people consider cryptocurrencies to be internet money, they think crypto can replace PayPal altogether as a way to send and receive money from a computer or a phone. But there will always be bridges between traditional bank accounts and crypto wallets. And PayPal plans to take advantage of that.

PayPal and MetaMask team up to make it easier to buy crypto by Romain Dillet originally published on TechCrunch

Presto can now make Santa, celebrities, ‘appear’ in your drive-thru

The next time you go through a quick-serve restaurant’s drive-thru lane, you might hear a familiar “ho, ho, ho” over the speaker.

Presto Automation, a publicly traded restaurant technology company, has introduced an automated custom voice feature for its Presto Voice product that lets restaurants use almost any voice they want — celebrities, restaurant brand mascots, seasonal characters and even locally famous people — when assisting customers placing orders in the drive-thru.

Presto Voice uses artificial intelligence to automate speech recognition and can also be integrated with Presto’s other restaurant tools. The new custom voice option was prompted by a company survey that found 68% of consumers aged 18 to 44 years old said they were more likely to go to a drive-thru if it offers a celebrity voice to take orders.

“Automation technology doesn’t have to be boring or impersonal,” said Rajat Suri, founder and CEO of Presto, in a written statement. “We are proud to bring this highly innovative automation solution that delivers exciting guest experiences while improving staff productivity.”

The goal is to help restaurants increase sales by offering upsells, reduce wait times, improve order accuracy and just delight customers, the company said. It also serves to free up employees to do other things, like make food or cater to in-restaurant customers.

Checkers Drive-In Restaurants launched Presto Voice in early 2022. Cristina Perez, general manager of a Checkers location in Florida, gave a testimonial earlier this year in which she described having two dedicated drive-thru employees, one taking the orders and one taking the cash. Presto Voice has enabled Perez to redistribute one of the employees to another station and also increase sales.

“It’s all about upsell,” she said. “A human cashier can have errors as well, but they don’t hit the upsells because sometimes they are in a rush, or they don’t greet or give the ‘please’ and ‘thank you.’ With Presto, it’s always the same. There is no missed hit or missed upsell.”

Presto Voice can also fill in when an employee can’t come in. Over the past year, quick-serve restaurants have faced worker shortages, and companies have turned to technology to solve the problem, with everything from robotic servers to tools for recruitment, spend management, employee onboarding and guest experience.

Presto has processed over 300 million transactions since being founded in 2008, and though the company touts its new feature as “an industry first,” there are others also tackling the drive-thru with artificial intelligence.

In 2020, Will Clem and Orin Wilson co-founded Bite Ninja to develop technology for remote drive-thru workers that enabled restaurants to reopen during the global pandemic. The company raised $15 million in August.

Meanwhile, ConverseNow raised $10 million in August for its voice technology that puts virtual assistants inside restaurants to automate order-taking so that human employees can do other things.

At the time, Vinay Shukla, co-founder and CEO of ConverseNow, told TechCrunch that voice AI technology continues to evolve.

“The applications of AI into different verticals are still new,” he said. “Food ordering becomes even more nuanced and drive-thru is complex. Even the best AI platforms may still need human help. What happens when there are birds chirping, kids screaming and engine noise? This is still a new space and market that companies like us created.”

Presto can now make Santa, celebrities, ‘appear’ in your drive-thru by Christine Hall originally published on TechCrunch

US claims major DDoS-for-hire takedown, but some ‘seized’ sites still load

U.S. officials say they have seized dozens of domains linked to some of the world’s leading distributed denial-of-service (DDoS)-for-hire websites. But TechCrunch found that several of the seized sites are still active.

In a press release on Wednesday, the U.S. Department of Justice announced the takedown of 48 domains associated with some of the world’s most popular DDoS booter platforms, according to the corresponding warrant. These services, often marketed as tools for stress-testing network bandwidth, allow low-skilled individuals to carry out DDoS attacks designed to overwhelm websites and networks and force them offline.

The takedowns were carried out as part of a joint operation between the U.K.’s National Crime Agency, Dutch police, and Europol, known as “Operation PowerOFF.”

The DOJ said these booter sites were involved in attacks against a wide array of victims in the U.S. and abroad, including educational institutions, government agencies, and gaming platforms. Europol notes that one of the seized sites has been used to carry out more than 30 million attacks.

While many of the websites targeted by the operation now display a message stating that they have been seized by the FBI, TechCrunch found that — at the time of writing — at least eight of the sites supposedly seized by U.S. prosecutors continue to load as normal. It’s unclear why these sites continue to load.

A DOJ spokesperson did not respond to a request for comment.

One of the DDoS booter sites allegedly seized by the DOJ, but which remains active and operational. Image Credits: TechCrunch (screenshot).

Operation PowerOff also saw law enforcement officials arrest seven individuals who allegedly oversaw the DDoS booter services. In the U.S., criminal charges have been filed against six individuals: John M. Dobbs, Jeremiah Sam Evans, Angel Manuel Colon Jr., Shamar Shattock, Cory Anthony Palmer, and Joshua Laing.

At the time of writing, the DDoS-for-hire service allegedly run by Laing remains fully operational.

The U.K.’s NCA announced that it has also arrested an 18-year-old man in Devon, who is suspected of being an administrator of one of the seized sites. The NCA added that customer data from all of the DDoS booter sites was obtained and will be analyzed by law enforcement.

“Admins and users based in the UK will be visited by the National Crime Agency or police in the coming months,” the NCA warned.

US claims major DDoS-for-hire takedown, but some ‘seized’ sites still load by Carly Page originally published on TechCrunch

Spotify’s grand plan to monetize developers via its open source Backstage project

With nearly a third of the global music-streaming market share, Spotify needs little in the way of introduction. Some 456 million people consume music, podcasts and audiobooks through Spotify each month, 42% of whom pay a monthly fee while the rest are subjected to advertisements.

Indeed, ads and subscriptions have been the cornerstone of Spotify’s business model since its inception, though it has expanded into tangential verticals such as concert tickets. However, the company is now exploring another potential money-spinner that has little to do with its core consumer product.

Back in October, Spotify teased plans to commercialize a developer-focused project that it open-sourced nearly three years ago, a project that has been adopted by engineers at Netflix, American Airlines, Box, Roku, Splunk, Epic Games, VMware, Twilio, LinkedIn, and at least 200 companies.

Today, those plans are coming to fruition.

Infrastructure frontend

The project in question is Backstage, a platform designed to bring order to companies’ infrastructure by enabling them to build customized “developer portals,” combining all their tooling, apps, data, services, APIs, and documents in a single interface. Through Backstage, users can monitor Kubernetes, for example, check their CI/CD status, view cloud costs, or track security incidents.
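Concretely, teams register their software in Backstage’s catalog through small YAML descriptor files checked into each repository. A minimal `catalog-info.yaml` looks something like the following (the service and team names here are placeholders, not anything from Spotify’s actual portal):

```yaml
# Registers one service in the Backstage software catalog.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: playback-api            # placeholder service name
  description: Example service registered in the catalog
  annotations:
    github.com/project-slug: example-org/playback-api
spec:
  type: service
  lifecycle: production
  owner: team-playback          # placeholder owning team
```

Plugins then key off this entity: the Kubernetes, CI/CD and cost views described above all attach to the component defined here, which is what makes the portal a single pane of glass.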

Spotify: Backstage in action

While there are other similar-ish tools out there, such as Compass, which Atlassian introduced earlier this year, Backstage’s core selling point is that it’s flexible, extensible and open source, enabling companies to avoid vendor lock-in.

Spotify had used a version of Backstage internally since 2016, before releasing it under an open source license in early 2020. And earlier this year, Backstage was accepted as an incubating project at the Cloud Native Computing Foundation (CNCF).

Most of the big technology companies have developed fairly robust open source programs, often involving contributing to third-party projects that are integral to their own tech stack, or through donating internally-developed projects to the community to spur uptake. And that is precisely what led Spotify to open-source Backstage, having previously been blindsided by the rise of Kubernetes in the microservices realm.

For context, Spotify was an early adopter of so-called “microservices,” an architecture that makes it easier for companies to compose complex software by integrating components developed separately and connecting them via APIs. This is in contrast to the traditional monolithic architecture, which is simpler in many regards but difficult to maintain and scale.

Spotify was basically in the right place, at the right time, when the great transition from monolith to microservices was happening.

But with microservices, there is a greater need to coordinate all the different moving parts, which can be an unwieldy process involving different teams and disciplines. To help, Spotify developed a home-grown orchestration platform for containers (which host the different microservices) called Helios, which it open-sourced back in 2014. However, with Kubernetes arriving from the open source vaults of Google the same year and eventually going on to conquer the world, Spotify made the “painful” decision to ditch Helios and go all-in on Kubernetes.

“Kubernetes kind of took off and got better — we had to swap that [Helios] out, and that was painful and expensive for us to do all of that work,” Tyson Singer, Spotify’s head of technology and platforms, explained to TechCrunch. “But we needed to do it, because we couldn’t invest at the same rate to keep it up to speed [with Kubernetes].”

This proved to be the genesis for Spotify’s decision to open-source Backstage in 2020: once bitten, twice shy. Spotify didn’t want Backstage to lose out to some other project open-sourced by one of its rivals, and have to replace its internal developer portal with something else light-years ahead by virtue of being supported by hundreds of billion-dollar companies globally.

“Backstage is the operating system for our product development teams — it’s literally fundamental,” Singer said. “And we do not want to have to replace that.”

Fast-forward to today, and Spotify is now doubling down on its efforts with Backstage, as it looks to make it a stickier proposition for some of the world’s biggest companies. And this will involve monetizing the core open source project by selling premium plugins on top of it.

“By generating revenue from these plugins, that allows us to be more confident that we can always be the winner,” Singer continued. “And that’s what we want — because, you know, it will be expensive for us to replace.”

Plugged in

Backstage is already built on a plugin-based architecture that allows engineering teams to tailor things to their own needs. There are dozens of free and open source plugins available via a dedicated marketplace, developed both by Spotify and its external community of users. However, Spotify is taking things further by offering five premium plugins and selling them as a paid subscription.

The plugins include Backstage Insights, which displays data around active Backstage usage within an organization, and which plugins users are engaging with.

Backstage Insights showing week-on-week trends. Image Credits: Spotify

Elsewhere, Pulse powers a quarterly productivity and satisfaction survey directly from inside Backstage, allowing companies to quiz their workforce, identify engineering trends and access anonymized datasets.

Skill Exchange, meanwhile, essentially brings an internal marketplace to help users find mentors, temporary collaborative learning opportunities, or hacks to improve their engineering skills.

Backstage Skill Exchange. Image Credits: Spotify

And then there’s Soundcheck, which helps engineering teams measure the health of their software components and “define development and operational standards.”

Backstage Soundcheck. Image Credits: Spotify

Finally, there’s the role-based access control (RBAC) plugin, serving up a no-code interface for companies to manage access to plugins and data within Backstage.

Backstage role-based access control. Image Credits: Spotify

While Backstage and all the associated plugins can be used by businesses of all sizes, it’s primarily aimed at larger organizations, with hundreds of engineers, where the software is likely to be more complex.

“In a small development organisation, the amount of complexity that you have from, say 15 microservices, a developer portal is a nice-to-have, but not a must-have,” Singer said. “But when you’re at the scale of 500 developers or more, then the complexity really gets built out.”

Developer tools

While plenty of companies have commercialized open source technologies through the years, with engineers and developers often the beneficiaries, it is a little peculiar that a $15 billion company known primarily for music-streaming is now seeking to monetize through something not really related to music-streaming.

Moreover, having already open-sourced Backstage, and created a fairly active community of contributors that have developed plugins for others to use, why not continue to foster that goodwill by simply giving away these new plugins for free? It all comes down to one simple fact: developing robust and feature-rich software costs money, regardless of whether it’s proprietary or open source.

Indeed, just like how Kubernetes is supported by a host of big technology companies via their membership of the CNCF, Spotify has sought similar support for Backstage by donating the core project to the CNCF. But value-added services that will help drive adoption still require resources and direct investment, which is what Spotify is looking to fund through a subscription plugin bundle.

“Now it’s just a question of us being able to continue to fund that open source ecosystem, [and] like most large open source projects have, there’s some funding mechanism behind them,” Singer said.

In terms of pricing, Spotify said that costs will be dependent on “individual customer parameters” such as usage and capacity, and will be charged annually on a per-developer basis. In other words, costs will vary, but for a company with hundreds of developers, we’re probably looking at spend in the thousands to tens-of-thousands region. So this could feasibly net Spotify revenue that falls into the millions of dollars each year, though it will likely be a drop in the ocean compared to the $10 billion-plus it makes through selling access to music.
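Spotify hasn’t published a rate card, but the per-developer, per-year model it describes makes the arithmetic straightforward. A back-of-envelope sketch (the dollar figure below is a hypothetical placeholder, not a Spotify price):

```python
# Back-of-envelope estimate of annual plugin-bundle spend.
# NOTE: Spotify has only said pricing is charged annually per developer
# and varies with usage and capacity; $50/dev/year here is hypothetical.

def estimate_annual_spend(num_developers: int, price_per_dev_per_year: float) -> float:
    """Linear per-developer pricing, as described by Spotify."""
    return num_developers * price_per_dev_per_year

# A 500-developer organization at a hypothetical $50/dev/year:
spend = estimate_annual_spend(500, 50.0)  # $25,000 -- "tens of thousands"
```

At that scale, a few hundred enterprise customers would indeed put the line of business in the single-digit millions per year.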

If nothing else, Backstage serves as a reminder that Spotify sees itself not purely as a music-streaming company, but a technology company too. And similar to how Amazon created a gargantuan cloud business off the back of a technology that it built initially to power its own internal operations, Spotify is looking to see what kind of traction it can gain as a developer tools company — or something to that effect.

It’s certainly a question worth pondering: does all this mean that Spotify is going all-out to become some sort of dev tools company? And can we expect to see more premium plugins arrive in the future?

“Who knows what’s gonna happen in the future — I don’t think you’ll see it in the next year, we’ll see how it goes,” Singer said. “We think that we have a bit to learn right now in terms of how this fits in the market. I do expect that you’ll see more from us in the future though.”

Spotify’s five new premium plugins are officially available as part of an open beta program today.

Spotify’s grand plan to monetize developers via its open source Backstage project by Paul Sawers originally published on TechCrunch

Microsoft to start multi-year rollout of EU data localization offering on January 1

Microsoft will begin a phased rollout of an expanded data localization offering in the European Union on January 1, it said today.

The EU Data Boundary for the Microsoft Cloud, as it’s branding the provision for local storage and processing of cloud services’ customer data, is intended to respond to a regional rise in demand for digital sovereignty that’s been amplified by legal uncertainties over EU-US data flows stemming from the clash between the bloc’s data protection rights and US surveillance practices.

“Beginning on January 1, 2023, Microsoft will offer customers the ability to store and process their customer data within the EU Data Boundary for Microsoft 365, Azure, Power Platform and Dynamics 365 services,” it wrote of the forthcoming “data residency solution” for customers in the EU and EFTA (the European Free Trade Association), adding: “With this release, Microsoft expands on existing local storage and processing commitments, greatly reducing data flows out of Europe and building on our industry-leading data residency solutions.”

Earlier this week, the European Commission published a draft decision on US adequacy that’s intended to resolve differences between legal requirements with a new deal on secure data transfers. However, this EU-US Data Privacy Framework (DPF) won’t be finalized until next year — potentially not before the middle of next year — and in the meantime transatlantic transfers of Europeans’ personal data remain clouded in legal risk.

Microsoft’s EU Data Boundary being rolled out in phases means there is no instant fix for the EU-US data flows risk on the horizon for its customers.

Nor is it clear whether the data residency solution will be comprehensive enough to address all the data flows and data protection concerns being attached to Microsoft’s products in Europe.

A long-running review of Microsoft’s 365 productivity suite by German data protection regulators made uncomfortable reading for the tech giant last month — as the working group concluded there is still no way to use its software and comply with the EU’s General Data Protection Regulation (GDPR) despite months of engagement with Microsoft over their compliance concerns.

Microsoft disputes the working group’s assessment, but has also said it remains committed to addressing outstanding concerns, and it names the EU Data Boundary as part of its plan for doing so. The offering will also provide “additional transparency documentation” on customer data flows and the purposes of processing, plus more transparency on processing and location by subprocessors and Microsoft employees outside of the EU. (Microsoft is not proposing a total localization of European customers’ data and zero processing elsewhere, so the EU Data Boundary remains somewhat porous by design.)

Its blog post today announcing the kickoff of the phased rollout notes that, as part of the first phase, it will begin publishing “detailed documentation” on what it’s calling its “Boundary commitments” — including transparency documentation containing descriptions of data flows.

Per Microsoft, these transparency documents will initially be published in English — with “additional languages” slated to come later (NB: the EU has 24 official languages, per Wikipedia, only one of which is English).

“Documentation will be updated continually as Microsoft rolls out additional phases of the EU Data Boundary and will include details around services that may continue to require limited transfers of customer data outside of the EU to maintain the security and reliability of the service,” it adds, saying these “limited data transfers” are required to ensure EU customers “continue to receive the full benefits of global hyperscale cloud computing while enjoying industry-leading data management capabilities”, as its PR puts it.

The tech giant had been shooting for the EU Data Boundary to be operational by the end of 2022. But given the phased rollout, a January 1st launch date is a pretty meaningless marker. After this initial launch, Microsoft said “coming phases” of the rollout will expand the offering to include the storage and processing of “additional categories of personal data”, including data provided when customers are receiving technical support.

We’ve asked Microsoft for more details on which data will be covered by which phases and when subsequent phases will roll out and will update this report with any response.

Discussing its phased rollout approach with Reuters, Microsoft’s chief privacy officer, Julie Brill, told the news agency: “As we dived deeper into this project, we learned that we needed to take a more phased approach. The first phase will be customer data. And then as we move into the next phases, we will be moving logging data, service data and other kinds of data into the boundary.”

She also said the second phase of the rollout will be completed at the end of 2023 — and phase three will be completed in 2024. Hence the date for Microsoft’s EU Data Boundary being fully operational remains years out.

“Based on customer feedback and insights, as well as learnings gained over the past year of developing the boundary, we have adjusted the timeline for the localization of additional personal data categories and data provided when receiving technical support,” it also writes in the blog post — explaining its “adjusted” timeline — and adding: “To ensure that we continue to deliver a world-class solution that meets the overall quality, stability, and security expectations of customers, Microsoft will deliver on-going enhancements to the boundary in phases. To assist customers with planning, we have published a detailed roadmap for our EU Data Boundary available on our Trust Center.”

In a similar move earlier this year, Google announced incoming data flows-related changes for its productivity suite, Workspace, in Europe — saying that by the end of the year it would provide regional customers with extra controls enabling them to “control, limit, and monitor transfers of data to and from the EU”.

Back in February, European data protection regulators kicked off a coordinated enforcement action focused on public sector bodies’ use of cloud services to test whether adequate data protection measures are being applied, including when data is exported out of the bloc — with a ‘state of play’ report due from the European Data Protection Board before the end of the year — a timeline that’s likely to have concentrated US cloud giants’ minds about the need to expand their compliance offerings to European customers.

Microsoft to start multi-year rollout of EU data localization offering on January 1 by Natasha Lomas originally published on TechCrunch

Coinbase launches asset recovery tool for unsupported Ethereum-based tokens

Coinbase, the second-largest crypto exchange globally, has launched a new tool to help its customers recover more than 4,000 unsupported ERC-20 tokens sent to its ledger, the company exclusively told TechCrunch.

“ERC-20 token” is the technical term for a token created on the Ethereum blockchain using the ERC-20 standard. While Coinbase supports hundreds of cryptocurrencies, there are thousands that it doesn’t. The ERC-20 self-service asset recovery tool allows customers to recover different kinds of tokens sent to a Coinbase address.

“It’s been a pain point for customers who sent ERC-20 tokens to a Coinbase receive address,” Will Robinson, vice president of engineering at Coinbase, told TechCrunch. “When people accidentally sent these assets, they were effectively stuck up until this point.”

In the past, if you sent assets not supported by Coinbase to a user’s address on the exchange, you’d get a message saying the assets were successfully delivered on-chain, but they didn’t actually go to the receiver’s wallet. Usually, these assets are unrecoverable because internal operators don’t have access to the private keys needed to reverse transactions.

Such transactions make up a “small fraction of the total transfers” Coinbase receives, but from an individual user’s point of view, such an error could make for a “very bad day,” Robinson said. Coinbase has over 108 million verified users across over 100 countries with $101 billion in assets on the platform, according to its website.

Many ERC-20 tokens on the Ethereum mainnet that have pricing information on a decentralized exchange or other venue can be recovered, Robinson said. “We make no quality representation of these assets, as they haven’t gone through our review process, but we’re facilitating returns for those who accidentally sent them in the first place.”

To recover funds, customers must provide their Ethereum transaction identification for the lost assets and the contract address of the lost asset. The recovery tool only works for select ERC-20 tokens sent into Coinbase. “For supported assets, there’s nothing to be done here,” Robinson said. “The problem doesn’t exist in the same way, because Coinbase users have access and can send them back themselves.”
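Both of the inputs Coinbase asks for have well-defined on-chain formats, so they can be sanity-checked before submission. A minimal sketch (the function names are illustrative, not Coinbase’s API):

```python
import re

# An Ethereum transaction ID is a 32-byte hash: "0x" plus 64 hex characters.
TX_HASH_RE = re.compile(r"^0x[0-9a-fA-F]{64}$")
# A contract address is a 20-byte value: "0x" plus 40 hex characters.
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def is_valid_tx_hash(tx_hash: str) -> bool:
    """Check that a string is shaped like an Ethereum transaction ID."""
    return bool(TX_HASH_RE.match(tx_hash))

def is_valid_contract_address(address: str) -> bool:
    """Check that a string is shaped like an Ethereum contract address."""
    return bool(ADDRESS_RE.match(address))
```

These checks only confirm the format, of course — whether the transaction actually landed at a Coinbase receive address is something only the exchange can verify.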

The feature will be rolled out over the next few weeks, but is not available for Japan or Coinbase Prime users. There’s no recovery fee for amounts less than $100, but those worth over $100 will be charged a 5% fee – aside from the separate network fee, which applies to all recoveries, Coinbase said.
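The fee schedule above is simple enough to express directly. A sketch (the network fee varies with chain conditions, so it is left as an input):

```python
def recovery_fee(amount_usd: float, network_fee_usd: float) -> float:
    """Coinbase's stated schedule: no recovery fee for amounts under $100,
    a 5% fee for amounts worth over $100; the network fee always applies."""
    service_fee = amount_usd * 0.05 if amount_usd > 100 else 0.0
    return service_fee + network_fee_usd

recovery_fee(50.0, 2.0)    # only the $2 network fee
recovery_fee(1000.0, 2.0)  # $50 service fee + $2 network fee = $52
```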

In the long term, support for other asset recoveries beyond ERC-20 tokens could be a reality, but “no firm commitments” exist today, Robinson said. “This is a direction we know is important to users and want to drive forward.”

Coinbase launches asset recovery tool for unsupported Ethereum-based tokens by Jacquelyn Melinek originally published on TechCrunch
