Presto can now make Santa, celebrities, ‘appear’ in your drive-thru

The next time you go through a quick-serve restaurant’s drive-thru lane, you might hear a familiar “ho, ho, ho” over the speaker.

Presto Automation, a publicly traded restaurant technology company, has introduced a new automated custom voice feature for Presto Voice that lets restaurants use almost any voice they want — celebrities, restaurant brand mascots, seasonal characters and even locally famous people — to assist customers placing orders in the drive-thru.

Presto Voice uses artificial intelligence to automate speech recognition and can also be integrated with Presto’s other restaurant tools. The new custom voice option was prompted by a company survey that found 68% of consumers aged 18 to 44 said they were more likely to go to a drive-thru if it offers a celebrity voice to take orders.

“Automation technology doesn’t have to be boring or impersonal,” said Rajat Suri, founder and CEO of Presto, in a written statement. “We are proud to bring this highly innovative automation solution that delivers exciting guest experiences while improving staff productivity.”

The goal is to help restaurants increase sales by offering upsells, reduce wait times, improve order accuracy and just delight customers, the company said. It also serves to free up employees to do other things, like make food or cater to in-restaurant customers.

Checkers Drive-In Restaurants launched Presto Voice in early 2022. Cristina Perez, general manager of a Checkers location in Florida, gave a testimonial earlier this year in which she described having two dedicated drive-thru employees, one taking the orders and one taking the cash. Presto Voice has enabled Perez to redistribute one of the employees to another station and also increase sales.

“It’s all about upsell,” she said. “A human cashier can have errors as well, but they don’t hit the upsells because sometimes they are in a rush, or they don’t greet or give the ‘please’ and ‘thank you.’ With Presto, it’s always the same. There is no missed hit or missed upsell.”

Presto Voice can also fill in when an employee can’t come in. Over the past year, quick-serve restaurants have struggled with worker shortages, and companies have turned to technology to solve the problem — everything from robotic servers to recruitment tools, spend management, better employee onboarding and guest experience.

Presto has processed over 300 million transactions since being founded in 2008, and though the company touts its new feature as “an industry first,” there are others also tackling the drive-thru with artificial intelligence.

In 2020, Will Clem and Orin Wilson co-founded Bite Ninja to develop technology for remote drive-thru workers that enabled restaurants to reopen during the global pandemic. The company raised $15 million in August.

Meanwhile, ConverseNow raised $10 million in August for its voice technology that puts virtual assistants inside restaurants to automate order-taking so that human employees can do other things.

At the time, Vinay Shukla, co-founder and CEO of ConverseNow, told TechCrunch that voice AI technology continues to evolve.

“The applications of AI into different verticals are still new,” he said. “Food ordering becomes even more nuanced and drive-thru is complex. Even the best AI platforms may still need human help. What happens when there are birds chirping, kids screaming and engine noise? This is still a new space and market that companies like us created.”

Presto can now make Santa, celebrities, ‘appear’ in your drive-thru by Christine Hall originally published on TechCrunch

PayPal and MetaMask team up to make it easier to buy crypto

PayPal is primarily known as an online payment method. But the company wants to become an easy way to get started with cryptocurrencies. To that end, ConsenSys, the company behind MetaMask, announced that it would add an integration in its crypto wallet so that users can buy cryptocurrencies using their PayPal account.

MetaMask is one of the most popular non-custodial crypto wallets out there. It lets you store crypto assets and interact with web3 products, as you can use your wallet as your authentication method.

But you can’t do much if you have an empty MetaMask wallet. That’s why users rely on centralized cryptocurrency exchanges like Coinbase, Kraken and FTX to buy cryptocurrencies and transfer them to their MetaMask wallet. MetaMask also has its own on-ramp features in its mobile app so that you don’t have to switch to another service and go through many intermediate steps. On-ramp partners include MoonPay, Wyre and Transak.

If you buy crypto with one of those partners, you will have to go through a KYC process (“know your customer”). It means that you will have to enter a bunch of personal information and verify your identity with some form of ID.

The partnership between MetaMask and PayPal will benefit both companies. On MetaMask’s side, chances are the conversion rate with existing on-ramp solutions isn’t great. KYC processes can be intimidating.

There are already 430 million PayPal accounts in the world according to the company’s most recent earnings report. If MetaMask users see a big button that says you can buy cryptocurrencies with a PayPal account, it will sound easy and familiar. As for PayPal, more activity means more revenue.

At first, MetaMask users will only be able to buy Ethereum (ETH) with PayPal as the payment method. The feature will be available to select users in the U.S. before rolling out to all U.S. users.

If you already have ETH in your PayPal account, you can use those ETH to fund your MetaMask wallet. If that’s not the case, PayPal will help you buy ETH with your PayPal balance or other payment methods.

And that is going to generate some revenue for PayPal as the company charges fees to buy cryptocurrencies. This is PayPal’s first integration as an on-ramp provider for a web3 wallet. But I wouldn’t be surprised if we see more PayPal buttons in crypto wallets going forward.

Earlier this year, PayPal also added support for crypto transfers. PayPal users in the U.S. can get wallet addresses to fund their PayPal account with crypto assets. Similarly, PayPal users can send funds to a third-party crypto wallet.

As many people consider cryptocurrencies as internet money, they think crypto can replace PayPal altogether as a way to send and receive money from a computer and a phone. But there will always be bridges between traditional bank accounts and crypto wallets. And PayPal plans to take advantage of that.

PayPal and MetaMask team up to make it easier to buy crypto by Romain Dillet originally published on TechCrunch

US claims major DDoS-for-hire takedown, but some ‘seized’ sites still load

U.S. officials say they have seized dozens of domains linked to some of the world’s leading distributed denial-of-service (DDoS) for-hire websites. But TechCrunch found that several of the seized sites are still active.

In a press release on Wednesday, the U.S. Department of Justice announced the takedown of 48 domains associated with some of the world’s most popular DDoS booter platforms, according to the corresponding warrant. These services, often marketed as bandwidth stress-testing tools, allow low-skilled individuals to carry out DDoS attacks designed to overwhelm websites and networks and force them offline.

The takedowns were carried out as part of a joint operation between the U.K.’s National Crime Agency, Dutch police, and Europol, known as “Operation PowerOFF.”

The DOJ said these booter sites were involved in attacks against a wide array of victims in the U.S. and abroad, including educational institutions, government agencies, and gaming platforms. Europol notes that one of the seized sites has been used to carry out more than 30 million attacks.

While many of the websites targeted by the operation now display a message stating that they have been seized by the FBI, TechCrunch found that — at the time of writing — at least eight of the sites supposedly seized by U.S. prosecutors continue to load as normal. It’s unclear why these sites continue to load.

A DOJ spokesperson did not return a request for comment.

One of the DDoS booter sites allegedly seized by the DOJ, but which remains active and operational. Image Credits: TechCrunch (screenshot).

Operation PowerOff also saw law enforcement officials arrest seven individuals who allegedly oversaw the DDoS booter services. In the U.S., criminal charges have been filed against six individuals: John M. Dobbs, Jeremiah Sam Evans, Angel Manuel Colon Jr., Shamar Shattock, Cory Anthony Palmer, and Joshua Laing.

At the time of writing, the DDoS-for-hire service allegedly run by Laing remains fully operational.

The U.K.’s NCA announced that it has also arrested an 18-year-old man in Devon, who is suspected of being an administrator of one of the seized sites. The NCA added that customer data from all of the DDoS booter sites was obtained and will be analyzed by law enforcement.

“Admins and users based in the UK will be visited by the National Crime Agency or police in the coming months,” the NCA warned.

US claims major DDoS-for-hire takedown, but some ‘seized’ sites still load by Carly Page originally published on TechCrunch

Spotify’s grand plan to monetize developers via its open source Backstage project

With nearly a third of the global music-streaming market share, Spotify needs little in the way of introduction. Some 456 million people consume music, podcasts and audiobooks through Spotify each month, 42% of whom pay a monthly fee while the rest are subjected to advertisements.

Indeed, ads and subscriptions have been the cornerstone of Spotify’s business model since its inception, though it has expanded into tangential verticals such as concert tickets. However, the company is now exploring another potential money-spinner that has little to do with its core consumer product.

Back in October, Spotify teased plans to commercialize a developer-focused project that it open-sourced nearly three years ago, a project that has been adopted by engineers at Netflix, American Airlines, Box, Roku, Splunk, Epic Games, VMware, Twilio, LinkedIn, and at least 200 companies.

Today, those plans are coming to fruition.

Infrastructure frontend

The project in question is Backstage, a platform designed to bring order to companies’ infrastructure by enabling them to build customized “developer portals,” combining all their tooling, apps, data, services, APIs, and documents in a single interface. Through Backstage, users can monitor Kubernetes, for example, check their CI/CD status, view cloud costs, or track security incidents.

Spotify: Backstage in action

While there are other similar-ish tools out there, such as Compass, which Atlassian introduced earlier this year, Backstage’s core selling point is that it’s flexible, extensible, and open source, enabling companies to avoid vendor lock-in.

Spotify had used a version of Backstage internally since 2016, before releasing it under an open source license in early 2020. And earlier this year, Backstage was accepted as an incubating project at the Cloud Native Computing Foundation (CNCF).

Most of the big technology companies have developed fairly robust open source programs, often involving contributing to third-party projects that are integral to their own tech stack, or through donating internally-developed projects to the community to spur uptake. And that is precisely what led Spotify to open-source Backstage, having previously been blindsided by the rise of Kubernetes in the microservices realm.

For context, Spotify was an early adopter of so-called “microservices,” an architecture that makes it easier for companies to compile complex software by integrating separately developed components and connecting them via APIs — this is versus the traditional monolithic architecture, which is simpler in many regards but difficult to maintain and scale.

Spotify was basically in the right place, at the right time, when the great transition from monolith to microservices was happening.

But with microservices, there is a greater need to coordinate all the different moving parts, which can be an unwieldy process involving different teams and disciplines. To help, Spotify developed a home-grown container orchestration platform (containers host the different microservices) called Helios, which it open-sourced back in 2014. However, with Kubernetes arriving from the open source vaults of Google the same year and going on to conquer the world, Spotify eventually made the “painful” decision to ditch Helios and go all-in on Kubernetes.

“Kubernetes kind of took off and got better — we had to swap that [Helios] out, and that was painful and expensive for us to do all of that work,” Tyson Singer, Spotify’s head of technology and platforms, explained to TechCrunch. “But we needed to do it, because we couldn’t invest at the same rate to keep it up to speed [with Kubernetes].”

This proved to be the genesis for Spotify’s decision to open-source Backstage in 2020: once bitten, twice shy. Spotify didn’t want Backstage to lose out to some other project open-sourced by one of its rivals, forcing it to replace its internal developer portal with something light-years ahead by virtue of being supported by hundreds of billion-dollar companies globally.

“Backstage is the operating system for our product development teams — it’s literally fundamental,” Singer said. “And we do not want to have to replace that.”

Fast-forward to today, and Spotify is now doubling down on its efforts with Backstage, as it looks to make it a stickier proposition for some of the world’s biggest companies. And this will involve monetizing the core open source project by selling premium plugins on top of it.

“By generating revenue from these plugins, that allows us to be more confident that we can always be the winner,” Singer continued. “And that’s what we want — because, you know, it will be expensive for us to replace.”

Plugged in

Backstage is already built on a plugin-based architecture that allows engineering teams to tailor things to their own needs. There are dozens of free and open source plugins available via a dedicated marketplace, developed both by Spotify and its external community of users. However, Spotify is taking things further by offering five premium plugins and selling them as a paid subscription.
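Backstage itself is written in TypeScript, and its real plugin API is more involved, but the general pattern a plugin-based portal relies on can be sketched in a few lines of Python: a core shell exposes a registration point, and each plugin contributes its own panel. All names below are hypothetical, not Backstage APIs.

```python
# Rough sketch of a plugin-based developer portal: the core knows
# nothing about individual plugins; each one registers a callable
# that produces its panel's content.
class Portal:
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def render(self):
        # Each registered plugin contributes one panel to the view.
        return {name: plugin() for name, plugin in self._plugins.items()}

portal = Portal()
portal.register("ci-status", lambda: "build #124: passing")
portal.register("cloud-costs", lambda: "$1,204 this month")
```

The point of the pattern is that premium plugins like Insights or Soundcheck can slot into the same extension points as the free, community-built ones.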

The plugins include Backstage Insights, which displays data around active Backstage usage within an organization, and which plugins users are engaging with.

Backstage Insights showing week-on-week trends. Image Credits: Spotify

Elsewhere, Pulse powers a quarterly productivity and satisfaction survey directly from inside Backstage, allowing companies to quiz their workforce, identify engineering trends, and access anonymized datasets.

Skill Exchange, meanwhile, essentially brings an internal marketplace to help users find mentors, temporary collaborative learning opportunities, or hacks to improve their engineering skills.

Backstage Skill Exchange. Image Credits: Spotify

And then there’s Soundcheck, which helps engineering teams measure the health of their software components and “define development and operational standards.”

Backstage Soundcheck. Image Credits: Spotify

Finally, there’s the role-based access control (RBAC) plugin, serving up a no-code interface for companies to manage access to plugins and data within Backstage.

Backstage Role-based access control. Image Credits: Spotify

While Backstage and all the associated plugins can be used by businesses of all sizes, it’s primarily aimed at larger organizations, with hundreds of engineers, where the software is likely to be more complex.

“In a small development organisation, the amount of complexity that you have from, say 15 microservices, a developer portal is a nice-to-have, but not a must-have,” Singer said. “But when you’re at the scale of 500 developers or more, then the complexity really gets built out.”

Developer tools

While plenty of companies have commercialized open source technologies through the years, with engineers and developers often the beneficiaries, it is a little peculiar that a $15 billion company known primarily for music streaming is now seeking to monetize something with little connection to its core product.

Moreover, having already open-sourced Backstage, and created a fairly active community of contributors that have developed plugins for others to use, why not continue to foster that goodwill by simply giving away these new plugins for free? It all comes down to one simple fact: developing robust and feature-rich software costs money, regardless of whether it’s proprietary or open source.

Indeed, just like how Kubernetes is supported by a host of big technology companies via their membership of the CNCF, Spotify has sought similar support for Backstage by donating the core project to the CNCF. But value-added services that will help drive adoption still require resources and direct investment, which is what Spotify is looking to fund through a subscription plugin bundle.

“Now it’s just a question of us being able to continue to fund that open source ecosystem, [and] like most large open source projects have, there’s some funding mechanism behind them,” Singer said.

In terms of pricing, Spotify said that costs will be dependent on “individual customer parameters” such as usage and capacity, and will be charged annually on a per-developer basis. In other words, costs will vary, but for a company with hundreds of developers, we’re probably looking at spend in the thousands to tens-of-thousands region. So this could feasibly net Spotify revenue that falls into the millions of dollars each year, though it will likely be a drop in the ocean compared to the $10 billion-plus it makes through selling access to music.

If nothing else, Backstage serves as a reminder that Spotify sees itself not purely as a music-streaming company, but a technology company too. And similar to how Amazon created a gargantuan cloud business off the back of a technology that it built initially to power its own internal operations, Spotify is looking to see what kind of traction it can gain as a developer tools company — or something to that effect.

It’s certainly a question worth pondering: does all this mean that Spotify is going all-out to become some sort of dev tools company? And can we expect to see more premium plugins arrive in the future?

“Who knows what’s gonna happen in the future — I don’t think you’ll see it in the next year, we’ll see how it goes,” Singer said. “We think that we have a bit to learn right now in terms of how this fits in the market. I do expect that you’ll see more from us in the future though.”

Spotify’s five new premium plugins are officially available as part of an open beta program today.

Spotify’s grand plan to monetize developers via its open source Backstage project by Paul Sawers originally published on TechCrunch

Microsoft to start multi-year rollout of EU data localization offering on January 1

Microsoft will begin a phased rollout of an expanded data localization offering in the European Union on January 1, it said today.

The EU Data Boundary for the Microsoft Cloud, as it’s branding the provision for local storage and processing of cloud services’ customer data, is intended to respond to a regional rise in demand for digital sovereignty that’s been amplified by legal uncertainties over EU-US data flows stemming from the clash between the bloc’s data protection rights and US surveillance practices.

“Beginning on January 1, 2023, Microsoft will offer customers the ability to store and process their customer data within the EU Data Boundary for Microsoft 365, Azure, Power Platform and Dynamics 365 services,” it wrote of the forthcoming “data residency solution” for customers in the EU and EFTA (the European Free Trade Association), adding: “With this release, Microsoft expands on existing local storage and processing commitments, greatly reducing data flows out of Europe and building on our industry-leading data residency solutions.”

Earlier this week, the European Commission published a draft decision on US adequacy that’s intended to resolve differences between legal requirements with a new deal on secure data transfers. However this EU-US Data Privacy Framework (DPF) won’t be finalized until next year — potentially not before the middle of next year — and in the meantime transatlantic transfers of Europeans’ personal data remain clouded in legal risk.

Microsoft’s EU Data Boundary being rolled out in phases means there is no instant fix for the EU-US data flows risk on the horizon for its customers.

Nor is it clear whether the data residency solution will be comprehensive enough to address all the data flows and data protection concerns being attached to Microsoft’s products in Europe.

A long-running review of Microsoft’s 365 productivity suite by German data protection regulators made uncomfortable reading for the tech giant last month — as the working group concluded there is still no way to use its software and comply with the EU’s General Data Protection Regulation (GDPR) despite months of engagement with Microsoft over their compliance concerns.

Microsoft disputes the working group’s assessment, but has also said it remains committed to addressing outstanding concerns, and it names the EU Data Boundary as part of its plan for this. The offering will also provide “additional transparency documentation” on customer data flows and the purposes of processing, plus more transparency on the processing and location by subprocessors and Microsoft employees outside of the EU. (Microsoft is not proposing total localization of European customers’ data with zero processing elsewhere, so the EU Data Boundary remains somewhat porous by design.)

Its blog post today announcing the kickoff of the phased rollout notes that as part of the first phase it will begin publishing “detailed documentation” on what it’s calling its “Boundary commitments” — including transparency documentation containing descriptions of data flows.

Per Microsoft, these transparency documents will initially be published in English — with “additional languages” slated as coming later (NB: The EU has 24 official languages, per Wikipedia, only one of which is English).

“Documentation will be updated continually as Microsoft rolls out additional phases of the EU Data Boundary and will include details around services that may continue to require limited transfers of customer data outside of the EU to maintain the security and reliability of the service,” it adds, saying these “limited data transfers” are required to ensure EU customers “continue to receive the full benefits of global hyperscale cloud computing while enjoying industry-leading data management capabilities”, as its PR puts it.

The tech giant had been shooting for the EU Data Boundary to be operational by the end of 2022. But given the phased rollout, a January 1st launch date is a pretty meaningless marker. After this initial launch, Microsoft said “coming phases” of the rollout will expand the offering to include the storage and processing of “additional categories of personal data”, including data provided when customers are receiving technical support.

We’ve asked Microsoft for more details on which data will be covered by which phases and when subsequent phases will roll out and will update this report with any response.

Discussing its phased rollout approach with Reuters, Microsoft’s chief privacy officer, Julie Brill, told the news agency: “As we dived deeper into this project, we learned that we needed to take a more phased approach. The first phase will be customer data. And then as we move into the next phases, we will be moving logging data, service data and other kinds of data into the boundary.”

She also said the second phase of the rollout will be completed at the end of 2023 — and phase three will be completed in 2024. Hence the date for Microsoft’s EU Data Boundary to be fully operational remains years out.

“Based on customer feedback and insights, as well as learnings gained over the past year of developing the boundary, we have adjusted the timeline for the localization of additional personal data categories and data provided when receiving technical support,” it also writes in the blog post — explaining its “adjusted” timeline — and adding: “To ensure that we continue to deliver a world-class solution that meets the overall quality, stability, and security expectations of customers, Microsoft will deliver on-going enhancements to the boundary in phases. To assist customers with planning, we have published a detailed roadmap for our EU Data Boundary available on our Trust Center.”

In a similar move earlier this year, Google announced incoming data flows-related changes for its productivity suite, Workspace, in Europe — saying that by the end of the year it would provide regional customers with extra controls enabling them to “control, limit, and monitor transfers of data to and from the EU”.

Back in February, European data protection regulators kicked off a coordinated enforcement action focused on public sector bodies’ use of cloud services, to test whether adequate data protection measures are being applied, including when data is exported out of the bloc. A ‘state of play’ report is due from the European Data Protection Board before the end of the year — a timeline that’s likely to have concentrated US cloud giants’ minds about the need to expand their compliance offerings to European customers.

Microsoft to start multi-year rollout of EU data localization offering on January 1 by Natasha Lomas originally published on TechCrunch

Coinbase launches asset recovery tool for unsupported Ethereum-based tokens

Coinbase, the second-largest crypto exchange globally, has launched a new tool to help its customers recover more than 4,000 unsupported ERC-20 tokens sent to its ledger, the company exclusively told TechCrunch.

“ERC-20 token” is the technical term for a token created on the Ethereum blockchain using the ERC-20 standard. While Coinbase supports hundreds of cryptocurrencies, there are thousands that it doesn’t. The ERC-20 self-service asset recovery tool allows customers to recover different kinds of tokens sent to a Coinbase address.
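For readers unfamiliar with the standard: ERC-20 defines a common set of functions that every compliant token contract must expose, which is what lets wallets and exchanges treat thousands of different tokens uniformly. Real ERC-20 tokens are smart contracts written in Solidity and deployed on Ethereum; the Python sketch below only mirrors the shape of the required interface for illustration.

```python
# Illustrative sketch of the ERC-20 interface (total supply, balances,
# transfers, and delegated transfers via allowances). Not a real contract.
class MinimalERC20:
    def __init__(self, name, symbol, supply, owner):
        self.name, self.symbol = name, symbol
        self._total_supply = supply
        self._balances = {owner: supply}   # address -> balance
        self._allowances = {}              # (owner, spender) -> amount

    def total_supply(self):
        return self._total_supply

    def balance_of(self, address):
        return self._balances.get(address, 0)

    def transfer(self, sender, to, amount):
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self._balances[sender] -= amount
        self._balances[to] = self.balance_of(to) + amount
        return True

    def approve(self, owner, spender, amount):
        self._allowances[(owner, spender)] = amount
        return True

    def transfer_from(self, spender, owner, to, amount):
        # Spend from someone else's balance, up to the approved allowance.
        if self._allowances.get((owner, spender), 0) < amount:
            raise ValueError("allowance exceeded")
        self._allowances[(owner, spender)] -= amount
        return self.transfer(owner, to, amount)
```

Because every ERC-20 token exposes this same interface, a tool like Coinbase's can handle "more than 4,000 unsupported tokens" generically rather than integrating each one.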

“It’s been a pain point for customers who sent ERC-20 tokens to a Coinbase receive address,” Will Robinson, vice president of engineering at Coinbase, told TechCrunch. “When people accidentally sent these assets, they were effectively stuck up until this point.”

In the past, if you sent assets not supported by Coinbase to a user’s address on the exchange, you’d get a message saying the assets were successfully delivered on-chain, but they didn’t actually reach the receiver’s wallet. Usually, these assets are unrecoverable because internal operators don’t have access to the private keys needed to reverse transactions.

Such transactions make up a “small fraction of the total transfers” Coinbase receives, but from an individual user’s point of view, such an error could make for a “very bad day,” Robinson said. Coinbase has over 108 million verified users across over 100 countries with $101 billion in assets on the platform, according to its website.

Many ERC-20 tokens on the Ethereum mainnet that have pricing information on a decentralized exchange or other venue can be recovered, Robinson said. “We make no quality representation of these assets, as they haven’t gone through our review process, but we’re facilitating the returns for those who accidentally sent it in the first place.”

To recover funds, customers must provide their Ethereum transaction identification for the lost assets and the contract address of the lost asset. The recovery tool only works for select ERC-20 tokens sent into Coinbase. “For supported assets, there’s nothing to be done here,” Robinson said. “The problem doesn’t exist in the same way, because Coinbase users have access and can send them back themselves.”

The feature will be rolled out over the next few weeks, but is not available in Japan or for Coinbase Prime users. There’s no recovery fee for amounts less than $100, but those worth over $100 will be charged a 5% fee – aside from the separate network fee, which applies to all recoveries, Coinbase said.
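In concrete terms, the fee structure described above (free under $100, 5% above) works out as follows. This is a sketch of the stated percentages only; the function name is illustrative, the treatment of exactly $100 is not specified by Coinbase, and the separate network fee is not modeled.

```python
def recovery_fee(amount_usd: float) -> float:
    """Recovery fee per the structure Coinbase describes: no fee for
    amounts under $100, a 5% fee for amounts over $100.
    (Exactly $100 is treated as fee-free here; Coinbase's exact cutoff
    behavior isn't stated. Network fees are excluded.)"""
    if amount_usd <= 100:
        return 0.0
    return round(amount_usd * 0.05, 2)
```

So recovering $2,000 of a stranded token would cost $100 in recovery fees, plus whatever the network fee happens to be.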

In the long term, support for other asset recoveries beyond ERC-20 tokens could be a reality, but “no firm commitments” exist today, Robinson said. “This is a direction we know is important to users and want to drive forward.”

Coinbase launches asset recovery tool for unsupported Ethereum-based tokens by Jacquelyn Melinek originally published on TechCrunch

Meta, Microsoft, AWS and TomTom launch the Overture Maps Foundation to develop interoperable open map data

The Linux Foundation has partnered with some of the world’s biggest technology companies to develop interoperable and open map data, in what is a clear move to counter Google’s dominance in the mapping realm.

The Overture Maps Foundation, as the new effort is called, is officially hosted by the Linux Foundation, but the program is driven by Amazon Web Services (AWS), Facebook’s parent company Meta, Microsoft, and Dutch mapping company TomTom.

The ultimate mission of the Overture Maps Foundation is to power new map products through openly available datasets that can be used and reused across applications and businesses, with each member throwing their own data and resources into the mix.

“Mapping the physical environment and every community in the world, even as they grow and change, is a massively complex challenge that no one organization can manage,” noted the Linux Foundation’s executive director Jim Zemlin in a press release. “Industry needs to come together to do this for the benefit of all.”

Map and location data plays such a fundamental role across society today, powering everything from IoT (internet of things) devices and self-driving cars, to logistics and big data visualization tools. Having all that data under the auspices of just one or two mega-firms can be hugely restrictive in terms of what companies can do with the data and what features they have at their disposal, not to mention the costs involved in licensing it.

Spatial mapping will also be vital to emerging technologies such as those required for the Metaverse, which Meta is heavily invested in.

“Immersive experiences, which understand and blend into your physical environment, are critical to the embodied internet of the future,” added Jan Erik Solem, engineering director for Maps at Meta. “By delivering interoperable open map data, Overture provides the foundation for an open metaverse built by creators, developers, and businesses alike.”

The anti-Google?

Google is a notable omission from the Overture Maps Foundation’s founding members. Indeed, that such big names and rivals from the technology sphere are coming together in partnership is probably a testament to the stranglehold Google has on the world of mapping, a position it has steadily built since launching its Android mobile operating system nearly fifteen years ago.

Moreover, the iPhone arrived around the same time, a combination that brought maps and navigation into the pockets of millions of people globally and had a monumental impact on incumbents such as TomTom, which had built a substantial business off the back of physical navigation devices plastered to car windshields.

This graph shows how TomTom’s shares plummeted with the advent of the modern smartphone era.

TomTom’s shares since the launch of Android and iOS 15 years ago

In the intervening years, TomTom has tried to evolve, striking map and data partnerships with the likes of Uber and Microsoft, while it has also targeted developers with SDKs and hit the acquisition trail to bolster its autonomous vehicle ambitions. But the fact remains, Google and its mapping empire still rule the roost for the most part, something that this new collaboration will go some way toward addressing.

“Collaborative mapmaking is central to TomTom’s strategy — the Overture Maps Foundation provides the framework to accelerate our goals,” TomTom CEO Harold Goddijn noted in a press release. “TomTom’s Maps Platform will leverage the combination of the Overture base map, a broad range of other data, and TomTom’s proprietary data in a continuously integrated and quality-controlled product that serves a broad range of use cases, including the most demanding applications like advanced navigation, search, and automated driving.”

Open sesame

The emergence of this new foundation jibes with trends elsewhere across the technology spectrum, with a growing push toward decentralized and interoperable social networks driven by regulatory and societal pressures. Elsewhere, the Linux Foundation also recently announced the OpenWallet Foundation to develop interoperable digital wallets, pushing back against the closed payment ecosystems fostered by tech juggernauts including Google and Apple.

Today’s announcement very much fits into that broader trend.

The founding companies are planning to engage in collaborative map-building programs, meshing data from myriad open data sources and knocking it into a format that’s consistent, standardized, and fit for use in production systems and applications. This will include channeling data from long-established projects such as OpenStreetMap, in addition to open data provided by municipalities.
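The normalization work described above, taking records from heterogeneous open sources and knocking them into one consistent shape, can be sketched in a few lines. This is a minimal illustration, not Overture's actual schema: the target record fields, the OSM-style element layout, and the municipal column names are all assumptions made for the example.

```python
# Sketch: normalizing two differently shaped open-data records into one
# common, hypothetical schema (name, category, lat, lon, source).

def normalize_osm(element):
    """Map an OpenStreetMap-style tagged element to the common record shape."""
    tags = element.get("tags", {})
    return {
        "name": tags.get("name"),
        "category": tags.get("building", "unknown"),
        "lat": element["lat"],
        "lon": element["lon"],
        "source": "openstreetmap",
    }

def normalize_municipal(row):
    """Map a municipal open-data row (column names assumed) to the same shape."""
    return {
        "name": row["building_name"],
        "category": row.get("use_type", "unknown"),
        "lat": row["latitude"],
        "lon": row["longitude"],
        "source": "municipal",
    }

records = [
    normalize_osm({"lat": 51.5007, "lon": -0.1246,
                   "tags": {"name": "Big Ben", "building": "clock_tower"}}),
    normalize_municipal({"building_name": "City Hall", "use_type": "civic",
                         "latitude": 51.5045, "longitude": -0.0787}),
]
```

Once every source has been funneled through adapters like these, downstream consumers only ever see one record shape, which is the point of a standardized base map.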

While there are only four member companies at launch, there are plans to expand membership in the future to any company with a direct vested interest in open map data.

For now, the Overture Maps Foundation said that it's working toward releasing its first datasets in the first half of 2023, which will include “basic” layers such as roads, buildings, and administrative information. Over time, this will expand to include more places, routing and navigation, and 3D building data.

Meta, Microsoft, AWS and TomTom launch the Overture Maps Foundation to develop interoperable open map data by Paul Sawers originally published on TechCrunch

Which Instagram ad placement is more cost-effective: Reels, Feed Posts or Stories?

It’s time to reconsider how you use Instagram’s advertising tools for your business. With the platform’s continuing development, new ad placements and algorithm changes, the creatives, captions and CTA that worked in the past might not deliver the same CPC, reach and engagement in the near future.

Brands are increasingly concerned about where to spend their social media marketing budgets. By some estimates, over 50% of those budgets are spent inefficiently because of poor creatives, mixed messaging, limited ad types and poorly written ad captions.

With this article, I want to explore how we optimized ad strategies on Instagram for GLAM LAB London, a platform that lets freelance beauty professionals in the U.K. register and list services that clients can book. The purpose of the Instagram marketing campaign was to raise awareness about the service and generate leads.

Test 1

Image or video ad creatives?

At first, I was hesitant to test image ads, but I wanted to see how they performed compared with our video ads.

Quick research suggested that video ads generate three times more engagement than other formats. I still ran a quick campaign with an image ad for a couple of days, among other formats, and found that Instagram's algorithms were reluctant to push image ads.


I therefore decided to use video ads instead. I started with two video creatives that had gone viral on Instagram Reels (generating 21,000 and 5,000 views) to test the algorithms and targeting. We ran a six-second video of a girl shown first without any makeup and then, set to music, after her makeover, and a 13-second video of a client struggling with makeup who ends up ordering our service.

I used a general ad description with a CTA to book the service on our website, targeting a generic audience: women in London, aged 18-55. I chose all three Instagram ad placements (Reels, Stories and Feed Posts) and compared the average results.


I had already heard that Instagram Reels could be more expensive than other placements, but I didn't expect such a big difference: the same creatives can perform at almost half the cost on a different ad placement.
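The test setup described here, one audience run across the three placements separately so their costs can be compared, maps naturally onto the Meta Marketing API's targeting spec. The sketch below uses the documented field names, but the city key is a placeholder and the values are illustrative assumptions, not the campaign's real configuration:

```python
# Hypothetical ad-set targeting for the test above, using Meta Marketing API
# targeting-spec field names. All values are illustrative placeholders.

targeting = {
    "geo_locations": {"cities": [{"key": "<london-city-key>"}]},  # placeholder key
    "genders": [2],            # Marketing API convention: 1 = male, 2 = female
    "age_min": 18,
    "age_max": 55,
    "publisher_platforms": ["instagram"],
    "instagram_positions": ["stream", "story", "reels"],  # Feed, Stories, Reels
}

# Splitting this into one ad set per placement is what makes a per-placement
# cost comparison possible:
per_placement = [
    {**targeting, "instagram_positions": [pos]}
    for pos in ("stream", "story", "reels")
]
```

Running the placements as separate ad sets rather than one combined set is the design choice that lets you read CPC per placement directly, instead of letting the delivery algorithm blend spend across them.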

Test 2

Which Instagram ad placement is more cost-effective: Reels, Feed Posts or Stories? by Ram Iyer originally published on TechCrunch

Mario Kart 7 receives first update in 10 years

The previous update was released on May 15, 2012; it eliminated shortcut exploits in the Wuhu Loop, Maka Wuhu, and Bowser Castle 1 tracks when played in Online Multiplayer Mode.
