Lyst, the UK fashion marketplace, is laying off 25% of staff

Lyst, the UK fashion e-commerce site that last year raised funding at a $700 million valuation, is the latest tech startup to rein in spending by cutting staff. TechCrunch has learned that the company is in the process of laying off 25% of its employees, working out to about 50 people, as part of a larger restructuring to conserve cash and move toward profitability.

The details were first leaked to us by way of an internal memo from the CEO, Emma McFerran, who took over the role of CEO from founder Chris Morton in July of this year. The company then confirmed the details to us. It’s not clear which departments will be most impacted, but the memo notes that some 85 people are being contacted who will be ‘impacted by this exercise.’

We understand from sources that the company had plans for an IPO next year but that these are now being pushed back, and that it might be looking for another round of funding to shore up its finances.

Lyst last raised money in May 2021, when the picture for e-commerce was rosy: the sector was one of the ironic bright spots in an otherwise largely devastating Covid-19 pandemic. Fashion retailers in particular were seeing record-breaking revenues and online growth as consumers turned away from shopping in person and used disposable income they were no longer spending on going out. That made for buoyant sales, as well as very bullish prognostications: consumer shoppers, observers said, were unlikely to “go back” to physical shopping in the same numbers even after the pandemic subsided.

Lyst was a product of that moment: when it announced its $85 million raise, it said the round would be its last fundraise ahead of an IPO, potentially in London or New York as soon as this year.

At the time it said it had 150 million users and a catalog of 8 million products from 17,000 brands and retailers. That list includes a number of high-end labels such as Balenciaga, Balmain, Bottega Veneta, Burberry, Fendi, Gucci, Moncler, Off-White, Prada, Saint Laurent and Valentino, which, combined with an active audience of shoppers, drove strong growth. In 2020, gross merchandise value on Lyst was over $500 million. Between then and 2021, new user numbers grew 1100%, and by the time the round was announced GMV was at more than $2 billion.

Fast forward to today, and the most optimistic and bullish prognostications in e-commerce have failed to play out: online sales have not sustained their torrid growth, and with the return to in-store shopping, people generally haven’t been spending as large a share of wallet online.

That has led to some business contractions across the board. Amazon, the biggest of all e-commerce operations (and one that has been working to build out a strong line in fashion), may lay off as many as 10,000 staff and is cutting a number of product lines. A more direct rival of Lyst’s, the high-end fashion e-commerce poster child Farfetch, currently has a market cap of just $2.9 billion, a giant drop from the $14 billion it commanded in May 2021.

Many look to the holiday season as a critical indicator of how e-commerce companies are faring in the current economy, and so far this year the figures are actually not as bad as many expected: Adobe’s tracking of sales has shown big days like Black Friday and Cyber Monday both breaking sales records, at over $9 billion and over $11 billion respectively.

Lyst itself has been seeing strong sales to kick off holiday shopping, posting its most profitable Black Friday weekend ever, with average order value up 15% — albeit with more discounting across the brands and stores that sell on the site to gin up activity.

But the bigger picture and the longer-term view are the factors driving today’s news. In addition to the push to get profitable, our source tells us that Lyst’s IPO was more recently targeted for 2023, but those plans have now been pushed back, and that it’s looking to raise a new round of funding partly because it’s running low on cash. (To be clear, the company would not comment on these points.)

We’ll update this post as we learn more.


Lyst, the UK fashion marketplace, is laying off 25% of staff by Ingrid Lunden originally published on TechCrunch

NopeaRide, Kenya’s first EV taxi service, shuts down

Kenya’s first fully electric taxi service, NopeaRide, is exiting the market after its parent company EkoRent OY failed to raise additional funding to keep it afloat.

NopeaRide said EkoRent Africa, the Finnish company’s local subsidiary, has filed for insolvency in Kenya, bringing to an end the operations of the all-electric taxi player, which sought to drive a shift to environmentally friendly transport options while stepping up competition with early market entrants Uber and Bolt.

“We have taken our fleet of electric vehicles off the road and have notified our staff and corporate clients. We are now working with relevant authorities to ensure that our operations are wound up in accordance with local legislation,” said NopeaRide in a statement.

“We would like to extend our deepest regret to our dedicated team of staff and drivers. We would also like to thank our loyal NopeaRide customers, corporate clients and other partners who have supported NopeaRide’s vision for electric mobility in Africa,” it said.

Juha Suojanen founded EkoRent Oy in 2014 to develop solutions based on electric vehicles and solar energy, work that led to the 2018 launch of NopeaRide in Kenya.

NopeaRide provided the charging network and the driver and rider apps, and sourced the electric vehicles; drivers, however, were expected to arrange their own financing.

The startup grew from three vehicles to 70 by the time of closure, and had also built a charging network across Nairobi after raising undisclosed funding in 2019.

Last year, NopeaRide also received €200,000 funding from EEP Africa, a financing facility for early-stage clean energy in Southern and East Africa, to build more solar charging hubs in Nairobi, and to make it possible for the company to increase its service radius in anticipation of growth.

The startup said it was on a path to recovery this year, after its business was badly hit by the Covid pandemic, which led to a dip in the number of rides as people worked from home.

“In the first half of 2022 our traffic numbers grew to about the same level as before Covid-19. We also started to put more effort in the corporate segment as their employees were returning to office and managed to sign contracts with a few big international companies, some of them potentially reserving the majority of available Nopea capacity,” it said.

“However, EkoRent OY went into insolvency in Finland and was unable to secure additional financing to grow the business in Nairobi to the next level.”

NopeaRide, Kenya’s first EV taxi service, shuts down by Annie Njanja originally published on TechCrunch

UK confirms removal of Online Safety Bill’s ‘legal but harmful’ clause

The UK government has completed a major revision to controversial but popular online safety legislation that’s been in the works for years — and was finally introduced to parliament earlier this year — but has been paused since this summer following turmoil in the governing Conservative Party.

In September, new secretary of state for digital, Michelle Donelan, said the reshuffled government, under newly elected prime minister Liz Truss (who has since been replaced by another new PM, Rishi Sunak) would make certain edits to the bill before bringing it back to parliament.

The draft legislation is now due to return to the House of Commons next week when lawmakers will resume scrutiny of the wide-ranging speech regulation proposals.

The government says the changes it’s made to the Online Safety Bill respond to concerns that it could lead to platforms overblocking content and chilling freedom of expression online. Those concerns largely focused on the adult safety provisions related to so-called ‘legal but harmful’ content, which imposed mitigation requirements like transparency obligations but did not actually require such material to be removed.

Nonetheless, the controversy and concern over this aspect of the bill have been fierce.

In a press release announcing the latest raft of tweaks, the Department for Digital, Culture, Media and Sport (DCMS) and secretary of state for digital Michelle Donelan wrote: “Any incentives for social media firms to over-remove people’s legal online content will be taken out of the Online Safety Bill. Firms will still need to protect children and remove content that is illegal or prohibited in their terms of service, however the Bill will no longer define specific types of legal content that companies must address.

I promised I would make some common-sense tweaks and I have.

This is a stronger, better bill for it. It is focused where it needs to be: on protecting children and on stamping out illegality online.

Now it is time to pass it.

— Michelle Donelan MP (@michelledonelan) November 29, 2022

“This removes any influence future governments could have on what private companies do about legal speech on their sites, or any risk that companies are motivated to take down legitimate posts to avoid sanctions. New measures will also be added to make social media platforms more transparent and accountable to their users, as a result of amendments the Government will propose.”

“Parents and the wider public will benefit from new changes to force tech firms to publish more information about the risks their platforms pose to children so people can see what dangers sites really hold. Firms will be made to show how they enforce their user age limits to stop kids circumventing authentication methods and they will have to publish details of when the regulator Ofcom has taken action against them,” DCMS added.

Over the weekend the government revealed another, related amendment to the legislation — saying it would make encouraging self harm a criminal offence, thereby taking that type of problem content out of the ‘legal but harmful’ bucket and meaning platforms will have a legal duty to remove it.

It also recently announced measures to beef up laws against abuse of intimate imagery, including criminalizing the sharing of deepfake porn without consent, among other recent changes.

DCMS is pitching its new approach with the Online Safety Bill as providing what it frames as a “triple shield” of online protection which is most strongly focused on children but still offers measures intended to help general consumers shield themselves from a range of online harms — with social media firms legally required to 1) remove illegal content, 2) take down material in breach of their own terms of service, and 3) provide adults with greater choice over the content they see and engage with.

Provisions in the revised bill could, for example, enable adult users to opt to see a filtered feed if they wish to limit their exposure to content that may be unpleasant to them but which does not meet the bill’s higher bar of being strictly illegal.

The government has also retained measures aimed at empowering adults to be able to block anonymous trolls — via using tools that the biggest platforms will need to offer to let them control whether they can be contacted by unverified social media users.

“To make sure the Bill’s protections for adults online strike the right balance with its protections for free speech, duties relating to ‘legal but harmful’ content accessed by adults will be removed from the legislation and replaced with the consumer-friendly ‘triple shield’,” DCMS wrote. “The Bill will instead give adults greater control over online posts they may not wish to see on platforms.

“If users are likely to encounter certain types of content — such as the glorification of eating disorders, racism, anti-semitism or misogyny not meeting the criminal threshold — internet companies will have to offer adults tools to help them avoid it. These could include human moderation, blocking content flagged by other users or sensitivity and warning screens.”

There has been a lot of misreporting on the return of the Online Safety Bill. Here are my thoughts. pic.twitter.com/m7GpxvPbTy

— Damian Collins (@DamianCollins) November 29, 2022

Donelan mounted an aggressive defence of the changes on BBC Radio 4’s Today program this morning, claiming the government has strengthened provisions to protect children at the same time as adapting it to respond to concerns over the bill’s impact on freedom of expression for adults.

“Nothing is getting watered down or taken out when it comes to children,” she argued. “We’re adding extra in. So there is no change to children.”

Platforms will still be required to prevent children from being exposed to ‘legal but harmful’ speech, she also suggested, arguing that much of the content of greatest concern to child safety campaigners is already prohibited in platforms’ own T&Cs; the problem is that platforms do not enforce those terms. The legislation will require them to live up to their claims, she said.

Earlier in the program, Ian Russell, the father of Molly Russell — the 14-year-old British schoolgirl who killed herself five years ago after viewing social media content promoting self-harm and suicide on algorithmically driven platforms including Instagram and Pinterest — expressed concern that the bill is being watered down, questioning the government’s late stage decision to remove the ‘legal but harmful’ duties clause.

“It’s very hard to understand that something that was important as recently as July — when the bill would have had a third reading in the Commons and [this legal but harmful content was] included in the bill, it’s very hard to understand why that suddenly can’t be there,” he told the BBC.

Discussing why he feels so strongly about risks attached to ‘legal but harmful’ content spreading online, Russell referred to the inquest into his daughter’s death which surfaced evidence from the platforms that showed she had engaged with a lot of such content — giving an example of a pencil-style drawing of a sad girl captioned with the text “who would love a suicidal girl” as one of the pieces of content she had viewed that had particularly stayed with him.

“That in and on its own isn’t necessarily harmful but when the platforms’ algorithms send hundreds if not thousands of those posts or posts like it to someone — particularly if they’re young and vulnerable — then that content had to be regulated against,” he argued. “The algorithms have to be looked into as well. And that’s what the concern is.”

Russell also accused platforms of not taking strong enough measures to prevent minors from accessing their services. “The platforms have not taken seriously the advances in age verification and age assurance that tech now has — they’ve not paid enough attention to that. They’ve sort of turned a blind eye to the age of people on their platforms,” he suggested.

While not embracing the government’s edits to the ‘legal but harmful’ duties in the bill, Russell did welcome DCMS’ drive to dial up transparency obligations on platforms through revisions that will require them to publish risk assessments; previously, platforms might have had to undertake an assessment but would not have been required to publish it.

Asked by the BBC about Russell’s criticism of the removal of the ‘legal but harmful’ clause, Donelan said: “Content that is harmful or could hurt children but is not illegal — so is legal — will still be removed under this version of the bill. So the content that Molly Russell saw will not be allowed as a result of this bill. And there will no longer be cases like that coming forward because we’re preventing that from happening.”

She also argued the revised bill would force platforms to enforce their own age restrictions — such as by making them explain how they are stopping minors from accessing their services.

“We’ve strengthened the bill,” she reiterated. “We’ve now introduced clauses where companies can’t just say yes we only allow children over 13 to join our platform — then they allow ten year olds and actively promote it to them. We’re stopping that from happening — we’re saying no, you’ve got to enforce that age restriction, you’ve got to tell parents how you’re doing that and everybody else. We’re saying you’ve got to work to the regulator with the children’s commissioner when you’re producing the guidelines and putting them in practice.”

Asked how the government can be sure platforms will really ban underage users, Donelan pointed to what she described as the “very punitive sanctions” still in the bill — including fines of up to 10% of global annual turnover, adding: “If a company breaches any aspect of the bill, including for children, they could face fines… [as large as] billions of pounds. That’s a really big incentive not to breach the bill.”

She said the government has also strengthened this aspect of the bill — saying companies “do have to be assured of the age of their users”.

“Now we’re not saying you have to use ‘X specific tech’ because it will be out of date by next week — this bill has to last the test of time — what we are saying is you could use a range of age assurance technology or age verification technology but whatever you do you’ve got to make sure you know the age of these users to know whether they’re 14 or whether they’re 45 — so you know the protection have got to be in place and I think that’s the right approach.”

This component of the bill is likely to continue to face fierce opposition from digital rights campaigners who are already warning that biased AIs will likely be the tech that gets applied at scale to predict users’ age as platforms seek to meet compliance requirements — and that the legislation therefore risks automating discriminatory outcomes…

Culture Secretary, Michelle Donelan, told @BBCr4today that the Online Safety Bill is silent on what technology can be used for age-verification. The role of age-gating the Internet will be filled by AI that’s known for biased and discriminatory outcomes. #BlockTheBill #privacy pic.twitter.com/HSjbI00VsT

— Open Rights Group (@OpenRightsGroup) November 29, 2022

Another notable revision to the bill the government confirmed today is the removal of a “harmful communications” offence that free speech campaigners had warned risked a major chilling effect on speech, because it placed disproportionate weight on someone taking offence at public speech.

Offences covering false and threatening communications have been retained.

“To retain protections for victims of abuse, the government will no longer repeal elements of the Malicious Communications Act and Section 127 of the Communications Act offences, which means the criminal law will continue to protect people from harmful communications, including racist, sexist and misogynistic abuse,” DCMS further notes.

Major platforms will also be required not to remove content that does not breach the law or their terms of service, and not to suspend or ban users where there has been no such breach, another measure the government claims will help bolster freedom of expression online.

Further amendments are aimed at dialling up protections for women and girls online, with the government saying it will add the criminal offence of controlling or coercive behaviour to the list of priority offences in the Bill.

“This means platforms will have to take proactive steps, such as putting in measures to allow users to manage who can interact with them or their content, instead of only responding when this illegal content is flagged to them through complaints,” per DCMS.

Another change adds the Children’s Commissioner to the face of the bill as a “statutory consultee” on the regulator Ofcom’s codes of practice, which platforms will be required to cleave to as they seek to demonstrate compliance, casting a key child safety advocate in a core role shaping compliance recommendations.

The government has tabled some of the slew of latest amendments to the Bill in the Commons for Report Stage on December 5, when it returns to parliament — but notes that further amendments will be made at later stages of the Bill’s passage.

Commenting in a statement, Donelan added:

“Unregulated social media has damaged our children for too long and it must end.

I will bring a strengthened Online Safety Bill back to Parliament which will allow parents to see and act on the dangers sites pose to young people. It is also freed from any threat that tech firms or future governments could use the laws as a licence to censor legitimate views.

Young people will be safeguarded, criminality stamped out and adults given control over what they see and engage with online. We now have a binary choice: to get these measures into law and improve things or squabble in the status quo and leave more young lives at risk.”

UK confirms removal of Online Safety Bill’s ‘legal but harmful’ clause by Natasha Lomas originally published on TechCrunch

India to pilot retail digital currency on December 1

India will undertake its first pilot of a retail digital currency on December 1, the central bank said Tuesday. The test, which will evaluate the creation and distribution of the digital currency with a closed group of customers and merchants in the South Asian market, comes a month after the bank began evaluating the CBDC for the wholesale segment.

State Bank of India, ICICI Bank, Yes Bank and IDFC will participate in the initial phases of the pilot in four cities (Mumbai, New Delhi, Bengaluru and Bhubaneswar). Bank of Baroda, Union Bank of India, HDFC Bank and Kotak Mahindra Bank will join the pilot “subsequently,” the Reserve Bank of India said, adding that it will extend the pilot later to the cities of Ahmedabad, Gangtok, Guwahati, Hyderabad, Indore, Kochi, Lucknow, Patna and Shimla.

“The scope of pilot may be expanded gradually to include more banks, users and locations as needed,” it said.

The central bank hopes to lower the economy’s reliance on cash, enable cheaper and smoother international settlements, and protect people from the volatility of private cryptocurrencies, RBI officials have said in the past. Based on the test results, the central bank will test additional features and applications of the digital rupee in future pilots, it said.

The limited roll-out of the e-rupee comes at a time when several governments across the globe are trialing digital versions of their currencies. Singapore’s monetary authority said in late October that it will test a digital version of the local dollar, and the central banks of China, the Bahamas and the euro area have experimented in this area.

“Users will be able to transact with e₹-R through a digital wallet offered by the participating banks and stored on mobile phones / devices. Transactions can be both Person to Person (P2P) and Person to Merchant (P2M). Payments to merchants can be made using QR codes displayed at merchant locations. The e₹-R would offer features of physical cash like trust, safety and settlement finality. As in the case of cash, it will not earn any interest and can be converted to other forms of money, like deposits with banks,” the Reserve Bank of India said in a press announcement.

(More to follow.)

India to pilot retail digital currency on December 1 by Manish Singh originally published on TechCrunch
