What to expect from AI in 2023

As a rather commercially successful author once wrote, “the night is dark and full of terrors, the day bright and beautiful and full of hope.” It’s fitting imagery for AI, which like all tech has its upsides and downsides.

Art-generating models like Stable Diffusion, for instance, have led to incredible outpourings of creativity, powering apps and even entirely new business models. On the other hand, their open source nature lets bad actors use them to create deepfakes at scale — all while artists protest that the technology is profiting off of their work.

What’s on deck for AI in 2023? Will regulation rein in the worst of what AI brings, or are the floodgates open? Will powerful, transformative new forms of AI emerge, à la ChatGPT, and disrupt industries once thought safe from automation?

Expect more (problematic) art-generating AI apps

With the success of Lensa, the AI-powered selfie app from Prisma Labs that went viral, you can expect a lot of me-too apps along these lines. And expect them to also be capable of being tricked into creating NSFW images, and to disproportionately sexualize and alter the appearance of women.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, said he expects the integration of generative AI into consumer tech to amplify the effects of such systems, both the good and the bad.

Stable Diffusion, for example, was fed billions of images from the internet until it “learned” to associate certain words and concepts with certain imagery. Text-generating models have routinely been easily tricked into espousing offensive views or producing misleading content.

Mike Cook, a member of the Knives and Paintbrushes open research group, agrees with Gahntz that generative AI will continue to prove a major — and problematic — force for change. But he thinks that 2023 has to be the year that generative AI “finally puts its money where its mouth is.”

Prompt by TechCrunch, model by Stability AI, generated in the free tool Dream Studio.

“It’s not enough to motivate a community of specialists [to create new tech] — for technology to become a long-term part of our lives, it has to either make someone a lot of money, or have a meaningful impact on the daily lives of the general public,” Cook said. “So I predict we’ll see a serious push to make generative AI actually achieve one of these two things, with mixed success.”

Artists lead the effort to opt out of data sets

DeviantArt released an AI art generator built on Stable Diffusion and fine-tuned on artwork from the DeviantArt community. The art generator was met with loud disapproval from DeviantArt’s longtime denizens, who criticized the platform’s lack of transparency in using their uploaded art to train the system.

The creators of the most popular systems — OpenAI and Stability AI — say that they’ve taken steps to limit the amount of harmful content their systems produce. But judging by many of the generations on social media, it’s clear that there’s work to be done.

“The data sets require active curation to address these problems and should be subjected to significant scrutiny, including from communities that tend to get the short end of the stick,” Gahntz said, comparing the process to ongoing controversies over content moderation in social media.

Stability AI, which is largely funding the development of Stable Diffusion, recently bowed to public pressure, signaling that it would allow artists to opt out of the data set used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time.

OpenAI offers no such opt-out mechanism, instead preferring to partner with organizations like Shutterstock to license portions of their image galleries. But given the legal and sheer publicity headwinds it faces alongside Stability AI, it’s likely only a matter of time before it follows suit.

The courts may ultimately force its hand. In the U.S. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot, GitHub’s service that intelligently suggests lines of code, regurgitate sections of licensed code without providing credit.

Perhaps anticipating the legal challenge, GitHub recently added settings to prevent public code from showing up in Copilot’s suggestions and plans to introduce a feature that will reference the source of code suggestions. But they’re imperfect measures. In at least one instance, the filter setting caused Copilot to emit large chunks of copyrighted code including all attribution and license text.

Expect to see criticism ramp up in the coming year, particularly as the U.K. mulls over rules that would remove the requirement that systems trained on public data be used strictly non-commercially.

Open source and decentralized efforts will continue to grow

2022 saw a handful of AI companies dominate the stage, primarily OpenAI and Stability AI. But the pendulum may swing back towards open source in 2023 as the ability to build new systems moves beyond “resource-rich and powerful AI labs,” as Gahntz put it.

A community approach may lead to more scrutiny of systems as they are being built and deployed, he said: “If models are open and if data sets are open, that’ll enable much more of the critical research that has pointed to a lot of the flaws and harms linked to generative AI and that’s often been far too difficult to conduct.”

Image Credits: Results from OpenFold, an open source AI system that predicts the shapes of proteins, compared to DeepMind’s AlphaFold2.

Examples of such community-focused efforts include large language models from EleutherAI and BigScience, an effort backed by AI startup Hugging Face. Stability AI is funding a number of communities itself, like the music-generation-focused Harmonai and OpenBioML, a loose collection of biotech experiments.

Money and expertise are still required to train and run sophisticated AI models, but decentralized computing may challenge traditional data centers as open source efforts mature.

BigScience took a step toward enabling decentralized development with the recent release of the open source Petals project. Petals lets people contribute their compute power, similar to Folding@home, to run large AI language models that would normally require a high-end GPU or server.
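The architecture can be sketched in a few lines. The toy below is a conceptual illustration of pipeline-parallel inference in the spirit of Petals — not the project's actual API — in which each volunteer peer hosts a slice of the model's layers and the client threads activations through the peers in order:

```python
# Toy sketch of volunteer-hosted pipeline parallelism (illustrative,
# not the real Petals API): the model's layers are split across peers,
# and activations hop from peer to peer until the output comes back.

class Peer:
    def __init__(self, layers):
        self.layers = layers  # this peer's slice of the model

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def run_distributed(peers, x):
    for peer in peers:  # activations travel peer to peer, in order
        x = peer.forward(x)
    return x

# Demo: a "model" of six add-one layers split across three peers
layers = [lambda v: v + 1 for _ in range(6)]
peers = [Peer(layers[i:i + 2]) for i in range(0, 6, 2)]
print(run_distributed(peers, 0))  # prints 6
```

The real system adds the hard parts this sketch omits — peer discovery, fault tolerance when volunteers drop out, and moving tensors over the network — but the division of labor is the same.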

“Modern generative models are computationally expensive to train and run. Some back-of-the-envelope estimates put daily ChatGPT expenditure at around $3 million,” Chandra Bhagavatula, a senior research scientist at the Allen Institute for AI, said via email. “To make this commercially viable and accessible more widely, it will be important to address this.”
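Estimates like that $3 million figure are easy to sanity-check against your own assumptions. The inputs below — query volume and per-query compute cost — are illustrative guesses, not reported figures, and are chosen only to show how such a back-of-the-envelope calculation is built:

```python
# Back-of-the-envelope estimate of daily serving cost for a large
# chatbot. Both inputs are assumptions for illustration, not data.
queries_per_day = 10_000_000   # assumed daily query volume
cost_per_query = 0.30          # assumed compute cost per response, in dollars

daily_cost = queries_per_day * cost_per_query
print(f"${daily_cost:,.0f} per day")  # prints $3,000,000 per day
```

Halve the per-query cost or the traffic and the estimate halves with it, which is why such figures are best read as order-of-magnitude guides rather than precise accounting.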

Chandra points out, however, that large labs will continue to have competitive advantages as long as the methods and data remain proprietary. In a recent example, OpenAI released Point-E, a model that can generate 3D objects given a text prompt. But while OpenAI open sourced the model, it didn’t disclose the sources of Point-E’s training data or release that data.

Point-E generates point clouds.

“I do think the open source efforts and decentralization efforts are absolutely worthwhile and are to the benefit of a larger number of researchers, practitioners and users,” Chandra said. “However, despite being open-sourced, the best models are still inaccessible to a large number of researchers and practitioners due to their resource constraints.”

AI companies buckle down for incoming regulations

Regulation like the EU’s AI Act may change how companies develop and deploy AI systems moving forward. So could more local efforts like New York City’s AI hiring statute, which requires that AI and algorithm-based tech for recruiting, hiring or promotion be audited for bias before being used.

Chandra sees these regulations as necessary especially in light of generative AI’s increasingly apparent technical flaws, like its tendency to spout factually wrong info.

“This makes generative AI difficult to apply for many areas where mistakes can have very high costs — e.g. healthcare. In addition, the ease of generating incorrect information creates challenges surrounding misinformation and disinformation,” she said. “[And yet] AI systems are already making decisions loaded with moral and ethical implications.”

Next year will only bring the threat of regulation, though — expect much more quibbling over rules and court cases before anyone gets fined or charged. But companies may still jockey for position in the most advantageous categories of upcoming laws, like the AI Act’s risk categories.

The rule as currently written divides AI systems into one of four risk categories, each with varying requirements and levels of scrutiny. Systems in the highest risk category, “high-risk” AI (e.g. credit scoring algorithms, robotic surgery apps), have to meet certain legal, ethical and technical standards before they’re allowed to enter the European market. The lowest risk category, “minimal or no risk” AI (e.g. spam filters, AI-enabled video games), imposes only transparency obligations like making users aware that they’re interacting with an AI system.

Os Keyes, a Ph.D. candidate at the University of Washington, expressed worry that companies will aim for the lowest risk level in order to minimize their own responsibilities and visibility to regulators.

“That concern aside, [the AI Act] is really the most positive thing I see on the table,” they said. “I haven’t seen much of anything out of Congress.”

But investments aren’t a sure thing

Gahntz argues that, even if an AI system works well enough for most people but is deeply harmful to some, there’s “still a lot of homework left” before a company should make it widely available. “There’s also a business case for all this. If your model generates a lot of messed up stuff, consumers aren’t going to like it,” he added. “But obviously this is also about fairness.”

It’s unclear whether companies will be persuaded by that argument going into next year, particularly as investors seem eager to put their money behind any promising generative AI.

In the midst of the Stable Diffusion controversies, Stability AI raised $101 million at an over-$1 billion valuation from prominent backers including Coatue and Lightspeed Venture Partners. OpenAI is said to be valued at $20 billion as it enters advanced talks to raise more funding from Microsoft. (Microsoft previously invested $1 billion in OpenAI in 2019.)

Of course, those could be exceptions to the rule.

Image Credits: Jasper

Outside of self-driving companies Cruise, Wayve and WeRide and robotics firm MegaRobo, the top-performing AI firms in terms of money raised this year were software-based, according to Crunchbase. Contentsquare, which sells a service that provides AI-driven recommendations for web content, closed a $600 million round in July. Uniphore, which sells software for “conversational analytics” (think call center metrics) and conversational assistants, landed $400 million in February. Meanwhile, Highspot, whose AI-powered platform provides sales reps and marketers with real-time and data-driven recommendations, nabbed $248 million in January.

Investors may well chase safer bets like automating analysis of customer complaints or generating sales leads, even if these aren’t as “sexy” as generative AI. That’s not to suggest there won’t be big attention-grabbing investments, but they’ll be reserved for players with clout.

What to expect from AI in 2023 by Kyle Wiggers originally published on TechCrunch


Indian fintech Money View valued at $900 million in new funding

Indian fintech Money View said on Monday it has raised $75 million in a new funding round, its second this year, despite the market slump as it looks to scale its core credit business and build more products in the South Asian market.

Apis Partners led Money View’s Series E funding round, valuing the Bengaluru-headquartered startup at $900 million, up from $615 million in a $75 million Series D funding round in March. The startup said in a statement that the round hasn’t closed and it expects to raise more capital.

TechCrunch reported in October that Money View was engaging with investors to raise up to $150 million at a valuation of $1 billion. The startup said today that existing backers Tiger Global, Winter Capital and Evolence also participated in the funding.

The eight-year-old startup offers personalized credit products and financial management services to customers who otherwise don’t have a credit score and so can’t get credit from banks and other financial institutions. India’s credit bureau data is thin, leaving most individuals in the South Asian market without the history lenders use to judge creditworthiness. Fintechs lend to these customers using modern underwriting systems, and operate through a maze of regulatory arbitrage that is increasingly being closed off.

Money View is currently disbursing about $1.2 billion in loans on an annualized basis and managing over $800 million, it said. The startup, which says it has been profitable for the past two years, clocked revenue of $30.6 million and a profit of $2.14 million in the financial year that ended in March, according to regulatory disclosures.

“Our performance and growth over the past two years has allowed us to drive our mission of true financial inclusion in India with great success,” said Puneet Agarwal, founder and chief executive of Money View, in a statement. “We are thrilled to have Apis Partners join us in our journey and with their support, we look forward to becoming India’s leading online credit platform with innovative and holistic financial solutions.”

Money View plans to deploy the fresh funds to grow its credit business, broaden its product portfolio with services such as digital bank accounts, insurance, wealth management and hire more talent, it said.

Its new funding comes at a time when dealflow activity has slowed dramatically in the South Asian market as investors grow cautious about writing new checks and reevaluate their underwriting models after valuations of publicly listed firms took a tumble.

“Money View has achieved great success already, with their credit products democratising the access for millions of customers in India, and we are truly excited to partner with the company at this stage of its journey,” said Matteo Stefanel, Co-founder and Managing Partner at Apis Partners, in a statement.

Indian fintech Money View valued at $900 million in new funding by Manish Singh originally published on TechCrunch

Don’t stop writing, or your words will vanish off the page

The year is coming to an end, and with it, I continue an annual tradition of writing an “x words about x” piece. This year, that means trying to cram the year 2022 into 2,022 words. As you might imagine, that’s a lot. I usually write 5,000-6,000 words and then have to ruthlessly edit it down to try to hit my word cap. Part of the challenge, though, is to re-live all the highs and lows of the year without getting overwhelmed. The trick is to keep your fingers moving no matter what. And recently, I found an app for that, which I’d love to share with y’all. ’Tis the season, after all.

As a writer, you’ll often find yourself reaching for the save button. It is your lifeline, after all. A short power cut or a computer snafu is all it takes to make your hard work crumble to nothingness. But what if there was no save button? What if there was no staring out of the window for inspiration, no pauses to think of a witty turn of phrase, and no way to stop for a break? What if this was like the movie Speed 2, except instead of a boat, you’re on a bus? What if, when you slow down, it explodes? Well. Welcome to the world of extreme writing.

That’s the premise for the Most Dangerous Writing App. If you stop writing for more than a couple of seconds, you’ll see your writing fade out of existence. And, if you’re particularly slow about it, that’s the end. Your words disappear into the digital ether, never to be seen again. Don’t pick up your phone. Don’t react to a notification. If the FedEx guy finally turned up with that parcel you’ve been waiting for, well, TOUGH: there’s no way to slow down for even a moment.
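The mechanic is simple enough to reconstruct. Here’s a toy model of the rule as I understand it — my own sketch, not the app’s actual code, and the five-second grace period is a guess: track the time of the last keystroke, and if the gap ever exceeds the grace period, the whole draft is wiped.

```python
import time

class DangerousBuffer:
    """Toy reconstruction of the Most Dangerous Writing App's core rule:
    pause for longer than the grace period and your draft is wiped."""

    def __init__(self, grace_seconds=5.0, clock=time.monotonic):
        self.grace = grace_seconds
        self.clock = clock  # injectable clock, so the rule is testable
        self.text = ""
        self.last_key = clock()

    def type(self, chars):
        now = self.clock()
        if now - self.last_key > self.grace:
            self.text = ""  # you stopped too long: everything vanishes
        self.text += chars
        self.last_key = now

# Simulated session using a fake clock instead of real time
t = [0.0]
buf = DangerousBuffer(grace_seconds=5.0, clock=lambda: t[0])
buf.type("Keep your fingers moving")
t[0] = 2.0
buf.type(" no matter what.")  # within the grace period: text survives
t[0] = 30.0
buf.type("Oops.")             # a 28-second pause: the draft is gone
print(buf.text)               # prints "Oops."
```

The real app softens the blow with a visible fade-out before deletion, but the incentive is identical: the only safe state is typing.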

The Most Dangerous Writing App is an awesome idea: it encourages you to stay focused, and it’s actually a great tool for finding, and staying in, a flow state. Being forced to put a few words down every second means that the fear of the empty page melts away, and having to keep writing helps keep you on your toes.

In many ways, the app reminds me of National Novel Writing Month (NaNoWriMo), where you need to bang out a 50,000-word novel. Or something. I can’t remember. Usually, I’d Google it to make sure I got the right word count, but I can’t stop because if I open a new tab I will lose what I’ve written so far in this article. Argh! But okay, the point is that it’ll both help you start writing and actually force you to finish a piece as well. Because, well, if you don’t finish it, you lose it. And I don’t want that. Nobody wants that.

It’s not exactly a very advanced app, but it is a surprising and fun way to force yourself to start writing and to keep writing. It made me think about how I write very differently. Incidentally, it proves that I am, in fact, able to write for five minutes straight as well, which is a pretty beautiful gift to be able to give to myself.

I am also sure that the TechCrunch editors will be delighted at me writing for five minutes straight before hitting publish, pausing for just long enough to add some links and a featured image, but without letting an editor fix my typos. Sorry, Henry.

Don’t stop writing, or your words will vanish off the page by Haje Jan Kamps originally published on TechCrunch

This year in tech felt like a simulation

This year in tech, too much happened and very little of it made sense. It was like we were being controlled by a random number generator that would dictate the whims of the tech industry, leading to multiple “biggest news stories of the year” happening over the course of a month, all completely disconnected from one another.

I can’t stop thinking about a very good tweet I saw last month, which encapsulated the absurdity of the year — it was something along the lines of, “Meta laid off 11,000 people and it’s only the third biggest tech story of the week.” Normally, a social media giant laying off 13% of its workforce would easily be the week’s top story, but this was the moment when FTX went bankrupt and everyone was impersonating corporations on Twitter because somehow Elon Musk didn’t think through how things would go horribly wrong if anyone could buy a blue check. Oh, good times.

When I say it feels like we’re living in a simulation, what I mean is that sometimes, I hear about the latest tech news and feel like someone threw some words in a hat, picked a few, and tried to connect the dots. Of course, that’s not what’s really happening. But in January, would you have believed me if I told you that Twitter owner Elon Musk polled users to decide that he would unban Donald Trump?

These absurd events in tech have consequences. Crypto collapses like FTX’s bankruptcy and the UST scandal have harmed actual people who invested significant sums of money into something that they believed to be a good investment. It’s funny to think about how you’d react ten years ago if someone told you that Meta (oh yeah, that’s what Facebook is called now) is losing billions of dollars every quarter to build virtual reality technology that no one seems to want. But those management decisions are not a joke for the employees who lost their jobs because of those choices.

Where does this leave us? We’re in a moment in tech history where nothing is too absurd to be possible. That’s both inspiring and horrifying. It’s possible for a team of Amazon fulfillment center workers in Staten Island to win a union election, successfully advocating for themselves in the face of tremendous adversity. It’s also possible for Elon Musk to buy Twitter for $44 billion.

AI technology like Stable Diffusion and ChatGPT encapsulate this fragile balance between innovation and horror. You can make beautiful artworks in seconds, and you can also endanger the livelihoods of working artists. You can ask an AI chatbot to teach you about history, but there’s no way to know if its response is factually accurate (unless you do further research, in which case, you could’ve just done your own research to begin with).

But perhaps part of the reason why AI generators have garnered such mainstream appeal is that they almost feel natural to us. This year’s tech news feels so bizarre that it might as well have been generated by ChatGPT.

Or maybe reality is actually stranger than anything an AI could come up with. I asked ChatGPT to write some headlines about tech news for me, and it came up with these snoozers (in addition to some factually inaccurate headlines, which I omitted for the sake of journalism):

“Apple’s iOS 15 update brings major improvements to iPhones and iPads”
“Amazon’s new line of autonomous delivery robots causes controversy”
“Intel announces new line of processors with advanced security features”

Pretty boring! Here are some actual real things that happened in tech this year:

Tony the Tiger made his debut as a VTuber.
Someone claimed to be a laid-off Twitter employee named Rahul Ligma, and a herd of reporters did not get the joke, which meant I had to explain the “ligma” joke on like four different tech podcasts.
Three people got arrested for operating a Club Penguin clone.
One of the Department of Justice’s main suspects in a $3.6 billion crypto money laundering scheme is an entrepreneur-slash-rapper named Razzlekhan.
The new Pokémon game has a line of dialogue with the word “cheugy.”
Donald Trump dropped an NFT collection.
A bad Twitter feature update impacted the stock of a pharmaceutical company.
Elon Musk’s greatest rival is a University of Central Florida sophomore.
FTC chair Lina Khan said that Taylor Swift did more to educate Gen Z about antitrust law than she ever could.
Meta is selling a $1,499 VR headset to be used for remote work.
The UK Treasury made a Discord account to share public announcements but was immediately spammed with people using emoji reactions to make dirty jokes (and speaking of the UK, there have been three different Prime Ministers since September.)

These are strange times. If the rules are made up and the points don’t matter, let’s at least hope that if the absurdity continues into 2023, the tech news is more amusing than harmful. I want more Chris Pratt voicing live action Mario, and fewer tech CEOs being sentenced for fraud. Is that too much to ask?

This year in tech felt like a simulation by Amanda Silberling originally published on TechCrunch
