Is ChatGPT a ‘virus that has been released into the wild’?

More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco, soon after he’d left his role as president of Y Combinator to become CEO of OpenAI, the AI company he co-founded in 2015 with Elon Musk and others.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so incomprehensibly enormous that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Elon Musk, a cofounder of the outfit, has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points in the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as smart as people, the tech that OpenAI has since released into the world comes close enough that some critics fear it could be our undoing (and more sophisticated tech is reportedly coming).

Indeed, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are struggling to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”

We talked with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past and this obviously is a huge leap beyond. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation to whatever was used for training it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It does feel like it could eat up the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s exactly what we can not only expect but should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech, and it’s not clear who steps in to address it.

I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’

I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine-learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.

And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What’s the thinking at MIT?

Andy McAfee and his group over there are more sanguine and take the more orthodox view that any time we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework assignments because we no longer know whether they’re cheating or not, that means that everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller and at the exact time that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service anymore.

What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m much less strong a proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.


Daily Crunch: Grocery delivery app Getir bags rival Gorillas in a $1.2B acquisition

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

We’ve made it to Friday, folks. If you’re anything like me, that means finishing the workday with a well-deserved nap and reruns of “The Office.” Tweet, toot or Post at me about your favorite way to end the week.

Mark your calendar for a Twitter Space event on Tuesday, December 13 at 1 p.m. PST/4 p.m. EST featuring Builders VC investor Andrew Chen, who will speak with Walter about the role tech reporting plays in shaping ecosystems.

See you Monday! — Christine

The TechCrunch Top 3

Knock, knock, there’s a competitor at your door: Significant M&A in the food delivery space was only a matter of time, and Romain has details about a big one — Getir acquiring its competitor Gorillas in a deal that the Financial Times originally reported is valued at $1.2 billion.
Bye bye, Twitter Toolbox: Twitter tried to make it work with third-party developers, but alas, the company decided to make a clean break by shutting down some of its developer initiatives, including Toolbox. Ivan has more.
Tesla’s China boss gets a new gig…a factory: Rita is following a story about Tom Zhu, who oversaw Tesla’s China Gigafactory and has now been tapped to work his magic stateside leading Gigafactory Texas.

Startups and VC

More layoffs this week as Ingrid reports on Primer, an e-commerce infrastructure startup based in the U.K. that announced it would lay off one-third of its staff amid a restructuring to manage current and anticipated market conditions.

Meanwhile, Haje believes you need the perfect summary slide for your pitch deck and has found some for you (requires a TechCrunch+ subscription).

And we have three more for you:

Going dark: Kirsten reports that executives at Brodmann17, a computer vision technology startup, made the decision to shut down after realizing it would not be able to bring its products to market.
What are your symptoms?: Japanese health tech startup Ubie secured $19 million in new funding to bring its AI-powered symptom checker technology to the U.S., Kate reports.
Making that dollar work for it: Kate also has a story on Akros Technologies, which raised $2.3 million in new capital to inject some artificial intelligence into asset management.

How to respond when a VC asks about your startup’s valuation

Image Credits: boschettophotography / Getty Images

When a VC inevitably asks about your valuation expectations, it is a trick question: If your response is too high, it’s a red flag, whereas a lowball figure undervalues the company.

“We’re letting the market price this round” is an appropriate reply, but only if you’ve already gathered substantial data points from other investors — and can fire back with a few questions of your own, says Evan Fisher, founder of Unicorn Capital.

“If that’s all you say, you’re in trouble because it can also be interpreted as ‘we don’t have a clue’ or ‘we’ll take what we’re given,’” said Fisher.

Instead of going in cold, he advises founders to pre-pitch investors from their next round and use takeaways from those conversations to shape current valuations.

In the article, Fisher includes sample questions “you will want to ask every VC you speak with,” along with other tips that will help “when they pop the valuation question.”

Three more from the TC+ team:

Taking that exit: Tim writes about Vanguard’s decision to get out of a carbon emissions initiative and what prompted the move.
SPAC is back: Getaround is now a public company after braving a chilly SPAC environment that has left other companies in the cold. Alex has more.
Bridging blockchain and the physical world: That’s what Jacquelyn writes Solana founders want to see happen as the company and others pick up the pieces and move on from the FTX collapse.

TechCrunch+ is our membership program that helps founders and startup teams get ahead of the pack. You can sign up here. Use code “DC” for a 15% discount on an annual subscription!

Big Tech Inc.

We are over here with our mouths open upon learning that crypto news publication The Block received some significant — and undisclosed — loans from former FTX CEO Sam Bankman-Fried’s company Alameda Research. As a result, CEO Michael McCaffrey is out and Bobby Moran, the company’s chief revenue officer, takes over the role. But as Jacquelyn and Alex write, the conflict of interest will take some time to repair, if it can be repaired at all.

As we wait for the Federal Trade Commission to send news of Microsoft’s fate with Activision, Kyle writes that the cloud services giant acquired a different company, this time Lumenisity, a startup developing high-speed cables for transmitting data.

And three more for you:

Looking for that special gift?: Natasha L has some suggestions for your fitness-loving buddy, while Haje has some gift ideas to ensure your other friends are well caffeinated.
Meet Slack’s new CEO: When Slack announced that Lidiane Jones would be its new CEO, Ron wanted to shed some light on her career and how she got where she is today.
Exposed: Carly reports that CommonSpirit Health confirmed that data from over 620,000 patients was stolen during a ransomware attack in October.


Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation

Twitter’s new owner, Elon Musk, is feverishly promoting his “Twitter Files”: selected internal communications from the company, laboriously tweeted out by sympathetic amanuenses. But Musk’s obvious conviction that he has released some partisan kraken is mistaken — far from conspiracy or systemic abuse, the files are a valuable peek behind the curtain of moderation at scale, hinting at the Sisyphean labors undertaken by every social media platform.

For a decade companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes equally out of reach of bad actors, regulators, and the press.

To reveal too much would be to expose the processes to abuse by spammers and scammers (who indeed take advantage of every leaked or published detail), while to reveal too little leads to damaging reports and rumors as they lose control over the narrative. Meanwhile they must be ready to justify and document their methods or risk censure and fines from government bodies.

The result is that while everyone knows a little about how exactly these companies inspect, filter, and arrange the content posted on their platforms, it’s just enough to be sure that what we’re seeing is only the tip of the iceberg.

Sometimes there are exposés of the methods we suspected — by-the-hour contractors clicking through violent and sexual imagery, an abhorrent but apparently necessary industry. Sometimes the companies overplay their hand, as with repeated claims that AI is revolutionizing moderation, followed by reports that AI systems for this purpose are inscrutable and unreliable.

What almost never happens — generally companies don’t do this unless they’re forced to — is that the actual tools and processes of content moderation at scale are exposed with no filter. And that’s what Musk has done, perhaps to his own peril, but surely to the great interest of anyone who ever wondered what moderators actually do, say, and click as they make decisions that may affect millions.

Pay no attention to the honest, complex conversation behind the curtain

The email chains, Slack conversations, and screenshots (or rather shots of screens) released over the last week provide a glimpse at this important and poorly understood process. What we see is a bit of the raw material, which is not the partisan illuminati some expected — though it is clear, from its highly selective presentation, that this is what we are meant to perceive.

Far from it: the people involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that the choice to limit or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

Leading up to the choice to temporarily restrict the Hunter Biden laptop story — probably at this point the most contentious moderation decision of the last few years, behind banning Trump — there is neither the partisanship nor conspiracy insinuated by the bombshell packaging of the documents.

Instead we find serious, thoughtful people attempting to reconcile conflicting and inadequate definitions and policies: What constitutes “hacked” materials? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we do, if we don’t limit? What precedents do we set or break?

The answers to these questions are not at all obvious, and are the kind of thing usually hashed out over months of research and discussion, or even in court (legal precedents affect legal language and repercussions). And they needed to be made fast, before the situation got out of control one way or the other. Dissent from within and without (from a U.S. Representative no less — ironically, doxxed in the thread along with Jack Dorsey in violation of the selfsame policy) was considered and honestly integrated.

“This is an emerging situation where the facts remain unclear,” said former Trust and Safety chief Yoel Roth. “We’re erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as they have been presented. Others say it’s not supported by their reading of the policy. One says they need to make the ad-hoc basis and extent of the action very clear since it will obviously be scrutinized as a partisan one. Deputy General Counsel Jim Baker calls for more information but says caution is warranted. There’s no clear precedent; the facts are at this point absent or unverified; some of the material is plainly non-consensual nude imagery.

“I believe Twitter itself should curtail what it recommends or puts in trending news, and your policy against QAnon groups is all good,” concedes Rep. Ro Khanna, while also arguing the action in question is a step too far. “It’s a hard balance.”

Neither the public nor the press have been privy to these conversations, and the truth is we’re as curious, and largely as in the dark, as our readers. It would be incorrect to call the published materials a complete or even accurate representation of the whole process (they are blatantly, if ineffectively, picked and chosen to fit a narrative), but even such as they are we are more informed than we were before.

Tools of the trade

Even more directly revealing was the next thread, which carried screenshots of the actual moderation tooling used by Twitter employees. While the thread disingenuously attempts to equate the use of these tools with shadow banning, the screenshots do not show nefarious activity, nor need they in order to be interesting.

Image Credits: Twitter

On the contrary, what is shown is compelling for the very reason that it is so prosaic, so blandly systematic. Here are the various techniques all social media companies have explained over and over that they use, but whereas before we had it couched in PR’s cheery diplomatic cant, now it is presented without comment: “Trends Blacklist,” “High Profile,” “DO NOT TAKE ACTION” and the rest.

Meanwhile Yoel Roth explains that the actions and policies need to be better aligned, that more research is required, that plans are underway to improve:

The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that… we’re going to need to make a more robust case to get this into our repertoire of policy remediations – especially for other policy domains.

Again the content belies the context it is presented in: these are hardly the deliberations of a secret liberal cabal lashing out at its ideological enemies with a ban hammer. It’s an enterprise-grade dashboard like you might see for lead tracking, logistics, or accounts, being discussed and iterated upon by sober-minded persons working within practical limitations and aiming to satisfy multiple stakeholders.

As it should be: Twitter has, like its fellow social media platforms, been working for years to make the process of moderation efficient and systematic enough to function at scale. Not just so the platform isn’t overrun with bots and spam, but in order to comply with legal frameworks like FTC orders and the GDPR. (Of which the “extensive, unfiltered access” outsiders were given to the pictured tool may well constitute a breach. The relevant authorities told TechCrunch they are “engaging” with Twitter on the matter.)

A handful of employees making arbitrary decisions with no rubric or oversight is no way to moderate effectively or meet such legal requirements; neither (as the resignation of several on Twitter’s Trust & Safety Council today testifies) is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that’s certainly what seems to be shown by the screenshots Musk has caused to be published.

What isn’t shown by the documents is any kind of systematic bias, which Musk’s stand-ins insinuate but don’t quite manage to substantiate. But whether or not it fits into the narrative they wish it to, what is being published is of interest to anyone who thinks these companies ought to be more forthcoming about their policies. That’s a win for transparency, even if Musk’s opaque approach accomplishes it more or less by accident.


Proposed legislation would force US higher education endowments to reveal where they invest

In January, Missouri Representative Emanuel Cleaver will introduce the Endowment Transparency Act into Congress, a move that could, frankly, change everything.

Cleaver’s proposed legislation seeks to amend the Higher Education Act of 1965 to require universities and colleges to share information about how and whether they are allocating endowment investment funds to women- and minority-owned firms. These higher education institutions, which collectively manage more than $821 billion in assets, have been notoriously secretive about where they invest money and, despite calls for change, many have refused transparency.

“It was noble to desegregate a student body and faculty, but it is nobler to desegregate economics because there are reasons things are as they are.” — Rep. Emanuel Cleaver

Educational behemoths play a critical role in the venture market as limited partners. TechCrunch previously reported on the pressing need for those at the top to bear more responsibility for the uneven venture landscape that disproportionately shuts out women and people of color. Those calls have turned to cries for more diverse fund managers and more money invested into minority-led funds — or at least for criteria by which LPs hold venture general partners accountable for the types of companies in which they invest.

Harrowing stats on the number of diverse fund managers, paired with the dearth of capital allocated to minority fund managers and founders, are indicative of the role endowments can play in maintaining existing inequities. Speaking with TechCrunch, Cleaver said that when he attempted to reach out to higher education institutions to discuss their reticence about asset manager diversity and allocation, even he, at times, had doors shut in his face.

“These colleges brag about inclusion because they have minorities on their faculty and an inclusive student body, so they think everything is OK,” he said. “But let me just say, it was noble to desegregate a student body and faculty, but it is nobler to desegregate economics because there are reasons things are as they are.”


The Block founder says he’s exploring ways to get the publication into ‘trustworthy’ hands

Crypto news publication The Block announced today that its CEO, Michael McCaffrey, has resigned after failing to disclose a series of loans from former FTX CEO Sam Bankman-Fried’s company Alameda Research. Axios first reported the news.

The capital was used in part to finance an employee-led buyout of the company, among other, more extracurricular, activities.

McCaffrey will be replaced by the company’s chief revenue officer, Bobby Moran, effective immediately, according to a statement. “No one at The Block had any knowledge of this financial arrangement besides Mike,” Moran wrote.

McCaffrey confirmed that in a series of tweets Friday: “I didn’t disclose the loan to anyone. Absolutely no one at The Block knew about the financial arrangement between my holding company and SBF, including the editorial and the research teams.” He claimed his rationale for this decision was to not “compromise the objectivity” of coverage surrounding SBF.

The Block was founded in 2018 by Mike Dudas. In 2020, McCaffrey took over as CEO. By April 2021, McCaffrey had led a buyout of all of The Block’s investors, making the firm employee-owned, with himself as the biggest stakeholder. Even today, he remains the company’s majority shareholder.

Dudas told TechCrunch in an exchange after the news came out that he is “exploring what, if any, avenues exist to get The Block into trustworthy ownership.”

Dudas explained to TechCrunch that at the time of The Block’s sale, his “understanding was that Mike McCaffrey’s family was wealthy and loaned him money to buy out [his stake] and the VCs so the team could assume full independent ownership.”

In a tweet, Dudas said, “I’d like to buy The Block back.”

Media companies must disclose conflicts of interest when they arise; even the appearance of conflicts can prove damaging to a brand, as they can undercut reader trust in its impartiality.

“Mike’s decision to take out a loan from SBF and not disclose that information demonstrates a serious lack of judgment,” Moran wrote. “It undermines The Block’s reputation and credibility, especially that of our reporters and researchers, as well as our efforts at industry-leading transparency.”

McCaffrey received three loans for a total of $43 million from Bankman-Fried/Alameda. The first loan was for $12 million and was used to buy out The Block and make him CEO in 2021. The second was for $15 million in January to fund operations at the media and data research company. Finally, earlier this year, a loan for $16 million was provided to McCaffrey to buy personal real estate in the Bahamas.

The Block funds its newsroom via a combination of advertisements and research. The publication’s data section is a tool that TechCrunch utilizes from time to time.

That SBF-related capital went into a media company is not a surprise; the former web3 mogul was prolific in his media investments. However, given the lack of disclosure relating to the loans in question, this particular episode is different. Current staff and former executives were incensed about the transactions, the lack of transparency and effectively being lied to by their leader for a lengthy period.


Microsoft acquires startup developing high-speed cables for transmitting data

Microsoft today announced that it acquired Lumenisity, a U.K.-based startup developing “hollow core fiber (HCF)” technologies primarily for data centers and ISPs. Microsoft says that the purchase, the terms of which weren’t disclosed, will “expand [its] ability to further optimize its global cloud infrastructure” and “serve Microsoft’s cloud platform and services customers with strict latency and security requirements.”

HCF cables are optical fibers built around a hollow, air-filled core rather than a solid glass one. The concept has been around since the ’90s, but what Lumenisity brings to the table is a proprietary design with an air-filled center channel surrounded by a ring of glass tubes. The idea is that light can travel faster through air than through glass; in a trial with Comcast in April, a single strand of Lumenisity HCF was reportedly able to deliver traffic rates ranging from 10 Gbps to 400 Gbps.
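For a sense of where the latency gains come from, here is a back-of-envelope sketch of the propagation math. The group indices below are assumptions (roughly 1.47 for conventional solid-core silica fiber, just over 1.0 for an air-filled core), not figures from Lumenisity; index alone yields roughly a one-third reduction in propagation delay, so the larger “up to 50%” figure BT cites below presumably folds in other factors as well.

```python
# Back-of-envelope comparison of propagation delay in solid-core
# vs. hollow core fiber. The group indices are assumptions
# (~1.47 for standard silica fiber, ~1.003 for an air-filled core);
# real values depend on the specific fiber design.

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km: float, group_index: float) -> float:
    """One-way propagation delay in milliseconds over a fiber span."""
    return distance_km * group_index / C_VACUUM_KM_S * 1_000

span_km = 1_000  # a hypothetical long-haul span
solid = one_way_delay_ms(span_km, 1.47)    # ~4.90 ms
hollow = one_way_delay_ms(span_km, 1.003)  # ~3.35 ms

print(f"solid core:  {solid:.2f} ms")
print(f"hollow core: {hollow:.2f} ms")
print(f"reduction:   {1 - hollow / solid:.0%}")  # ~32%
```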

“HCF can provide benefits across a broad range of industries including healthcare, financial services, manufacturing, retail and government,” Girish Bablani, CVP of Microsoft’s Azure Core business, wrote in a blog post. “For the public sector, HCF could provide enhanced security and intrusion detection for federal and local governments across the globe. In healthcare, because HCF can accommodate the size and volume of large data sets, it could help accelerate medical image retrieval, facilitating providers’ ability to ingest, persist and share medical imaging data in the cloud. And with the rise of the digital economy, HCF could help international financial institutions seeking fast, secure transactions across a broad geographic region.”

An illustration of Lumenisity’s cable design. Image Credits: Lumenisity

Lumenisity was founded in 2017 as a spinoff from the Optoelectronics Research Centre at the University of Southampton to commercialize research in HCF. Prior to the acquisition, the startup raised £12.5 million (~$15.35 million) in funding across several funding rounds from investors, including the Business Growth Fund and Parkwalk Advisors.

Lumenisity claims its fibers are deployed in customer networks “with the longest spans ever reported utilizing HCF technology.” Beyond Comcast, U.K. operator BT recently piloted Lumenisity’s tech, which BT claimed at the time had the potential to slash latency by up to 50% compared to traditional fiber. Infrastructure company euNetworks Fiber UK Limited is also testing Lumenisity cable to serve the London Stock Exchange.

Earlier this month, Lumenisity completed construction of a 40,000-square-foot HCF manufacturing facility in Romsey, U.K., which the company says will enable “scaled-up” production of its HCF technology in the future.

“This is the end of the beginning, and we are excited to start our new chapter as part of Microsoft to fulfil this technology’s full potential and continue our pursuit of unlocking new capabilities in communication networks,” Lumenisity wrote in a statement on its website. “We are proud to be acquired by a company with a shared vision that will accelerate our progress in the hollowcore space.”

