Camera maker Canon leans into software at CES

Depending on whether you spend most of your time in hospitals, offices or in the great outdoors, when you hear ‘Canon,’ your mind will likely go to medical scanning equipment, high-end printers, or cameras. At CES this year, the 85-year-old company is leaning in a new direction, with an interesting focus on software applications.

At the show, the company showed off a direction it has been hinting at for some time, this time relying far less on its own hardware and more on the software it has developed, in part as a response to the COVID-19 pandemic casting a shadow over people’s ability to connect. To the chorus of ‘meaningful communication’ and ‘powerful collaboration,’ the Japanese imaging giant appears to be plotting a new course for what’s next.

“Canon is creating groundbreaking solutions that help people connect in more ways than we ever could have imagined, redefining how they work and live at a time when many of them are embracing a hybrid lifestyle,” said Kazuto Ogawa, president and CEO of Canon U.S.A., Inc., in a press briefing at CES 2023. “Canon’s ultimate role is to bring people closer together by revealing endless opportunities for creators. Under our theme of ‘Limitless Is More,’ we will show CES 2023 attendees what we are creating as a company focused on innovation and a world without limits.”

Among other things, Canon showed off a somewhat gimmicky immersive experience tied in with M. Night Shyamalan’s upcoming thriller movie, Knock at the Cabin. The very Shyamalanesque movie trailer will give you a taster of the vibe. At the heart of things, however, Canon is tapping into a base desire in humanity: to feel connected to one another. The company is desperate to show off how its solutions can “remove the limits humanity faces to create more meaningful communication,” through four technologies on display at the trade show this year.

Canon U.S.A. president and CEO Kazuto Ogawa on stage at CES 2023 along with M. Night Shyamalan. Image Credit: Haje Kamps / TechCrunch

3D calling: Kokomo

The flagship solution Canon is showing off is Kokomo, which the company describes as a first-of-its-kind immersive VR software package. It is designed to combine VR with an immersive calling experience. The solution is pretty elegant: Using a VR headset and a smartphone, the Kokomo software enables users to see and hear one another in real time, with their live appearance and expression, in a photo-real environment.

The Kokomo solution brings 3D video calling to a home near you. Image Credit: Canon

In effect, the software package scans your face to learn what you look like, then turns you into a photo-realistic avatar. The person you are in a call with can see you – sans VR headset – with your physical appearance and facial expressions intact. The effect is that of a 3D video call. At the show, Canon is demoing the tech by letting visitors step into a 1:1 conversation with the Knock at the Cabin characters.

Realtime 3D video: Free Viewpoint

Aimed at the sports market, Free Viewpoint is a solution that combines more than 100 high-end cameras with a cloud-based system that makes it possible to move a virtual camera to any location. The software takes all the video feeds and creates a point-cloud-based 3D model, which enables a virtual camera operator to create a number of angles that would otherwise have been impossible: drone-like replay footage swooping into the action, for example, or detailed in-the-thick-of-things footage that lets viewers see plays from the virtual perspective of one of the players.
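For the technically curious, the core geometric trick behind a free-moving virtual camera is standard pinhole projection: once a point cloud of the scene exists, you can place an imaginary camera anywhere and project the 3D points into its image plane. Below is a minimal Python sketch of that operation; the camera model, constants and function names are illustrative assumptions, not Canon’s actual (proprietary) pipeline.

```python
# Illustrative only: project a reconstructed point cloud into a freely
# placed virtual camera -- the basic operation behind "drone-like" replays.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a world-to-camera rotation from a camera position and target."""
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    # Rows are the camera's x/y/z axes expressed in world coordinates.
    return np.stack([right, true_up, fwd])

def project(points, eye, target, f=1000.0, cx=960.0, cy=540.0):
    """Pinhole-project Nx3 world points to pixel coordinates plus depth."""
    R = look_at(eye, np.asarray(target, dtype=float))
    cam = (points - eye) @ R.T          # world frame -> camera frame
    in_front = cam[:, 2] > 1e-6         # keep only points ahead of the camera
    cam = cam[in_front]
    u = f * cam[:, 0] / cam[:, 2] + cx  # perspective divide
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1), cam[:, 2]

# A virtual "drone" swoop: move the eye along an arc while aiming at the action.
cloud = np.random.rand(10_000, 3) * [30, 5, 30]   # stand-in point cloud
for t in np.linspace(0, np.pi / 2, 5):
    eye = np.array([40 * np.cos(t), 10.0, 40 * np.sin(t)])
    pixels, depth = project(cloud, eye, target=[15, 2, 15])
    print(f"t={t:.2f}: {len(pixels)} visible points")
```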

In the USA, the system has already been installed at two NBA arenas (the homes of the Cavaliers and the Nets). The video can be broadcast live, or compiled into replay clips. Canon also points out that the system enables ‘virtual advertising and other opportunities for monetization,’ so I suppose we have that to look forward to as well.

Returning to the Knock at the Cabin theme, at CES, Canon showed off a virtual action scene captured with the Free Viewpoint video system at Canon’s Volumetric Video Studio in Kawasaki, Japan. Watching an action scene ‘through the eyes’ of various characters was a wonderfully immersive experience.

Augmented reality tech: MREAL

Canon also showed off some earlier-stage tech that isn’t quite ready for prime time yet, including MREAL. This is tech that helps build simulation-like immersive experiences, merging the real and the virtual worlds. Use cases might include pre-visualization for movies, training scenarios, and interactive mixed-reality entertainment. The company tells TechCrunch that the technology is in the market research phase.

The company is trying to figure out what to develop further and how to market the product. In other words: Who would use this, what would they use it for, and what would they be willing to pay for it?

Remote presence: AMLOS

Activate My Line of Sight (AMLOS) is what Canon is calling its solution for hybrid meeting environments, where some participants are in person, while others are off-site. If you’ve ever been in a meeting in that configuration, you’ll often find that attending remotely is a deeply frustrating experience, as the in-person meeting participants are engaging with each other, and the remote attendees are off on a screen somewhere.

Canon hopes that AMLOS can help solve that; it’s a software-and-camera set of products aiming to improve the level of engagement. It adds panning, tilting, and zooming capabilities to remote camera systems, giving remote users the ability to customize their viewing and participation experience. So far, the solution is not quite intuitive enough to overcome the barrier of not being in the room, but it’s certainly better than being part of a disembodied wall of heads on a screen.


Ottonomy’s new delivery robot gets an automatic package dispenser

The robots are slowly but surely conquering this year’s CES. During today’s press preview, Ottonomy debuted a new model being added to the New York firm’s army of delivery robots. Yeti stands out from other Ottobot models primarily thanks to the addition of a clever auto dispense mechanism designed to eliminate the need for a person to be present to receive the package. The startup calls the product “the first fully autonomous unattended delivery robot on the market.”

Once it reaches its destination, the last-mile delivery bot can drop its contents onto a doorstep or transfer them into a compatible locker for safekeeping until the human arrives to pick them up. Another interesting angle here is the potential for product returns – specifically, a customer could use the robot to get an unwanted product back to the original seller.

Yeti follows the late 2022 addition of another robot, Ottobot 2.0, which brings some interesting customization options to the table, including the ability to swap out different modular bins for different sorts of deliveries.

Image Credits: Ottonomy

The firm has a number of concurrent programs in cities across the world, including Pittsburgh, Cincinnati, Oslo and Madrid. It’s also working to expand to additional markets in the U.S., Canada, Europe and Asia. Here in the States, it’s partnered with Verizon.

“During the validation processes we ran pilots with airports, retailers and postal services which gave us the deep insights we needed on the most effective use cases and scalability,” says cofounder and CEO Ritukar Vijay. “With our strategic alignment with Verizon and other enterprises, we are in the prime position to fill the gap that companies like Amazon and Fedex were not able to. As demand and the use cases for autonomous unassisted delivery continue to grow, we are positioned to provide robots-as-a-service for restaurants, retailers and beyond.”

Ottonomy announced a $3.3 million seed raise last August.


Investors say web3 and hype are in for 2023, high valuations are out — maybe?

This past year was tumultuous for venture investors, to say the least. The ecosystem watched as startup funding dried up, held its breath as a $32 billion venture-backed company evaporated almost overnight, and witnessed one of the largest startup acquisitions of all time.

Did you hear anyone yell “bingo?” Probably not. It’s unlikely that many investors came close to predicting what would play out in 2022. But, hey, there’s always next year.

It seems we’re entering yet another interesting and tumultuous year: The crypto market is hanging on by a thread; everyone is watching with popcorn in hand to see which unicorn will be the next to tumble; and the hype around AI continues to swell.

Some think 2023 will just be the start of a venture winter and overall economic recession, while others think we could see some stabilization as things head back to normal by mid-year. But who is to say?

To find out how investors are thinking about the year ahead and what they’re planning, we asked more than 35 investors to share their thoughts. Here is a selection of their answers, lightly edited for clarity.

How is the current economic climate impacting your deployment strategy for the next year?

U.S.-based early stage investor: My goal is to deploy the same amount every year, but the climate has led to far fewer interesting companies/founders raising rounds, so I will probably deploy 20%-30% of what I want to.

Bruce Hamilton, founder, Mech Ventures: We are contemplating decreasing our check size so we can double our number of investments from 75 to 140.

Damien Steel, managing partner, OMERS Ventures: We believe there will be incredible investment opportunities available over the coming years, and are excited to continue the same pace of deployment we have had in the past. I would expect international funding into Europe to slow over the coming year as GPs are put under pressure. We view this as a great opportunity to lean in.

California-based VC: New deployments have halted for us, and remaining funds are being directed to follow-on rounds for our existing portfolio.

Ba Minuzzi, founder and CEO, UMANA House of Funds: The current economic climate has had a massive positive impact on our deployment strategy. I’m excited for Q1 2023 and the entire year of 2023 for the opportunities coming to us. The end of 2022 has been a great awakening for founders. It’s time to be disciplined with burn, and very creative with growth. Times of scarcity create the best founders.

Dave Dewalt, founder, MD and CEO, NightDragon: We won’t be changing our deployment strategy much, despite macro conditions. This is for a few reasons, most of which are rooted in the continued importance and investment in our core market category of cybersecurity, safety, security and privacy.

We see a massive market opportunity in this space which has an estimated TAM of $400 billion. This opportunity has remained strong and expanded, even as the larger economy struggles, because cyber budgets have remained highly resilient despite company cutbacks in other budget areas. For instance, in a recent survey of CISOs in our Advisor community, 66% said they expect their cyber budgets to increase in 2023.

Innovation is also still in demand above and beyond what is available today as the threat environment worsens globally. Each of these factors gives us confidence in continued investment and delivering outcomes for our LPs.

Ben Miller, co-founder, Fundrise: The economic climate will get worse before it gets better. Although the financial economy has already been repriced, with multiples moving back to historical norms, the real economy will be the next to turn downwards. That will cut back growth rates or even reduce revenue, magnifying valuation compression even more than what we’ve already seen so far.

We’re responding to these circumstances with a new solution: offering uncapped SAFEs to the most promising mid- and late-stage companies. While SAFEs are traditionally used for early stage companies, we think founders will be very receptive to extending their runways with the fastest, lowest friction investment solution available in the market.

Dave Zilberman, general partner, Norwest Venture Partners: Ignoring the macro-economic climate would be reckless. As such, given that we’re multi-stage investors, we see the current market as an opportunity to overweight early stage investments at the seed and Series A stages.

Economic headwinds won’t impede the need for more developer solutions; developers support the basis of competition in a digital world. As developer productivity and efficiency will be of even greater importance, solutions with a clear ROI will excel.

What percentage of unicorns are not actually worth $1 billion right now? How many of them do you think will fail in 2023?

Kirby Winfield, founding general partner, Ascend VC: Gotta be like 80% no longer worth $1 billion if you’re using public market comps. I think maybe 5%-10% will fail in 2023, but maybe 40% by 2025.

Ba Minuzzi, founder and CEO, UMANA House of Funds: We kicked off 2022 with five portfolio companies that had “unicorn status”, and two of those have already lost that status. I believe this data is indicative of the overall theme — that two out of every five unicorns will lose, or have lost, their $1 billion valuation. I do see this trend continuing in 2023.

Harley Miller, founder and managing partner, Left Lane Capital: Up to one-third, I would say, are decidedly worth less than that, especially for the companies whose paper valuations are between $1 billion and $2 billion. Companies with high burn rates and structurally unsound unit economics will suffer the most (e.g., quick commerce delivery). It’s not just about whether they’ll still command “unicorn status,” but rather whether or not they will be fundable, at any value, period.


Read, which lets you measure how well a meeting is going, is now a Zoom Essential App

Read, the app that lets meeting organizers read the virtual room and see how engaged (or not) participants are, is now one of Zoom’s Essential Apps. This means Zoom customers on Zoom One Pro, Business and Business Plus plans will have free access to Read’s premium features, like real-time and advanced meeting metrics, for 12 months. The app is also compatible with other video conferencing platforms such as Google Meet, Microsoft Teams and Webex.

Read is also releasing its Meeting Summary feature, which combines its sentiment analysis tools with OpenAI’s GPT language models to produce meeting summaries that are annotated with sentiment and engagement scores. Other new features include Meeting Playback, which shows when engagement increased or dropped, Read Workspace for organizations to set benchmarks for meetings and Augmented Reality, which displays engagement and talk time in each participant’s window.
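Read hasn’t published how Meeting Summary stitches those pieces together, but the general pattern is straightforward to sketch: attach per-segment engagement and sentiment scores to the transcript, then ask a GPT-style model for a summary that references them. Everything below (the field names, the prompt wording and the use of OpenAI’s text-davinci-003 completion endpoint) is an illustrative assumption, not Read’s actual implementation.

```python
# Hypothetical sketch: annotate transcript segments with engagement and
# sentiment scores, then ask an LLM for a summary that weaves them in.
import openai  # assumes the OpenAI Python client; openai.api_key must be set

segments = [  # invented data shape, not Read's real schema
    {"speaker": "Ana", "text": "Q4 numbers beat plan by 12%.", "engagement": 0.86, "sentiment": 0.70},
    {"speaker": "Ben", "text": "Churn ticked up in Europe.", "engagement": 0.41, "sentiment": -0.30},
]

def summarize_meeting(segments):
    # Inline each segment's scores so the model can annotate the summary.
    transcript = "\n".join(
        f"[{s['speaker']} | engagement={s['engagement']:+.2f} "
        f"sentiment={s['sentiment']:+.2f}] {s['text']}"
        for s in segments
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=(
            "Summarize this meeting and flag the moments where engagement "
            "or sentiment was notably high or low.\n\n" + transcript
        ),
        max_tokens=300,
    )
    return resp.choices[0].text.strip()

print(summarize_meeting(segments))
```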

Launched in 2021 by the team behind location analytics startup Placed—former Foursquare CEO David Shim, Rob Williams and Elliot Waldron—Read is backed with $10 million in seed funding from investors like Madrona Venture Group and PSL Ventures.

Read’s Meeting Summary tool

Read uses a combination of artificial intelligence, computer vision and natural language processing to gauge meeting participants’ engagement and sentiment. Among the things it tracks are whether a small number of people are dominating the conversation, leaving others unheard, and whether people seem bored.
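As a concrete illustration, one signal of that kind can be computed from nothing more than per-speaker talk time. The sketch below is a minimal, hypothetical version; the data shape and the 60% threshold are invented, not Read’s actual logic.

```python
# Minimal sketch of a "who is dominating the conversation" signal:
# per-speaker share of total talk time, with a simple threshold flag.
from collections import defaultdict

def talk_time_shares(utterances):
    """utterances: list of (speaker, start_sec, end_sec) tuples."""
    totals = defaultdict(float)
    for speaker, start, end in utterances:
        totals[speaker] += end - start
    total = sum(totals.values()) or 1.0
    return {s: t / total for s, t in totals.items()}

def dominance_flag(shares, threshold=0.6):
    """Flag the meeting if any one voice exceeds `threshold` of talk time."""
    top_speaker, top_share = max(shares.items(), key=lambda kv: kv[1])
    return (top_speaker, top_share) if top_share > threshold else None

shares = talk_time_shares([("Ana", 0, 300), ("Ben", 300, 340), ("Cy", 340, 360)])
print(shares)                  # Ana holds ~83% of the talk time
print(dominance_flag(shares))  # ('Ana', 0.833...) -> one voice is dominating
```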

Read’s engagement and sentiment analysis is meant to create better meetings (including shorter ones), but understandably, some people might be worried about having their reactions tracked. Shim told TechCrunch that Read protects user privacy and control by letting participants opt into meetings that measure audio and voice via a recording notification. They can decline to be recorded or, if they change their mind partway through a meeting, type “opt-out” into the chat to delete meeting data.
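The opt-out flow, as described, boils down to a small piece of chat-command handling plus a data purge. Here is a hypothetical sketch; the store interface and all names are invented, not Read’s API.

```python
# Hypothetical sketch of the chat-based opt-out flow described above.
class MeetingStore:
    def __init__(self):
        self.data = {}  # (meeting_id, participant_id) -> captured metrics

    def delete_participant_data(self, meeting_id, participant_id):
        self.data.pop((meeting_id, participant_id), None)

def handle_chat_message(store, meeting_id, participant_id, text):
    # A participant typing "opt-out" mid-meeting has their data purged.
    if text.strip().lower() == "opt-out":
        store.delete_participant_data(meeting_id, participant_id)
        return "opted_out"
    return "ignored"

store = MeetingStore()
store.data[("m1", "ben")] = {"engagement": [0.4, 0.5]}
print(handle_chat_message(store, "m1", "ben", " Opt-Out "))  # "opted_out"
print(store.data)  # {} -- Ben's captured metrics are gone
```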

One example of how organizations have used Read to improve their virtual meetings is a 400-person technology company that used Read Recommendations to cut eight hours of meetings a month for each employee.

Shim said Read Meeting Summaries’ pilot clients include venture capitalists, whose days are usually packed with pitches, updates and board meetings. They use Read as a virtual assistant to produce summaries of all meetings and follow-up items. Other users of Read include salespeople, who use the app to see what resonates with their customers and follow up on those points.


Seen at CES: Nuralogix uses AI and a selfie to measure your heart rate, BP, body mass, skin age, stress level, and more

A picture is worth 1,000 words, as the saying goes, and now a startup called Nuralogix is taking this idea to the next level: soon, a selfie will be able to give you 1,000 diagnostics about the state of your health.

Anura, the company’s flagship health and wellness app, takes a 30-second selfie and uses the data from that to create a catalogue of readings about you. They include vital stats like heart rate and blood pressure; mental health-related diagnostics like stress and depression levels; details about your physical state like body mass index and skin age; your level of risk for things like hypertension, stroke and heart disease; and biomarkers like your blood sugar levels.

Some of these readings are more accurate than others and are being improved on over time. Just today, to coincide with CES in Vegas — where I came across the company — Nuralogix announced that its contactless blood pressure measurements were becoming more accurate, specifically with accuracy corresponding to a standard deviation of error of less than 8mmHg.

Anura’s growth is part of a bigger trend in the worlds of medicine and wellness. The Covid-19 pandemic gave the world a prime opportunity to use and develop more remote health services, normalizing what many had thought of as experimental or sub-optimal.

That, coupled with a rising awareness that regular monitoring can be key to preventing health problems, has led to a proliferation of apps and devices on the market. Anura is far from the only one out there, but it’s a notable example of how companies are betting that low friction can yield big results. That, in a way, has been the holy grail of a lot of modern medicine — it’s one reason why so many wanted Theranos to be real.

So while some pandemic-era behaviors are not sticking as firmly as people thought they might (e-commerce has not completely replaced in-person shopping, for one), observers believe there is a big future in tele-health, with companies like Nuralogix providing the means to implement it.

Grandview Research estimates that tele-health was an $83.5 billion market globally in 2022, and that this number will balloon to $101.2 billion in 2023, growing at a CAGR of 24% through 2030, when it will be a $455.3 billion market.
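Those figures are internally consistent: compounding the 2023 base at 24% a year over the seven years to 2030 lands almost exactly on the projected total.

```latex
% Sanity check of the cited figures: seven compounding years at 24%.
\[
  \$101.2\,\text{B} \times (1 + 0.24)^{7}
  \approx \$101.2\,\text{B} \times 4.51
  \approx \$456\,\text{B}
\]
```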

The startup — which is based out of Toronto, Canada, and backed by the city’s MaRS Innovation effort (a consortium of universities and research groups helping to spin out academic research) and others — uses a B2B business model and counts Japan’s NTT and Spanish insurance provider Sanitas among its customers. It’s also talking to automotive companies that see the potential of being able to use this to track, say, when a driver is getting tired and distracted, or having a health crisis of some other kind.

Right now, the results that Anura comes up with are positioned as guidance — as “investigational” insights that complement other kinds of assessments. The company is compliant with HIPAA and other data protection regulations, and it’s currently going through the process of FDA approval so that its customers can use the results in a more proactive manner.

It also has a Lite version of the application (on iOS and Android) where individuals can get some — but not all — of these diagnostics.

The Lite version is worth looking at not just as a way for the company to publicize itself, but also for how it gathers data.

Nuralogix built Anura on the back of an AI that was trained on data from some 35,000 different users. A typical 30-second video of a user’s face is analyzed to see how blood moves around it. “Human skin is translucent,” the company notes. “Light and its respective wavelengths are reflected at different layers below the skin and can be used to reveal blood flow information in the human face.”

Ingrid testing out the app at CES

That in turn is matched up with diagnostics gathered from those same people using traditional measuring tools, and uploaded to the company’s “DeepAffex” Affective AI engine. Users of the Anura app are then essentially “read” based on what the AI has been trained to see: blood moving in one direction or another, or a person’s skin color, can say a lot about how the person is doing physically and mentally.
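Nuralogix’s models are proprietary, but the underlying signal belongs to a well-studied family of techniques known as remote photoplethysmography (rPPG): the cardiac pulse causes tiny periodic color changes in facial skin that a camera can pick up. The toy sketch below estimates heart rate from the average green-channel intensity of a face crop over time; it illustrates the principle only, not DeepAffex’s actual method.

```python
# Toy rPPG sketch: recover a pulse rate from subtle periodic color
# changes in a face video. Not Nuralogix's proprietary pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(face_frames, fps=30.0):
    """face_frames: sequence of HxWx3 RGB crops of the face region."""
    # 1) One sample per frame: mean green intensity over the face crop.
    signal = np.array([frame[:, :, 1].mean() for frame in face_frames])
    signal = signal - signal.mean()
    # 2) Band-pass to the plausible human pulse band (0.7-4 Hz ~ 42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    # 3) Dominant frequency via FFT -> beats per minute.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic 30-second clip with a 72 bpm (1.2 Hz) pulse baked in:
t = np.arange(0, 30, 1 / 30)
frames = [np.full((8, 8, 3), 120.0) + 0.5 * np.sin(2 * np.pi * 1.2 * ti) for ti in t]
print(f"{heart_rate_bpm(frames):.0f} bpm")  # ~72
```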

DeepAffex could potentially be used for more than just tele-health diagnostics. Before its pivot to health, the company’s AI technology, which uses this technique of “transdermal optical imaging” (shortened to TOI by the company) to “read” faces, was being applied to reading users’ emotions. One potential application was using the tech to augment or even replace traditional lie detector tests, which are regularly used by police and others to determine whether a person is representing things truthfully, but have been proven to be flawed.

There are also horizons that extend into hardware. The current version of Anura is an app that you access via smartphones or tablets, but longer term the company might also work on its own scanning devices, adding other kinds of facial scanning and tools such as infrared to pick up even more information and produce more diagnostics. (One area that’s not currently touched, for example, is blood oxygen, which the company definitely wants to tackle.)

I tried out the full version of the Anura app this week in Las Vegas and have to say it’s a pretty compelling experience and indeed is low-friction enough to likely catch on with a lot of people. (And as a measure of that, the company’s demo had a permanent queue of people waiting to try it out.)


WowWee returns to robots with a dog named ‘Dog-E’

It would be a massive understatement to suggest that robot toys are a mixed bag. They largely get the looks right, but brains are another thing altogether. Look at the time and money that went into building the first Roomba, for example, and it becomes very clear why the dream of the ubiquitous home robot still seems like a lifetime away.

Just ahead of the holidays, I got a tinge of nostalgia from robot toys of yore. A friend told me they’d picked up a Roboraptor for a child in their life. I naturally asked, “they still make Roboraptor?” Granted, that’s probably not the first thing you want to hear after spending $70 on what you’d thought was a bleeding edge robot toy.

They do, indeed, still make Roboraptor – “they” being WowWee, a toy company founded in Montreal that now operates out of Hong Kong. Hasbro bought the company in the late ’90s, only to sell it again in 2007. Roboraptor debuted to some acclaim in the mid-2000s, part of a deluge of robot toys that also included Robosapien. The company also gave the world this terrifying monstrosity of a robotic “watch dog.” A “houndroid,” per the ad.

In recent years, the company has been less focused on robots. Earlier this year, WowWee’s “My Avastars” were the target of a lawsuit from Roblox Corp over the “blatant and admitted copying of” its IP. WowWee called the suit “completely meritless.”

Today the company returns to CES with MINTiD Dog-E, a strangely named, but less threatening robot dog toy than the iron-jawed Megabyte Cyber Watch Dog. It’s not exactly a Sony Aibo, either, as reflected in the $80 price tag. The robot dog does, however, take advantage of the Dog-E app, which saves different “profiles” to the dog. The “minting” refers to a kind of robot dog imprinting process. Per the company:

Dog-E is a smart, app-connected robot dog with life-like movements, audio sensors to hear sounds, touch sensors on its head, nose and sides of its body, and a POV (persistence of vision) tail that displays icons and messages to communicate. As soon as you turn on Dog-E, your all-white pup comes to life through the minting process, which reveals its unique colors and characteristics. The minting process can begin by petting its head, touching its nose, or playing with it, among a long list of other interactions.
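WowWee hasn’t detailed how the tail works internally, but persistence-of-vision displays in general follow a simple recipe: a line of LEDs sweeps through the air, and each column of a bitmap is flashed at the instant the line passes the matching position, so the eye fuses the flashes into a 2D image. Here is a toy sketch of the timing; all sizes and rates are invented.

```python
# Toy persistence-of-vision timing: flash one bitmap column per time
# slice of a sweep. Sizes and rates are invented, not Dog-E's internals.
import time

ICON = [  # 5x8 bitmap: rows = LEDs along the tail, columns = sweep steps
    "01100110",
    "11111111",
    "11111111",
    "01111110",
    "00111100",
]

def render_sweep(icon, sweep_hz=5.0):
    """Flash one bitmap column per time slice of a left-to-right sweep."""
    n_cols = len(icon[0])
    slice_s = (1.0 / sweep_hz) / n_cols        # time budget per column
    for col in range(n_cols):
        leds = [row[col] == "1" for row in icon]  # which LEDs to light now
        print("".join("#" if on else "." for on in leds), f"col {col}")
        time.sleep(slice_s)                    # hold until the next position

render_sweep(ICON)
```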

The dog is up for pre-order now and ships this fall.


Harman’s driver monitoring system can measure your heart rate

Harman, a Samsung subsidiary that specializes in connected car technology and other IoT solutions, revealed at CES a suite of automotive features geared towards enhancing the health and safety of drivers and passengers, including an advanced driver monitoring system (DMS) that can measure a driver’s heart and breath rate.

Harman initially launched its DMS, called Ready Care, in September to measure driver eye activity and state of mind to determine cognitive distraction levels and then have the car initiate a personalized response to help mitigate dangerous driving situations. Based on the driver’s stress levels, Ready Care could also provide alternate routes, perhaps away from traffic jams, that might help to alleviate stress.

On Wednesday, Harman added to Ready Care the contactless measurement of human vitals such as heart rate, breathing rate and inter-beat intervals to further determine a driver’s state of well-being. Now, rather than relying on just an infrared global shutter camera, Harman has added an in-cabin radar to its set of sensors. Harman says this will also allow the vehicle to detect if a child is left unattended.

“With its unique ability to deliver customized and personalized driver interventions via a closed-loop approach, from detections via analysis to adjusting the temperature, audio settings and vehicle lighting, Ready Care offers solutions and protective intelligence that constantly prioritizes the driver’s well-being,” said Armin Prommersberger, SVP of product management at Harman, in a statement.

Through Harman’s software development kit and supporting APIs, OEMs and other third-party suppliers can integrate their own vehicle features or functions as part of the in-cabin customized interventions against driver drowsiness and distraction, said Harman. The company didn’t say which OEMs it plans to partner with, but when Harman initially launched Ready Care, BMW showcased the tech at the North American auto show.

Harman also revealed two new products dedicated towards enhancing the audio experience inside and outside the vehicle for safer driving. Together, the Sound and Vibration Sensor (SVS) and External Microphone can help people inside the vehicle better identify emergency vehicle sirens, listen for exterior speech commands from other drivers or traffic controllers, detect glass breakage or vehicle impact and more, according to Harman.

“Audio has the power to deliver incredible experiences for drivers and passengers, and safety is no exception,” said Mitul Jhala, senior director of Harman’s automotive embedded audio team, in a statement. “With our new embedded audio solutions, SVS and External Microphone, OEMs can now offer the acoustic sensing and exterior sound detection consumers are looking for, while enhancing safety both inside and outside the vehicle.”

Harman said the SVS can be invisibly integrated into a vehicle’s exterior and the external microphone can handle environmental elements like wind, sun and poor weather. The company said SVS and the external microphone are future-proofed for an autonomous world, and can be integrated into a vehicle’s larger sensor suite to increase awareness of sounds not just for vehicle occupants but also for self-driving systems.


Spotify’s new time capsule feature will let you revisit your musical taste a year from now

Spotify is introducing a new in-app experience called “Playlist in a Bottle” that is designed to let you capture your current music taste and revisit it one year later. The streaming service announced on Wednesday that the new experience will let users capture the moment now and look back on it when January 2024 rolls around.

To get started with your Playlist in a Bottle, you need to ensure your Spotify mobile app is up to date with the latest version. Then, you need to navigate to spotify.com/playlistinabottle from your mobile device. From there, you can begin the experience by selecting your time capsule of choice. The options include a bottle, jean pocket, gumball machine, lunch box or teddy bear.

The feature will then ask you a series of song-inspired prompts. For example, you may be asked what song you want to hear live in 2023 or what song reminds you of your favorite person. Once you’re done, you can digitally seal your musical time capsule and send it off. You also have the option to share a personalized card to your social channels with the hashtag #PlaylistInABottle. Come January 2024, you’ll receive your personalized time capsule reminding you what you were listening to one year prior.

The new feature is live starting today in 27 markets, including Australia, Argentina, Brazil, Canada, Chile, Colombia, Czech Republic, Denmark, Egypt, France, Germany, Indonesia, Italy, Japan, Mexico, Morocco, New Zealand, Philippines, Poland, Portugal, Saudi Arabia, Spain, Sweden, Turkey, U.A.E., U.K. and the U.S. Playlist in a Bottle is available for both free and premium users across iOS and Android devices.

Given the success of Spotify Wrapped, which has taken the internet by storm over the past few years every December, it’s no surprise that Spotify is looking to recreate the same sort of buzz toward the start of the year as well. Spotify Wrapped is largely popular because you can share it across social media, which is also possible with Playlist in a Bottle.

After January 31, you will no longer be able to create a Playlist in a Bottle, so get started on yours if you don’t want to experience FOMO a year from now when everyone’s sharing their results.


Amazon’s custom-built ‘Hey Disney!’ voice assistant will become available for purchase later this year

Last year, Amazon and Disney announced a plan to develop a custom voice assistant that combined Alexa’s smarts with Disney’s library of character voices and original recordings. Dubbed “Hey Disney!,” the voice assistant was the first non-Alexa assistant to become available on Echo devices and was installed at select Disney Resort hotels. Now, for the first time, Amazon is showing off the new voice assistant to the general public at the Consumer Electronics Show in Las Vegas. And soon, it says, customers will be able to purchase the Disney Magical Companion voice assistant for use in their own homes, as well.

The launch of the Disney voice assistant has been something of an experiment for Amazon, which has struggled to get its own Alexa users to use its voice assistant for anything more than basic tasks — like setting timers, making lists, or controlling their smart home via their Echo smart speakers and smart screens or other Alexa-powered devices. Unfortunately for Amazon, shopping via Alexa also didn’t take off — nor did other attempts to monetize Alexa through things like in-app purchases or subscriptions to voice apps.

As a result of this and other economic forces, workers in the Alexa division were among those hardest hit by Amazon’s recent layoffs.

Meanwhile, Amazon’s Disney partnership, which includes access to Disney’s intellectual property, allows Alexa’s technology to be used for a broader range of experiences, while also offering Amazon a potential revenue stream from custom client solutions.

Image Credits: Amazon/Disney

Before today, the Disney voice assistant was available in select Disney Resort hotel rooms, as a free service for the guests. Visitors could ask the assistant for pertinent information like park hours, directions to the park, or where to eat. They could also make guest service requests at the hotel, like ordering extra towels or room service.

And, of course, the assistant is packed with Disney features — like jokes, interactive trivia, greetings from favorite Disney characters, and access to “soundscapes” inspired by Disney films. Supported voices include those from over 20 popular characters from Disney, Pixar, Star Wars, and more. When you ask for the weather, Olaf from “Frozen” might tell you when it’s cold outside, for instance. The experience itself is guided by the Disney Magical Companion, not Alexa — but some guests have complained the voice is not a known character, like Mickey.

The assistant itself was built using Amazon’s Alexa Custom Assistant (ACA) solution, which allowed Disney to customize Alexa’s technology while also supporting its own in-house tech. To start, Hey Disney! will work with Disney’s interactive wearable, the Disney MagicBand+, which will enhance Disney’s trivia game by turning the band into a game show buzzer of sorts that reacts with lights and haptics as players answer the trivia questions. The band, which is typically used in the park for entry and other things like Lightning Lane access, will also light up and buzz when an alarm or timer the guest sets goes off.

Amazon aided in the development of the assistant, it says, helping Disney to create hundreds of pieces of custom content. It’s also using the platform to introduce voice assistants to consumers who have yet to interact with them by offering hints and prompts about things they can do — like hear a joke or play a game.

“Disney is the master storyteller, and its stories are so powerful for so many people,” noted Aaron Rubenson, the vice president of Alexa, in a statement released during CES. “Now people can keep talking to a character, they can continue with the storyline when they go back to their room at the end of the day, or when they go home after the vacation is over. It’s just gratifying to imagine that we’re a part of literally bringing that magic home,” he added.

Disney and Amazon will make the Disney Magical Companion available to U.S. customers for purchase later this year but have not yet announced a launch date.


Goodyear, Gatik say tire tech is key to bringing AVs to winter climates

Cars become an extension of the body when humans drive; we can feel the lack of grip in our car’s tires when driving over icy or wet roads. Autonomous vehicles don’t exactly have the same sensory abilities, which is one of the reasons why most AV testing and deployment happens in sunny climates.

Gatik, a Canadian autonomous trucking company, thinks tire-sensing data might be the key to bringing self-driving tech to wintery roads. The company is working with Goodyear, the iconic tire company, to prove that intelligent tires can accurately estimate tire-road friction and provide real-time information back to Gatik’s automated driving system.

“The tire is the only part of the vehicle that touches the ground, and this new level of data sophistication can communicate vital information to the vehicle, enhancing safety and performance,” said Chris Helsel, Goodyear’s senior vice president of global operations and chief technology officer, in a statement. “This is another step to evolve the tire to not only deliver its core, traditional job but also be a nexus of new data and information.”

The companies shared at CES 2023 that they recently deployed Goodyear’s road friction detection technology, called SightLine, in Canada. The deployment involved continuously measuring tire sensor-derived information — like wear state, load, inflation pressure and temperature — against other vehicle data and real-time road weather data. All of this information was then fed into Goodyear’s proprietary cloud-based algorithms to produce a friction estimate. Goodyear said that over the course of the trial, these friction estimates successfully detected low-grip conditions, like snow and ice.
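Goodyear’s cloud algorithms are proprietary, but the fusion step described above can be sketched in miniature: fold tire telemetry and road-weather inputs into a single friction-coefficient estimate and flag low grip. The features below mirror the article; the weights and thresholds are invented for illustration.

```python
# Toy version of the fusion step: tire telemetry + weather -> friction
# estimate + low-grip flag. Weights/thresholds are invented, not Goodyear's.
from dataclasses import dataclass

@dataclass
class TireTelemetry:
    tread_depth_mm: float    # wear state
    load_kg: float
    pressure_kpa: float
    temp_c: float

def estimate_friction(t: TireTelemetry, road_is_icy: bool, road_is_wet: bool) -> float:
    mu = 0.9                                        # dry-asphalt baseline
    if road_is_icy:
        mu = 0.15
    elif road_is_wet:
        mu = 0.55
    mu *= min(1.0, 0.6 + 0.05 * t.tread_depth_mm)   # worn tread grips less
    if not 180 <= t.pressure_kpa <= 280:            # badly over/under-inflated
        mu *= 0.9
    return round(mu, 2)

def low_grip(mu: float, threshold: float = 0.35) -> bool:
    return mu < threshold

mu = estimate_friction(TireTelemetry(6.0, 450, 230, -4), road_is_icy=True, road_is_wet=False)
print(mu, low_grip(mu))  # e.g. 0.14 True -> throttle back the AV's plan
```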

The idea down the line is for the friction estimates to be sent back to Gatik’s autonomous fleet to assist with path planning and to provide recommendations for safe driving speed, vehicle acceleration limits and vehicle following distance, according to Goodyear.
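To see how a friction estimate turns into a speed recommendation, the textbook flat-curve bound is instructive (standard vehicle physics, not necessarily Gatik’s planner): on a curve of radius r with road friction mu, the maximum speed before the tires slide is

```latex
\[
  v_{\max} = \sqrt{\mu \, g \, r},
  \qquad \text{e.g. } \mu = 0.15,\ r = 50\,\mathrm{m}:\quad
  v_{\max} = \sqrt{0.15 \times 9.81 \times 50} \approx 8.6\ \mathrm{m/s} \approx 31\ \mathrm{km/h}
\]
```

Since the bound scales with the square root of friction, an icy-road estimate of roughly 0.15 caps cornering speed at about 40% of its dry-asphalt (around 0.9) value.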

Of course, the potential for SightLine doesn’t end with better detection of snowy roads or even with autonomous driving. Goodyear said it expects to deploy the tech on “select original equipment vehicles” in 2023. Tire technology can also provide information on the health of the tire and collect information about road conditions like potholes.

In 2021, Goodyear Ventures and Porsche Ventures strategically invested in Tactile Mobility, an Israeli startup that said its tech could measure tire grip estimation and tire health. It’s not clear if Goodyear collaborated with Tactile Mobility to develop SightLine, and the company didn’t respond in time to TechCrunch’s queries.

