With Kokomo VR meeting software, Canon takes a step away from its hardware roots

Canon has a long and deep history as a hardware manufacturer. Most consumers know it best for its cameras, but the company also has an illustrious track record in medical, office equipment and other imaging applications.

During the pandemic, a lot of its business shifted. People stopped going to offices. Sporting events were shut down. And while the medical industry was booming, Canon as a company needed to rethink its mission and vision: What does an imaging company do in a world where people have a desire to connect, but are unable to leave their homes while a deadly virus rages around the world?

At CES 2023, the company showed off its vision for the future — a vision that seems a lot less hardware-y than you would expect from the 85-year-old company that has traditionally made all of its money from making things with buttons.

A ragtag bunch of Canon veterans took on the challenge and created Kokomo, a VR meeting software package that, in essence, makes real-time 3D video calling a reality.

Users don a VR headset and point a smartphone at themselves. The software scans your face and creates a photo-real 3D avatar of both you and the person you are calling. It uses the motion sensors in the headset and the phone’s camera to animate your avatar, placing you into a photo-realistic space, and boom: you are virtually present with a colleague, family member or friend. The face-scanning technology is similar to the tech behind iOS’s Face ID, running a brief pre-capture process. From there, your face’s shape and texture can be shown off inside video calls.

A quick scan with the Kokomo companion app captures your face so it can be shown to your friends in VR. Image Credit: Canon

The most interesting thing to note about the above paragraph is the lack of Canon products. Traditionally, Canon’s software solutions have focused on enhancing and facilitating the use of its hardware products. The company makes neither smartphones nor VR headsets, however, so this move represents a major departure from its roots.

TechCrunch sat down with the team that led the development of Kokomo to figure out how it came about, and where Canon is going as it re-imagines its own future.

Kokomo is a way to enable people to be there when they couldn’t. – Jon Lorentz

“This is representing a very exciting new innovation for Canon – but also a very new business direction for Canon, as well,” said Jon Lorentz, one of the co-creators of the Kokomo solution. “As you know, traditionally, Canon is very much tied to our hardware products. When we announced AMLOS at CES last year, it was about innovating work from home. Our task [with Kokomo] is to innovate life at home, and that is where this project came from. When we started, we were in the thick of COVID, and there were not a lot of ways for people to get connected. The underlying premise for what we created was to be a solution to that. Kokomo is a way to enable people to be there when they couldn’t.”

The team’s goal was to create a solution that takes the experience beyond a phone call, a FaceTime call or a Zoom call – to feel present rather than to just look at each other on a screen. A worthy pursuit in a world where travel is limited and screen fatigue is real. But how is Canon’s solution for bringing people into a virtual world going to accomplish that?

“We support most of the popular consumer VR headsets in the market to enable people to engage in immersive calls, as we are calling them. In these calls, people can engage. They are dynamic, in living, breathing environments. You can download a companion app on a mobile phone, which lets the person you talk to see you from head to toe,” explains Lorentz. “No more legless avatars. No more wondering what someone is actually gesturing. And you can actually see the other person. You can be in the call, rather than on the call.”

Below is an in-depth interview with Kokomo co-creators Jon Lorentz, Ray Konno and Jason Williams. The interview has been edited for clarity and length.

A phone and a VR headset are all you need to use Canon’s Kokomo. Image Credit: Canon

TechCrunch: Why is Canon excited about software? Isn’t that a step away from its hardware roots?

Jon Lorentz (JL): At our core, Canon is an imaging company, and that’s really our specialty. Kokomo is applying that specialty to the software rather than starting with our hardware first. We see that the ability to step into a call is really stepping into an imaging sensor. It’s about taking that image sensor data, and then applying it to someone else’s visual field.

Obviously, there are a lot of details behind that, but our core is imaging excellence. As you bring in mixed reality and virtual reality, you need to have a certain level of meshing: it really needs to match up. Otherwise, you’re going to feel disconnected – it’s not going to feel natural. The same goes for the environments; they are not static scenes from some other virtual place. We’ve captured real-life environments and brought them into VR. You really feel like you’re in dynamic, living places.

With Kokomo VR meeting software, Canon takes a step away from its hardware roots by Haje Jan Kamps originally published on TechCrunch

Samsung’s new wireless charger has a smart home hub built-in

Or maybe it’s a smart home hub with wireless charging built in. Either way, this is the SmartThings Station, which debuted on stage at CES this afternoon. The hub is a bit of an unsung lynchpin in the smart home ecosystem. It isn’t especially flashy as these things go, but it can be quite useful when it comes to automating your setup.

Matter is, of course, the big buzzword of this year’s show, and the new hub naturally supports the standard, letting users integrate a broad range of products from a broad range of companies – thermostats, lights, plugs, etc. From there, you can automate different tasks using Samsung’s SmartThings.

Image Credits: Samsung

The first time you turn the pad on, you’ll see a pop-up on your Samsung device walking you through the setup process. You can also just scan the QR code with your phone. The hub features a built-in 15-watt wireless charger that will send you a notification when your device is fully charged. You can also double-tap the pad to help locate a misplaced phone with a ring. A built-in “Smart Button,” meanwhile, can fire up different routines with a tap.

It’s a clever little combo device. As the Matter standard continues to roll out, more users will likely be looking toward these sorts of hubs. And hell, you can never have enough wireless charging surfaces around the house.

The product is due out next month in the U.S. and Korea. No pricing has been announced as of yet.

Samsung’s new wireless charger has a smart home hub built-in by Brian Heater originally published on TechCrunch

Twitter’s advanced search filters for mobile are on their way

Twitter is finally making a feature update that people actually want. According to social media analyst Matt Navarra, Twitter’s advanced search filters for mobile are coming soon.

Here’s what they look like in practice:

NEW! Twitter Advanced Search feature on iOS is coming soon https://t.co/ae56yE3JTU pic.twitter.com/xbQUpQJAlS

— Matt Navarra (@MattNavarra) January 4, 2023

The feature makes it easier to find specific tweets you’re looking for by filtering based on date, user, retweet count, hashtags and more. Sure, this has technically existed on Twitter for a long time, but figuring out how to pull up advanced search is pretty unintuitive and clunky. On the web, you have to type in your search term, then click the three-dot menu to the right of the search bar to open up advanced search. On mobile, it wasn’t an option at all – until now, with the feature’s release seemingly imminent.
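For reference, these filters map onto Twitter’s long-standing public search operators, which you can also type directly into any search box. Here is a minimal sketch, in Python, of how a filter UI might assemble such a query string (the operators themselves are real and documented; the code is purely illustrative):

```python
# Hypothetical illustration: an advanced-search UI ultimately boils down to a
# plain query string built from Twitter's documented search operators.
filters = {
    "from": "TechCrunch",    # tweets by a specific account
    "since": "2022-01-01",   # start date (YYYY-MM-DD)
    "until": "2022-12-31",   # end date
    "min_retweets": "100",   # minimum retweet count
}
query = "#CES " + " ".join(f"{key}:{value}" for key, value in filters.items())
print(query)
# -> #CES from:TechCrunch since:2022-01-01 until:2022-12-31 min_retweets:100
```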

These changes could come courtesy of George Hotz, the security hacker known for developing iOS jailbreaks and reverse engineering the PlayStation 3. He later founded Comma.ai, a driver-assistance system startup that aims to bring Tesla Autopilot–like functionality to other cars.

But in his most recent role, Hotz was a Twitter intern. Yes, an intern. Hotz tweeted his support for a controversial memo in which Elon Musk told employees to get “extremely hardcore” or leave. When his followers pushed back on this, he stated, “I’ll put my money where my mouth is. I’m down for a 12 week internship at Twitter for cost of living in SF.”

So Musk put his frenemy to work — according to Hotz’s own tweets, Musk told him that his job was to fix Twitter’s bad search system. In late November, he polled his followers to see what they wanted from Twitter search. Some common answers included searching within “liked” and “seen” tweets, more accessible advanced search and moving away from exact text search.

if I just get rid of the pop up I still consider my internship a win. I have a chrome extension on my laptop to block it

reminds me of the guy who got a job at Apple, made Wallet automatically delete your expired boarding passes, and quit the next week

— George Hotz (@realGeorgeHotz) November 22, 2022

Even Musk himself complained about Twitter’s search feature within a week of taking control of the company. “Fixing search is a high priority,” he tweeted.

Search within Twitter reminds me of Infoseek in ‘98! That will also get a lot better pronto.

— Elon Musk (@elonmusk) November 5, 2022

It’s not clear when this feature will roll out, but typically, when a feature can be reverse engineered by an app researcher — as is the case here — it’s almost ready for the public eye.

We’ll have to wait and see how good this feature is in practice, but truly, the only way for Twitter’s search function to go is up.

Twitter’s advanced search filters for mobile are on their way by Amanda Silberling originally published on TechCrunch

Camera maker Canon leans into software at CES

Depending on whether you spend most of your time in hospitals, offices or the great outdoors, when you hear ‘Canon,’ your mind will likely go to medical scanning equipment, high-end printers or cameras. At CES this year, the 85-year-old company is leaning in a new direction, with an interesting focus on software applications.

At the show, the imaging giant showed off a direction it has hinted at before, this time relying far less on its own hardware and more on software the company has developed, in part as a response to the COVID-19 pandemic casting a shadow over people’s ability to connect. To the chorus of ‘meaningful communication’ and ‘powerful collaboration,’ the company appears to be plotting a new course for what’s next.

“Canon is creating groundbreaking solutions that help people connect in more ways than we ever could have imagined, redefining how they work and live at a time when many of them are embracing a hybrid lifestyle,’’ said Kazuto Ogawa, President and CEO, Canon U.S.A., Inc, in a press briefing at CES 2023. “Canon’s ultimate role is to bring people closer together by revealing endless opportunities for creators. Under our theme of ‘Limitless Is More,’ we will show CES 2023 attendees what we are creating as a company focused on innovation and a world without limits.”

Among other things, Canon showed off a somewhat gimmicky immersive experience tied in with M. Night Shyamalan’s upcoming thriller movie, Knock at the Cabin. The very Shyamalanesque movie trailer will give you a taste of the vibe. At the heart of things, however, Canon is tapping into a basic human desire: to feel connected to one another. The company is desperate to show off how its solutions can “remove the limits humanity faces to create more meaningful communication,” through four technologies it is showing off at the trade show this year.

Canon U.S.A. president and CEO Kazuto Ogawa on stage at CES 2023 along with M. Night Shyamalan. Image Credit: Haje Kamps / TechCrunch

3D calling: Kokomo

The flagship solution Canon is showing off is Kokomo, which the company describes as a first-of-its-kind immersive VR software package. It is designed to combine VR with an immersive calling experience. The solution is pretty elegant: Using a VR headset and a smartphone, the Kokomo software enables users to see and hear one another in real time, with their live appearance and expressions, in a photo-real environment.

The Kokomo solution brings 3D video calling to a home near you. Image Credit: Canon

In effect, the software package scans your face to learn what you look like, then turns you into a photo-realistic avatar. The person you are in a call with can see you – sans VR headset – showing your physical appearance and facial expressions. The effect is that of a 3D video call. At the show, Canon is demoing the tech by letting visitors step into a 1:1 conversation with the Knock at the Cabin characters.

Real-time 3D video: Free Viewpoint

Aimed at the sports market, Free Viewpoint is a solution that combines more than 100 high-end cameras with a cloud-based solution that makes it possible to move a virtual camera to any location. The software takes all the video feeds and creates a point-cloud-based 3D model, which enables a virtual camera operator to create a number of angles that would otherwise have been impossible: drone-like replay footage swooping into the action, for example, or detailed in-the-thick-of-things footage, enabling viewers to see plays from the virtual perspective of one of the players.

In the U.S., the system has already been installed at two NBA arenas (the homes of the Cavaliers and the Nets). The video can be broadcast live, or compiled into replay clips. Canon also points out that the system enables ‘virtual advertising and other opportunities for monetization,’ so I suppose we have that to look forward to as well.

Returning to the Knock at the Cabin theme, at CES, Canon showed off a virtual action scene shot with the Free Viewpoint video system at Canon’s Volumetric Video Studio in Kawasaki, Japan. The effect of watching an action scene ‘through the eyes’ of various characters was a wonderfully immersive experience.

Augmented reality tech: MREAL

Canon also showed off some earlier-stage tech that isn’t quite ready for prime time yet, including MREAL. This is tech that helps create integrated, simulation-like immersive worlds, merging the real and the virtual. Use cases might include pre-visualization for movies, training scenarios, and interactive mixed-reality entertainment. The company tells TechCrunch that the technology is in the market research phase.

The company is trying to figure out what to develop further, and how to market the product. In other words: Who would use this, what would they use it for, and what would they be willing to pay for it?

Remote presence: AMLOS

Activate My Line of Sight (AMLOS) is what Canon is calling its solution for hybrid meeting environments, where some participants are in person, while others are off-site. If you’ve ever been in a meeting in that configuration, you’ll often find that attending remotely is a deeply frustrating experience, as the in-person meeting participants are engaging with each other, and the remote attendees are off on a screen somewhere.

Canon hopes that AMLOS can help solve that; it’s a software-and-camera set of products aiming to improve the level of engagement. It adds panning, tilting, and zooming capabilities to remote camera systems, giving remote users the ability to customize their viewing and participation experience. So far, the solution is not quite intuitive enough to overcome the barrier of not being in the room, but it’s certainly better than being a disembodied wall of heads on a screen.

Camera maker Canon leans into software at CES by Haje Jan Kamps originally published on TechCrunch

Ottonomy’s new delivery robot gets an automatic package dispenser

The robots are slowly but surely conquering this year’s CES. During today’s press preview, Ottonomy debuted a new model being added to the New York firm’s army of delivery robots. Yeti stands out from other Ottobot models primarily thanks to the addition of a clever auto-dispense mechanism designed to eliminate the need for a person to be present to receive the package. The startup calls the product “the first fully autonomous unattended delivery robot on the market.”

Once it reaches its destination, the last-mile delivery bot can drop its contents onto a doorstep or transfer them into a compatible locker for safekeeping until a human arrives to pick them up. Another interesting angle here is the potential for product returns – specifically, a customer could use the robot to get an unwanted product back to the original seller.

Yeti follows the late 2022 addition of another robot, Ottobot 2.0, which brings some interesting customization options to the table, including the ability to swap out different modular bins for different sorts of deliveries.

Image Credits: Ottonomy

The firm has a number of concurrent programs in cities across the world, including Pittsburgh, Cincinnati, Oslo and Madrid. It’s also working to expand to additional markets in the U.S., Canada, Europe and Asia. Here in the States, it’s partnered with Verizon.

“During the validation processes we ran pilots with airports, retailers and postal services which gave us the deep insights we needed on the most effective use cases and scalability,” says cofounder and CEO Ritukar Vijay. “With our strategic alignment with Verizon and other enterprises, we are in the prime position to fill the gap that companies like Amazon and Fedex were not able to. As demand and the use cases for autonomous unassisted delivery continue to grow, we are positioned to provide robots-as-a-service for restaurants, retailers and beyond.”

Ottonomy announced a $3.3 million seed raise last August.

Ottonomy’s new delivery robot gets an automatic package dispenser by Brian Heater originally published on TechCrunch

Investors say web3 and hype are in for 2023, high valuations are out — maybe?

This past year was tumultuous for venture investors, to say the least. The ecosystem watched as startup funding dried up, held its breath as a $32 billion venture-backed company evaporated almost overnight, and witnessed one of the largest startup acquisitions of all time.

Did you hear anyone yell “bingo”? Probably not. It’s unlikely that many investors came close to predicting what would play out in 2022. But, hey, there’s always next year.

It seems we’re entering yet another interesting and tumultuous year: The crypto market is hanging on by a thread; everyone is watching with popcorn in hand to see which unicorn will be the next to tumble; and the hype around AI continues to swell.

Some think 2023 will just be the start of a venture winter and overall economic recession, while others think we could see some stabilization as things head back to normal by mid-year. But who is to say?

To find out how investors are thinking about the year ahead and what they’re planning, we asked more than 35 investors to share their thoughts. Here is a selection of their answers, lightly edited for clarity.

How is the current economic climate impacting your deployment strategy for the next year?

U.S.-based early stage investor: My goal is to deploy the same amount every year, but the climate has led to far fewer interesting companies/founders raising rounds, so I will probably deploy 20%-30% of what I want to.

Bruce Hamilton, founder, Mech Ventures: We are contemplating decreasing our check size so we can double our number of investments from 75 to 140.

Damien Steel, managing partner, OMERS Ventures: We believe there will be incredible investment opportunities available over the coming years, and are excited to continue the same pace of deployment we have had in the past. I would expect international funding into Europe to slow over the coming year as GPs are put under pressure. We view this as a great opportunity to lean in.

California-based VC: New deployments have halted for us, and remaining funds are being directed to follow-on rounds for our existing portfolio.

Ba Minuzzi, founder and CEO, UMANA House of Funds: The current economic climate has had a massive positive impact on our deployment strategy. I’m excited for Q1 2023 and the entire year of 2023 for the opportunities coming to us. The end of 2022 has been a great awakening for founders. It’s time to be disciplined with burn, and very creative with growth. Times of scarcity create the best founders.

Dave Dewalt, founder, MD and CEO, NightDragon: We won’t be changing our deployment strategy much, despite macro conditions. This is for a few reasons, most of which are rooted in the continued importance and investment in our core market category of cybersecurity, safety, security and privacy.

We see a massive market opportunity in this space which has an estimated TAM of $400 billion. This opportunity has remained strong and expanded, even as the larger economy struggles, because cyber budgets have remained highly resilient despite company cutbacks in other budget areas. For instance, in a recent survey of CISOs in our Advisor community, 66% said they expect their cyber budgets to increase in 2023.

Innovation is also still in demand above and beyond what is available today as the threat environment worsens globally. Each of these factors gives us confidence in continued investment and delivering outcomes for our LPs.

Ben Miller, co-founder, Fundrise: The economic climate will get worse before it gets better. Although the financial economy has already been repriced, with multiples moving back to historical norms, the real economy will be the next to turn downwards. That will cut back growth rates or even reduce revenue, magnifying valuation compression even more than what we’ve already seen so far.

We’re responding to these circumstances with a new solution: offering uncapped SAFEs to the most promising mid- and late-stage companies. While SAFEs are traditionally used for early stage companies, we think founders will be very receptive to extending their runways with the fastest, lowest friction investment solution available in the market.

Dave Zilberman, general partner, Norwest Venture Partners: Ignoring the macro-economic climate would be reckless. As such, given that we’re multi-stage investors, we see the current market as an opportunity to overweight early stage investments at the seed and Series A stages.

Economic headwinds won’t impede the need for more developer solutions; developers support the basis of competition in a digital world. As developer productivity and efficiency will be of even greater importance, solutions with a clear ROI will excel.

What percentage of unicorns are not actually worth $1 billion right now? How many of them do you think will fail in 2023?

Kirby Winfield, founding general partner, Ascend VC: Gotta be like 80% no longer worth $1 billion if you’re using public market comps. I think maybe 5%-10% will fail in 2023, but maybe 40% by 2025.

Ba Minuzzi, founder and CEO, UMANA House of Funds: We kicked off 2022 with five portfolio companies that had “unicorn status”, and two of those have already lost that status. I believe this data is indicative of the overall theme — that two out of every five unicorns will lose, or have lost, their $1 billion valuation. I do see this trend continuing in 2023.

Harley Miller, founder and managing partner, Left Lane Capital: Up to one-third, I would say, are decidedly worth less than that, especially for the companies whose paper valuations are between $1 billion and $2 billion. Companies with high burn rates and structurally unsound unit economics will suffer the most (e.g., quick commerce delivery). It’s not just about whether they’ll still command “unicorn status,” but rather whether or not they will be fundable, at any value, period.

Investors say web3 and hype are in for 2023, high valuations are out — maybe? by Rebecca Szkutak originally published on TechCrunch

Read, which lets you measure how well a meeting is going, is now a Zoom Essential App

Read, the app that lets meeting organizers read the virtual room and see how engaged (or not) participants are, is now one of Zoom’s Essential Apps. This means Zoom One Pro, Business and Business Plus customers will have free access to Read’s premium features, like real-time and advanced meeting metrics, for 12 months. The app is also compatible with other video conferencing platforms such as Google Meet, Microsoft Teams and Webex.

Read is also releasing its Meeting Summary feature, which combines its sentiment analysis tools with OpenAI’s GPT language models to produce meeting summaries that are annotated with sentiment and engagement scores. Other new features include Meeting Playback, which shows when engagement increased or dropped; Read Workspace, which lets organizations set benchmarks for meetings; and Augmented Reality, which displays engagement and talk time in each participant’s window.

Launched in 2021 by the team behind location analytics startup Placed—former Foursquare CEO David Shim, Rob Williams and Elliot Waldron—Read is backed with $10 million in seed funding from investors like Madrona Venture Group and PSL Ventures.

Read’s Meeting Summary tool

Read uses a combination of artificial intelligence, computer vision and natural language processing to gauge meeting participants’ engagement and sentiment. Among the things it tracks: whether a small number of people are dominating the conversation, leaving others unheard, or whether people seem bored.

Read’s engagement and sentiment analysis is meant to create better meetings (including shorter ones), but understandably, some people might be worried about having their reactions tracked. Shim told TechCrunch that Read protects user privacy and control by letting participants opt into meetings that measure audio and video through a recording notification. They can decline to be recorded or, if they change their mind partway through a meeting, type “opt-out” into the chat to delete the meeting’s data.

One example of how organizations have utilized Read to improve their virtual meetings: a 400-person technology company used Read Recommendation to cut eight hours of meetings a month for each employee.

Shim said Read Meeting Summaries’ pilot clients include venture capitalists, whose days are usually packed with pitches, updates and board meetings. They use Read as a virtual assistant to produce summaries of all their meetings and follow-up items. Other users of Read include salespeople, who use the app to see what resonates with their customers and follow up on those points.

Read, which lets you measure how well a meeting is going, is now a Zoom Essential App by Catherine Shu originally published on TechCrunch

Seen at CES: Nuralogix uses AI and a selfie to measure your heart rate, BP, body mass, skin age, stress level, and more

A picture is worth 1,000 words, as the saying goes, and now a startup called Nuralogix is taking this idea to the next level: soon, a selfie will be able to give you 1,000 diagnostics about the state of your health.

Anura, the company’s flagship health and wellness app, takes a 30-second selfie and uses the data from that to create a catalogue of readings about you. They include vital stats like heart rate and blood pressure; mental health-related diagnostics like stress and depression levels; details about your physical state like body mass index and skin age; your level of risk for things like hypertension, stroke and heart disease; and biomarkers like your blood sugar levels.

Some of these readings are more accurate than others and are being improved on over time. Just today, to coincide with CES in Vegas — where I came across the company — Nuralogix announced that its contactless blood pressure measurements were becoming more accurate, specifically with accuracy corresponding to a standard deviation of error of less than 8 mmHg.
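For context, 8 mmHg is the commonly cited ceiling for the standard deviation of error in blood pressure device validation (the AAMI/ISO 81060-2 criterion pairs it with a mean error within ±5 mmHg). Here is a minimal sketch of how that statistic is computed, using made-up paired readings rather than any Nuralogix data:

```python
import statistics

# Hypothetical paired systolic readings in mmHg: device estimate vs. cuff reference.
device    = [118, 124, 131, 142, 115, 128]
reference = [120, 121, 135, 140, 118, 125]

errors = [d - r for d, r in zip(device, reference)]
print(f"mean error:  {statistics.mean(errors):+.1f} mmHg")
print(f"SD of error: {statistics.stdev(errors):.1f} mmHg")  # the figure Nuralogix says is now < 8
```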

Anura’s growth is part of a bigger trend in the worlds of medicine and wellness. The Covid-19 pandemic gave the world a prime opportunity to use and develop more remote health services, normalizing what many had thought of as experimental or sub-optimal.

That, coupled with a rising awareness that regular monitoring can be key to preventing health problems, has led to a proliferation of apps and devices on the market. Anura is far from the only one out there, but it’s a notable example of how companies are betting that low friction can yield big results. That, in a way, has been the holy grail of a lot of modern medicine — it’s one reason why so many wanted Theranos to be real.

So while some pandemic-era behaviors are not sticking as firmly as people thought they might (e-commerce has not completely replaced in-person shopping, for one), observers believe there is a big future in tele-health, with companies like Nuralogix providing the means to implement it.

Grandview Research estimates that tele-health was an $83.5 billion market globally in 2022, and that this number will balloon to $101.2 billion in 2023, growing at a CAGR of 24% through 2030, when it will be a $455.3 billion market.
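Those figures are roughly self-consistent: compounding the 2023 estimate forward at 24% per year for the seven years to 2030 lands near the quoted total. A quick sanity check in Python:

```python
base_2023 = 101.2  # Grandview's 2023 estimate, in billions of dollars
cagr = 0.24

projected_2030 = base_2023 * (1 + cagr) ** 7  # seven compounding years, 2023 -> 2030
print(f"${projected_2030:.1f}B")              # ~$456.2B, close to the quoted $455.3B
```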

The startup — which is based out of Toronto, Canada, and backed by the city’s MaRS Innovation effort (a consortium of universities and research groups helping to spin out academic research) and others — uses a B2B business model and counts Japan’s NTT and Spanish insurance provider Sanitas among its customers. It’s also talking to automotive companies that see the potential of being able to use this to track, say, when a driver is getting tired and distracted, or having a health crisis of some other kind.

Right now, the results that Anura comes up with are positioned as guidance — as “investigational” insights that complement other kinds of assessments. The company is compliant with HIPAA and other data protection regulations, and it’s currently going through the process of FDA approval so that its customers can use the results in a more proactive manner.

It also has a Lite version of the application (on iOS and Android) where individuals can get some — but not all — of these diagnostics.

The Lite version is worth looking at not just as a way for the company to publicize itself, but also as a window into how it gathers data.

Nuralogix built Anura on the back of an AI that was trained on data from some 35,000 different users. A typical 30-second video of a user’s face is analyzed to see how blood moves around it. “Human skin is translucent,” the company notes. “Light and its respective wavelengths are reflected at different layers below the skin and can be used to reveal blood flow information in the human face.”

Ingrid testing out the app at CES

That in turn is matched up with diagnostics gathered from those same people using traditional measuring tools, and uploaded to the company’s “DeepAffex” Affective AI engine. Then users of the Anura app are essentially “read” based on what the AI has been trained to see: blood moving in one direction or another, or a person’s skin color, can say a lot about how the person is doing physically and mentally.
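Nuralogix doesn’t publish its exact pipeline, but the broader family of techniques is known in the research literature as remote photoplethysmography (rPPG): the cardiac pulse causes tiny periodic color changes in facial skin that a camera can pick up. Below is a toy sketch of that core idea, substituting a synthetic signal for real video frames; every name and number here is illustrative, not Nuralogix’s:

```python
import numpy as np

fps = 30.0                     # camera frame rate
t = np.arange(0, 30, 1 / fps)  # 30 seconds of "video", like Anura's selfie

# Stand-in for the mean green-channel intensity of a face region per frame:
# a faint 72 bpm (1.2 Hz) pulse buried in sensor noise.
roi_green = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.random.randn(t.size)

# Estimate heart rate: take the spectrum, then pick the strongest peak in the
# 0.7-4.0 Hz band (42-240 bpm), where a human pulse can plausibly live.
spectrum = np.abs(np.fft.rfft(roi_green - roi_green.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {bpm:.0f} bpm")  # ~72 bpm
```

Real systems add face tracking, skin segmentation and far more robust signal separation, and extrapolating from pulse waveforms to readings like blood pressure or blood sugar is exactly what requires the kind of large labeled training set described above.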

DeepAffex is potentially useful for more than just tele-health diagnostics. Before its pivot to health, the company’s AI technology and its technique of “transdermal optical imaging” (shortened to TOI by the company) for “reading” faces were applied to reading users’ emotions. One potential application of that was using the tech to augment or even replace traditional lie detector tests, which are regularly used by police and others to determine whether a person is representing things truthfully, but have been proven to be flawed.

There are also horizons that extend into hardware. The current version of Anura is an app that you access via smartphones or tablets, but longer term the company might also work on its own scanning devices, adding other kinds of facial scanning and tools such as infrared to pick up even more information and produce more diagnostics. (One area that’s not currently touched, for example, is blood oxygen, something the company definitely wants to tackle.)

I tried out the full version of the Anura app this week in Las Vegas and have to say it’s a pretty compelling experience and indeed is low-friction enough to likely catch on with a lot of people. (And as a measure of that, the company’s demo had a permanent queue of people waiting to try it out.)

Seen at CES: Nuralogix uses AI and a selfie to measure your heart rate, BP, body mass, skin age, stress level, and more by Ingrid Lunden originally published on TechCrunch

WowWee returns to robots with a dog named ‘Dog-E’

It would be a massive understatement to suggest that robot toys are a mixed bag. They largely get the looks right, but brains are another thing altogether. Look at the time and money that went into building the first Roomba, for example, and it becomes very clear why the dream of the ubiquitous home robot still seems a lifetime away.

Just ahead of the holidays, I got a tinge of nostalgia from robot toys of yore. A friend told me they’d picked up a Roboraptor for a child in their life. I naturally asked, “they still make Roboraptor?” Granted, that’s probably not the first thing you want to hear after spending $70 on what you’d thought was a bleeding edge robot toy.

They do, indeed, still make Roboraptor – “they” being WowWee, a toy company founded in Montreal that now operates out of Hong Kong. Hasbro bought the company in the late ’90s, only to sell it again in 2007. Roboraptor debuted to some acclaim, and a deluge of robot toys – including Robosapien – followed. The company also gave the world this terrifying monstrosity of a robotic “watch dog.” A “houndroid,” per the ad.

In recent years, the company has been less focused on robots. Earlier this year, WowWee’s “My Avastars” were the target of a lawsuit from Roblox Corp over the “blatant and admitted copying of” its IP. WowWee called the suit “completely meritless.”

Today the company returns to CES with MINTiD Dog-E, a strangely named but less threatening robot dog toy than the iron-jawed Megabyte Cyber Watch Dog. It’s not exactly a Sony Aibo, either, as reflected in the $80 price tag. The robot dog does, however, take advantage of the Dog-E app, which saves different “profiles” to the dog. The “minting” refers to a kind of robot dog imprinting process. Per the company:

Dog-E is a smart, app-connected robot dog with life-like movements, audio sensors to hear sounds, touch sensors on its head, nose and sides of its body, and a POV (persistence of vision) tail that displays icons and messages to communicate. As soon as you turn on Dog-E, your all-white pup comes to life through the minting process, which reveals its unique colors and characteristics. The minting process can begin by petting its head, touching its nose, or playing with it, among a long list of other interactions.

The dog is up for pre-order now and ships this fall.

WowWee returns to robots with a dog named ‘Dog-E’ by Brian Heater originally published on TechCrunch

Harman’s driver monitoring system can measure your heart rate

Harman, a Samsung subsidiary that specializes in connected car technology and other IoT solutions, revealed at CES a suite of automotive features geared towards enhancing the health and safety of drivers and passengers, including an advanced driver monitoring system (DMS) that can measure a driver’s heart and breath rate.

Harman initially launched its DMS, called Ready Care, in September to measure driver eye activity and state of mind to determine cognitive distraction levels and then have the car initiate a personalized response to help mitigate dangerous driving situations. Based on the driver’s stress levels, Ready Care could also provide alternate routes, perhaps away from traffic jams, that might help to alleviate stress.

On Wednesday, Harman added to the Ready Care product contactless measurement of human vitals such as heart rate, breathing rate and inter-beat intervals to further determine a driver’s state of well-being. Now, rather than just relying on an infrared global shutter camera, Harman has added an in-cabin radar to its set of sensors. Harman says this will also allow the vehicle to detect if a child is left unattended.

“With its unique ability to deliver customized and personalized driver interventions via a closed-loop approach, from detections via analysis to adjusting the temperature, audio settings and vehicle lighting, Ready Care offers solutions and protective intelligence that constantly prioritizes the driver’s well-being,” said Armin Prommersberger, SVP of product management at Harman, in a statement.

Through Harman’s software development kit and supporting APIs, OEMs and other third-party suppliers can integrate their own vehicle features or functions as part of the in-cabin customized interventions against driver drowsiness and distraction, said Harman. The company didn’t say which OEMs it plans to partner with, but when Harman initially launched Ready Care, BMW showcased the tech at the North American auto show.

Harman also revealed two new products dedicated towards enhancing the audio experience inside and outside the vehicle for safer driving. Together, the Sound and Vibration Sensor (SVS) and External Microphone can help people inside the vehicle better identify emergency vehicle sirens, listen for exterior speech commands from other drivers or traffic controllers, detect glass breakage or vehicle impact and more, according to Harman.

“Audio has the power to deliver incredible experiences for drivers and passengers, and safety is no exception,” said Mitul Jhala, senior director of Harman’s automotive embedded audio team, in a statement. “With our new embedded audio solutions, SVS and External Microphone, OEMs can now offer the acoustic sensing and exterior sound detection consumers are looking for, while enhancing safety both inside and outside the vehicle.”

Harman said the SVS can be invisibly integrated into a vehicle’s exterior and the external microphone can handle environmental elements like wind, sun and poor weather. The company said SVS and the external microphone are future-proofed for an autonomous world, and can be integrated into a vehicle’s larger sensor suite to increase awareness of sounds not just for vehicle occupants but also for self-driving systems.

Harman’s driver monitoring system can measure your heart rate by Rebecca Bellan originally published on TechCrunch
