Balance is a Mac timekeeper app that requires you to manually clock in your hours

There are plenty of time-tracking apps for Mac that automatically log the hours you’ve spent signed in. Some even offer granular data, telling you how much time you spent in a particular app. A new app called Balance takes a slightly different approach to timekeeping, asking users to manually punch in and punch out the time they spend in front of a screen.

Balance hopes to help users build a set of healthy work habits rather than gather granular data about their productivity. It won’t tell you how long you had Slack, Microsoft Teams, Chrome or any other application open on your machine, but it will offer general insights into your overall system usage and the time spent in various sessions each week.

To make this system work, Balance sends you a reminder if your machine has been on for more than five minutes but you haven’t clocked in. Clocking out is simple, too: just lock your Mac. Sadly, if your system goes to sleep, Balance doesn’t register a clock-out.

Image Credits: Balance

As there is no automatic tracking, the app can’t tell whether you have taken a break, even when you step away from the computer. So it will remind you to take a break after 60 minutes. You can easily fine-tune these settings to your liking.

Balance also offers you a Pomodoro timer (25 minutes on and 5 minutes off) through the Focus mode menu. The app lives in the menu bar of your Mac, so you can quickly access all the options. It shows the active time of the current session by default, but you can change it to the total session duration including breaks or time since the last break was taken.
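The 25-minutes-on, 5-minutes-off cycle that Balance’s Focus mode follows is easy to sketch. The snippet below is an illustrative rewrite under our own names and defaults, not Balance’s actual code; the `tick` parameter stands in for the real waiting so the logic can be exercised instantly.

```python
import time

# Illustrative sketch of a Pomodoro cycle like the one Balance's Focus
# mode offers (25 minutes of work, 5 minutes of break). Durations and
# function names are our own assumptions, not Balance's implementation.

WORK_MINUTES = 25
BREAK_MINUTES = 5

def pomodoro_cycle(cycles: int, tick=time.sleep) -> list[str]:
    """Run `cycles` work/break rounds and return a log of the phases."""
    log = []
    for _ in range(cycles):
        log.append(f"work:{WORK_MINUTES}m")
        tick(0)  # stand-in for waiting WORK_MINUTES * 60 seconds
        log.append(f"break:{BREAK_MINUTES}m")
        tick(0)  # stand-in for waiting BREAK_MINUTES * 60 seconds
    return log

print(pomodoro_cycle(2))
```

Injecting `tick` keeps the timing logic testable; a real menu bar app would drive the same state machine from a system timer instead of blocking sleeps.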

Image Credits: Balance

Alexander Sandberg, the developer of Balance, says he built the app because he wanted a timekeeper that understands work-life balance. Working from home he often sat in front of his system way past his work hours, he told TechCrunch in an interview, and that’s when he thought of building Balance.

“I chose a manual clocking system for Balance because I believe it helps with creating a ‘ritual’ for checking in and out of work. Especially when working from home, it’s important to have something that helps you differentiate work time and non-work time. For instance, I’ve heard about people who go for a short walk to and from ‘the office’ at the beginning and at the end of the work days, even though their office is at home. This is to help the mind and body differentiate between life and work,” he told TechCrunch in an email.

While Balance is good for building the habit of clocking in and out, it could take some getting used to. You might forget to start or end many sessions, leaving you with inaccurate logs on both ends.

Balance is available for free for everyone with the Pro version costing $2.49 a month (or $24.99 a year) as an introductory price. Paying customers will get features like session history with trends data. Balance also gives users an option to export their logs if they want to stop using the app or just want to analyze their data in a different way.

Sandberg said he’s building more pro features, like a better session history overview with monthly and yearly views; categorization and labeling of sessions; and app and website blocking to help users focus.

Balance is a Mac timekeeper app that requires you to manually clock in your hours by Ivan Mehta originally published on TechCrunch

Baidu starts offering nighttime driverless taxis in China

Baidu, the Chinese internet giant that became known for its search engines, is making some big strides in autonomous driving.

Starting this week, the public can ride its robotaxis in Wuhan between 7 am and 11 pm without safety drivers behind the wheel. Previously, its unmanned vehicles could only operate from 9 am to 5 pm in the city. The updated scheme is expected to cover one million customers in certain areas of Wuhan, a city of more than 10 million people.

Like most autonomous vehicle startups, Baidu uses a mix of third-party cameras, radars and lidars to help its cars see better in low-visibility conditions, in contrast to Tesla’s vision-based approach.

In August, Baidu started offering fully driverless robotaxi rides, charging passengers at taxi rates. In Q3, Apollo Go, the firm’s robotaxi-hailing app, completed more than 474,000 rides, up 311% year over year. Cumulatively, Apollo Go had exceeded 1.4 million orders as of Q3.

That sounds like a potentially substantial revenue stream for Baidu, but one should take such figures with a grain of salt and ask: how many of these trips are subsidized by discounts? How many of them are repeatable, daily routes rather than one-off novelty rides taken by early adopters? To juice up performance numbers, it’s not uncommon to see Chinese robotaxi operators recruiting the public to ride in their vehicles.

It’s also tricky to tell which of China’s robotaxi upstarts have a lead at this stage. Their expansion is dependent on their relationship with the local city where they operate, and major cities often have the power to pass local legislation.

As one of the few remaining consumer internet sectors still with big room to grow, autonomous driving is getting warm support from local authorities nationwide. Case in point, Wuhan, an industrial hub in central China, is one of the first cities in the country to let robotaxis chauffeur the public without in-car safety operators. And now, the city seems to be comfortable with driverless cars roaming about even in low-light nighttime.

Setting aside a reasonable dose of skepticism, Baidu has indeed put a lot of effort into making the self-driving future arrive earlier. One of the moats it’s building is its visual-language model for identifying unseen or rare objects in long-tail scenarios. The AI is backed by Wenxin, the same large model that undergirds its text-to-image art platform.

“The model will enable autonomous vehicles to quickly make sense of an unseen object, such as special vehicle (fire truck, ambulance) recognition, plastic bag misdetection, and others,” Baidu previously explained. “In addition, Baidu’s autonomous driving perception model—a sub-model of the WenXin Big Model—leveraging more than 1 billion parameters, is able to dramatically improve the generalization potential of autonomous driving perception.”

Baidu starts offering nighttime driverless taxis in China by Rita Liao originally published on TechCrunch

Jakarta-based fintech Akulaku raises $200M from Japan’s largest bank

Jakarta-based fintech Akulaku has raised $200 million from Mitsubishi UFJ Financial Group (MUFG), the largest bank in Japan. This is part of a strategic investment, with the startup and MUFG planning to expand into new markets and products together in 2023. Earlier this year, Akulaku raised $100 million in funding from Siam Commercial Bank as part of another strategic investment. Its other backers include Ant Group (Akulaku launched a BNPL partnership earlier this year with Alipay+).

Akulaku, which operates in the Philippines and Malaysia in addition to Indonesia, offers a virtual credit card and installment shopping platform, as well as an investment platform and neobank. Founded in 2016, its target is to serve 50 million users by 2025.

As part of MUFG’s strategic investment, Akulaku has agreed to work with MUFG companies in Southeast Asia on tech, product development, financing and distribution. MUFG is focused on growing its presence in the region, and earlier this year it purchased the Philippine and Indonesian units of Home Credit BV for 596 million euros. Its focus on Southeast Asia comes as homegrown banks, like Singapore’s DBS Group Holdings and Indonesia’s Bank Central Asia, gain on MUFG in market cap.

In a statement, Kenichi Yamato, the managing executive officer and chief executive of MUFG Bank’s Global Commercial Banking Business Unit, said “Southeast Asia is key and a second market to MUFG. Our investment in Akulaku will further solidify our commitment in this region to meet growing financial needs of underserved customers.”

Jakarta-based fintech Akulaku raises $200M from Japan’s largest bank by Catherine Shu originally published on TechCrunch

Samsung raises over $10M for Global Goals

Samsung Electronics announced that it has reached a significant milestone in advancing the Sustainable Development Goals (SDGs), or the Global Goals, with a contribution of more than $10 million to the UN Development Programme (UNDP).

Government makes USB-C charging port mandatory: What it means for iPhone and Android smartphone users

Following in the European Union’s footsteps of mandating USB-C chargers for all electronics, the Indian government too is moving in a similar direction. The government has further chalked out its plan to standardise charging ports for smartphones, laptops and tablets in India. As part of this plan, the Bureau of Indian Standards (BIS) – the body responsible for quality certification of electronics and gadgets in the country – is formalising charging standards.

High-growth startups should start de-risking their path to IPO now

High-growth companies often set significant goals, knowing full well that the idea of “overnight success” is for the storybooks. However, there is no better time than the middle of a market downturn to start planning for the leap from a private to a public company.

De-risking the path to going public requires strategic planning, which takes time. Companies with goals to go public in less than three years must therefore plan for it now — despite the downturn — to get the running start they’ll need to navigate the open market.

Let’s explore why this adverse economy is ideal for planning an IPO and what to do about it.

Growth investors have recently pulled back

While some companies delay their IPOs, others can play catch-up and prepare for the time when the open market itches to invest again.

Carta reports that private fundraising levels have declined across the U.S. from a record-breaking 2021. Unsurprisingly, late-stage companies have experienced the brunt of this blow.

Market experts are currently encouraging leaders not to pin their hopes on venture capital dry powder, even though there’s plenty of it. As the graph below indicates, the size of late-stage funding rounds has shrunk.

Image Credits: Founder Shield

Although few enjoy market downturns, how this one unfolds can deliver insights to late-stage companies that pay attention. On one hand, many leaders are embracing the message of the Sequoia memo. We can agree with their ideas to prioritize profits over growth — scaling is different from what it used to be, and we must swallow that jagged pill.

On the other hand, cost-cutting and giving up hope of fundraising isn’t all doom and gloom. After all, when there is money to be found, some innovative founder will find it. We see it every day; only now, the path looks different.

Market downturns spur valuation corrections

Course-correcting is a concept frequently discussed amid market downturns. The pendulum swings one way for a period, then begins its journey toward a more balanced standard. In this case, the open market thrived on bloated valuations — most startups were overvalued before 2021.

Furthermore, many stated that 2021 was a miracle year, especially as VC investment nearly doubled to $643 billion. The U.S. sprouted more than 580 new unicorns and saw over 1,030 IPOs (over half were SPACs), significantly higher than the year before. This year has only welcomed about 170 public listings.

High-growth startups should start de-risking their path to IPO now by Ram Iyer originally published on TechCrunch

What to expect from AI in 2023

As a rather commercially successful author once wrote, “the night is dark and full of terrors, the day bright and beautiful and full of hope.” It’s fitting imagery for AI, which like all tech has its upsides and downsides.

Art-generating models like Stable Diffusion, for instance, have led to incredible outpourings of creativity, powering apps and even entirely new business models. On the other hand, Stable Diffusion’s open source nature lets bad actors use it to create deepfakes at scale — all while artists protest that it’s profiting off of their work.

What’s on deck for AI in 2023? Will regulation rein in the worst of what AI brings, or are the floodgates open? Will powerful, transformative new forms of AI, à la ChatGPT, emerge and disrupt industries once thought safe from automation?

Expect more (problematic) art-generating AI apps

With the success of Lensa, the AI-powered selfie app from Prisma Labs that went viral, you can expect a lot of me-too apps along these lines. And expect them to also be capable of being tricked into creating NSFW images, and to disproportionately sexualize and alter the appearance of women.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, said he expects the integration of generative AI into consumer tech to amplify the effects of such systems, both the good and the bad.

Stable Diffusion, for example, was fed billions of images from the internet until it “learned” to associate certain words and concepts with certain imagery. Text-generating models have routinely been easily tricked into espousing offensive views or producing misleading content.

Mike Cook, a member of the Knives and Paintbrushes open research group, agrees with Gahntz that generative AI will continue to prove a major — and problematic — force for change. But he thinks that 2023 has to be the year that generative AI “finally puts its money where its mouth is.”

Prompt by TechCrunch, model by Stability AI, generated in the free tool Dream Studio.

“It’s not enough to motivate a community of specialists [to create new tech] — for technology to become a long-term part of our lives, it has to either make someone a lot of money, or have a meaningful impact on the daily lives of the general public,” Cook said. “So I predict we’ll see a serious push to make generative AI actually achieve one of these two things, with mixed success.”

Artists lead the effort to opt out of data sets

DeviantArt released an AI art generator built on Stable Diffusion and fine-tuned on artwork from the DeviantArt community. The art generator was met with loud disapproval from DeviantArt’s longtime denizens, who criticized the platform’s lack of transparency in using their uploaded art to train the system.

The creators of the most popular systems — OpenAI and Stability AI — say that they’ve taken steps to limit the amount of harmful content their systems produce. But judging by many of the generations on social media, it’s clear that there’s work to be done.

“The data sets require active curation to address these problems and should be subjected to significant scrutiny, including from communities that tend to get the short end of the stick,” Gahntz said, comparing the process to ongoing controversies over content moderation in social media.

Stability AI, which is largely funding the development of Stable Diffusion, recently bowed to public pressure, signaling that it would allow artists to opt out of the data set used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time.

OpenAI offers no such opt-out mechanism, instead preferring to partner with organizations like Shutterstock to license portions of their image galleries. But given the legal and sheer publicity headwinds it faces alongside Stability AI, it’s likely only a matter of time before it follows suit.

The courts may ultimately force its hand. In the U.S., Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot, GitHub’s service that intelligently suggests lines of code, regurgitate sections of licensed code without providing credit.

Perhaps anticipating the legal challenge, GitHub recently added settings to prevent public code from showing up in Copilot’s suggestions and plans to introduce a feature that will reference the source of code suggestions. But they’re imperfect measures. In at least one instance, the filter setting caused Copilot to emit large chunks of copyrighted code including all attribution and license text.

Expect to see criticism ramp up in the coming year, particularly as the U.K. mulls over rules that would remove the requirement that systems trained on public data be used strictly non-commercially.

Open source and decentralized efforts will continue to grow

2022 saw a handful of AI companies dominate the stage, primarily OpenAI and Stability AI. But the pendulum may swing back towards open source in 2023 as the ability to build new systems moves beyond “resource-rich and powerful AI labs,” as Gahntz put it.

A community approach may lead to more scrutiny of systems as they are being built and deployed, he said: “If models are open and if data sets are open, that’ll enable much more of the critical research that has pointed to a lot of the flaws and harms linked to generative AI and that’s often been far too difficult to conduct.”

Image Credits: Results from OpenFold, an open source AI system that predicts the shapes of proteins, compared to DeepMind’s AlphaFold2.

Examples of such community-focused efforts include large language models from EleutherAI and BigScience, an effort backed by AI startup Hugging Face. Stability AI is funding a number of communities itself, like the music-generation-focused Harmonai and OpenBioML, a loose collection of biotech experiments.

Money and expertise are still required to train and run sophisticated AI models, but decentralized computing may challenge traditional data centers as open source efforts mature.

BigScience took a step toward enabling decentralized development with the recent release of the open source Petals project. Petals lets people contribute their compute power, similar to Folding@home, to run large AI language models that would normally require a high-end GPU or server.

“Modern generative models are computationally expensive to train and run. Some back-of-the-envelope estimates put daily ChatGPT expenditure to around $3 million,” Chandra Bhagavatula, a senior research scientist at the Allen Institute for AI, said via email. “To make this commercially viable and accessible more widely, it will be important to address this.”

Chandra points out, however, that large labs will continue to have competitive advantages as long as the methods and data remain proprietary. In a recent example, OpenAI released Point-E, a model that can generate 3D objects given a text prompt. But while OpenAI open sourced the model, it didn’t disclose the sources of Point-E’s training data or release that data.

Point-E generates point clouds.

“I do think the open source efforts and decentralization efforts are absolutely worthwhile and are to the benefit of a larger number of researchers, practitioners and users,” Chandra said. “However, despite being open-sourced, the best models are still inaccessible to a large number of researchers and practitioners due to their resource constraints.”

AI companies buckle down for incoming regulations

Regulation like the EU’s AI Act may change how companies develop and deploy AI systems moving forward. So could more local efforts like New York City’s AI hiring statute, which requires that AI and algorithm-based tech for recruiting, hiring or promotion be audited for bias before being used.

Chandra sees these regulations as necessary especially in light of generative AI’s increasingly apparent technical flaws, like its tendency to spout factually wrong info.

“This makes generative AI difficult to apply for many areas where mistakes can have very high costs — e.g. healthcare. In addition, the ease of generating incorrect information creates challenges surrounding misinformation and disinformation,” she said. “[And yet] AI systems are already making decisions loaded with moral and ethical implications.”

Next year will only bring the threat of regulation, though — expect much more quibbling over rules and court cases before anyone gets fined or charged. But companies may still jockey for position in the most advantageous categories of upcoming laws, like the AI Act’s risk categories.

The rule as currently written divides AI systems into one of four risk categories, each with varying requirements and levels of scrutiny. Systems in the highest risk category, “high-risk” AI (e.g. credit scoring algorithms, robotic surgery apps), have to meet certain legal, ethical and technical standards before they’re allowed to enter the European market. The lowest risk category, “minimal or no risk” AI (e.g. spam filters, AI-enabled video games), imposes only transparency obligations like making users aware that they’re interacting with an AI system.
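The four-tier scheme described above can be illustrated as a simple mapping. The tier names and example systems below paraphrase the draft Act as summarized here; the code itself (class names, strings, `obligations` helper) is our own illustrative sketch, not an official or complete classification.

```python
from enum import Enum

# Illustrative sketch of the AI Act's four risk tiers as described in
# the article. Names and examples are paraphrased; not an official mapping.

class Risk(Enum):
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

EXAMPLES = {
    "credit scoring algorithm": Risk.HIGH,
    "robotic surgery app": Risk.HIGH,
    "spam filter": Risk.MINIMAL,
    "ai-enabled video game": Risk.MINIMAL,
}

def obligations(risk: Risk) -> str:
    """Summarize the obligations attached to a tier (simplified)."""
    if risk is Risk.HIGH:
        return "meet legal, ethical and technical standards before EU market entry"
    if risk is Risk.MINIMAL:
        return "transparency only: tell users they are interacting with an AI system"
    return "varies by tier"
```

The sketch shows why companies might jockey for position: which tier a system lands in determines the whole compliance burden.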

Os Keyes, a Ph.D. candidate at the University of Washington, expressed worry that companies will aim for the lowest risk level to minimize their own responsibilities and visibility to regulators.

“That concern aside, [the AI Act is] really the most positive thing I see on the table,” they said. “I haven’t seen much of anything out of Congress.”

But investments aren’t a sure thing

Gahntz argues that, even if an AI system works well enough for most people but is deeply harmful to some, there’s “still a lot of homework left” before a company should make it widely available. “There’s also a business case for all this. If your model generates a lot of messed up stuff, consumers aren’t going to like it,” he added. “But obviously this is also about fairness.”

It’s unclear whether companies will be persuaded by that argument going into next year, particularly as investors seem eager to put their money behind any promising generative AI company.

In the midst of the Stable Diffusion controversies, Stability AI raised $101 million at an over-$1 billion valuation from prominent backers including Coatue and Lightspeed Venture Partners. OpenAI is said to be valued at $20 billion as it enters advanced talks to raise more funding from Microsoft. (Microsoft previously invested $1 billion in OpenAI in 2019.)

Of course, those could be exceptions to the rule.

Image Credits: Jasper

Outside of self-driving companies Cruise, Wayve and WeRide and robotics firm MegaRobo, the top-performing AI firms in terms of money raised this year were software-based, according to Crunchbase. Contentsquare, which sells a service that provides AI-driven recommendations for web content, closed a $600 million round in July. Uniphore, which sells software for “conversational analytics” (think call center metrics) and conversational assistants, landed $400 million in February. Meanwhile, Highspot, whose AI-powered platform provides sales reps and marketers with real-time and data-driven recommendations, nabbed $248 million in January.

Investors may well chase safer bets like automating analysis of customer complaints or generating sales leads, even if these aren’t as “sexy” as generative AI. That’s not to suggest there won’t be big attention-grabbing investments, but they’ll be reserved for players with clout.

What to expect from AI in 2023 by Kyle Wiggers originally published on TechCrunch
