AI Gadgets Comparison: Rabbit R1, Humane AI Pin and Ray-Ban Meta Smart Glasses
Updated: May 08 2024 15:20
In today's fast-paced world, our mobile phones have become our primary digital personal assistants, thanks to voice assistant bots like Alexa, Google Assistant, and Siri. While these bots offer some level of vocal interaction, they are merely scratching the surface when it comes to handling our personal and professional tasks and routines. A new ecosystem is emerging, built on key AI technologies from OpenAI, Apple, Google, and Meta for automated task discovery and consumer knowledge management. These advancements point towards a future where LLM-powered AI Gadgets (or "AI Personal Assistants") will be able to handle a wide range of intelligent tasks seamlessly and autonomously.
Large Action Models (LAMs)
The latest focus in the AI ecosystem is on Large Action Models (LAMs), which enable 'bots' to act as proper assistants by carrying out various real-world tasks and performing entire end-to-end routines. Currently, the AI ecosystem resembles the mobile app ecosystem, consisting of individual and separate 'apps' that carry out specific tasks. However, when users want to complete a whole routine or transact something, they need to manually string together multiple apps, resulting in a convoluted and clunky process.
LAMs aim to connect the 'action' ecosystem by bringing everything into a single platform that can handle all tasks seamlessly, much like a human personal assistant would. To capture the structured nature of human-computer interactions within applications, LAMs use neuro-symbolic programming. This AI approach combines techniques from neural networks, which are inspired by the structure of the brain, with symbolic AI, which deals with logic and symbols. Neuro-symbolic techniques help LAMs understand and represent the complex relationships between actions and human intention.
A LAM assists the user in performing an action through extensive knowledge of user interfaces. During training, the LAM model learns what a large number of user interfaces of websites and applications look like and how they work. LAMs use a technique called "imitation through demonstration" or "learning through demonstration": they examine how people engage with interfaces as they click buttons or enter data, and then accurately mimic these actions. By accumulating knowledge from examples provided by users, they become more adaptable to interface changes and capable of handling diverse tasks.
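To make "learning through demonstration" concrete, here is a minimal sketch of how a demonstration might be recorded as a trace of UI events and replayed later. The event schema and the `driver` automation API are hypothetical illustrations, not Rabbit's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class UIEvent:
    """One recorded interaction with a UI element (hypothetical schema)."""
    action: str      # e.g. "click" or "type"
    target: str      # a stable selector for the element
    value: str = ""  # text entered, if any

def record_demonstration(events: list[UIEvent]) -> list[UIEvent]:
    """Keep only the replayable actions from a user's demonstration."""
    return [e for e in events if e.action in {"click", "type"}]

def replay(trace: list[UIEvent], driver) -> None:
    """Re-execute a learned trace against an assumed UI automation driver."""
    for event in trace:
        element = driver.find(event.target)  # hypothetical driver API
        if event.action == "click":
            element.click()
        elif event.action == "type":
            element.type(event.value)

# A demonstration of requesting a ride, reduced to a replayable trace:
demo = [
    UIEvent("click", "button#request-ride"),
    UIEvent("type", "input#destination", "SFO Airport"),
    UIEvent("click", "button#confirm"),
]
trace = record_demonstration(demo)
```

In practice the hard part is generalization: the model has to map a recorded trace onto an interface that has since changed, which is where the neuro-symbolic machinery comes in.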
Also check out my earlier post about AI Agents, where an LLM engages in an iterative process, such as writing an essay outline, conducting research, drafting, analyzing, and revising the content. It also covers multi-agent collaboration, in which multiple AI agents work together, each playing a different role, to arrive at better solutions than a single agent could. The spotlight on LAMs has grown brighter with the recent launch of Rabbit’s artificial intelligence device, the R1.
LAMs vs LLMs: Understanding the Difference
The distinction between Large Action Models (LAMs) and Large Language Models (LLMs) lies in their capabilities and focus areas.
LLMs are adept at generating text based on input prompts. They are proficient in understanding and generating natural language text. However, they may not be optimized for task execution. For instance, while an LLM can recommend which flight is best to choose, the actual booking on the airline's website would still need to be done by the user.
On the other hand, LAMs focus on understanding actions and orchestrating sequences of actions to accomplish specific goals. They are designed not just to understand language but also to act on that understanding. They can book appointments, make reservations, or complete forms by interacting with applications or systems. LAMs are specifically designed to understand human intentions and perform actions in applications or systems, making them better suited to task-oriented interactions.
While Large Language Models rely primarily on neural network architectures for language processing, LAMs often incorporate hybrid approaches that combine neural networks with symbolic reasoning or planning algorithms. This enables LAMs to understand both the language context and the underlying structure of actions required to accomplish tasks effectively.
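As a toy illustration of that division of labor, the sketch below contrasts an LLM that stops at a recommendation with a LAM that emits an executable plan of symbolic actions. Every name, URL, and step here is invented for illustration; it is not any vendor's actual API:

```python
# Hypothetical contrast between answering in language (LLM) and
# planning executable actions (LAM).

def llm_answer(prompt: str) -> str:
    """An LLM stops at language: it can recommend, but not act."""
    return "The 9:05 AM nonstop is usually the cheapest option."

def lam_plan(intent: str) -> list[tuple[str, dict]]:
    """A LAM turns the same intent into an ordered, executable
    action sequence -- the symbolic-planning half of the hybrid."""
    if intent == "book_flight":
        return [
            ("open_site", {"url": "https://airline.example.com"}),
            ("search", {"route": "SJC->SEA", "date": "2024-05-20"}),
            ("select_fare", {"rank": "cheapest"}),
            ("checkout", {"payment": "saved_card"}),
        ]
    return []

print(llm_answer("Which flight should I take to Seattle?"))
for step, args in lam_plan("book_flight"):
    print(f"execute {step} with {args}")
```

The executor behind each step would be exactly the kind of learned UI interaction described above.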
The Rabbit R1 Pocket Companion
Rabbit Technologies has taken the AI personal assistant concept to the next level with their Rabbit R1 Pocket Companion. This low-cost, handheld device not only provides the software and engine behind the assistant but also enables users to undertake tasks from anywhere in the world. The Rabbit R1 has already sold out of its 5th batch of preorders, making it a standout product from the recent CES show.
Running on the Rabbit OS, which is powered by LAM, the R1 features a natural language interface, making it a user-friendly device that can be controlled mostly by voice. The R1 is not your standard device. It doesn’t have apps like a smartphone. Instead, it relies on a more intuitive interface. The device is equipped with a screen, a built-in microphone, a rotating camera, and an analog button to start a conversation with the AI.
The developers of Rabbit R1 claim that the Rabbit OS understands everything the user tells it. It recognizes human intentions, which vary from person to person and can change rapidly. In addition to understanding what users are saying, it also performs actions for them. Whether it’s translating languages, navigating the web, answering questions, playing music on demand, or even ordering plane tickets, the R1 can do it all.
The Rabbit R1 aims to replace smartphones in many areas by carrying out tasks on behalf of the user through a singular, universal interface. It can set itineraries, make reservations, book tickets, and order various goods and services. The device's Large Action Models are organized into four categories: OPTIMAL, EXPLORATORY, PLANNED, and EXPERIMENTAL, covering a wide range of functionalities. Check out the recent quarterly update video:
MKBHD calls the Rabbit R1 “Barely Reviewable”. According to him, the Rabbit R1 has a Large Action Model, which means it is supposed to be able to use apps for you like a human would. However, the Rabbit R1 is still in development, and many of its features are not yet available. For example, the Rabbit R1 can only control four apps: Spotify, Uber, DoorDash, and Midjourney. It also does not have a generative UI, which means that it cannot create UIs for apps that it has not been trained on. Additionally, the teach mode, which would allow users to teach the Rabbit R1 how to use new apps, is not yet available.
Basically, if you are willing to take a gamble on a product that could be very useful in the future, then the Rabbit R1 may be worth considering. Here are some of the early reviews of the Rabbit R1:
The Shortcut 5/10: don't buy this AI device yet - "Being able to physically hold artificial intelligence in my hand with the Rabbit R1 for just $199 makes it a tempting purchase. But, throughout my review, you’ll get the sense this “AI pocket companion” is meant for two types of people: early adopters who love to tinker with frustratingly unfinished tech, and AI chatbot enthusiasts who want to explore the newest AI out there so much they’re willing to overlook bugs."
The Verge 3/10: nothing to see here - "Artificial intelligence might someday make technology easier to use and even do things on your behalf. All the Rabbit R1 does right now is make me tear my hair out. But until the hardware, software, and AI all get better and more differentiated, I just don’t think we’re getting better than smartphones. The AI gadget revolution might not stand a chance. The Rabbit R1 sure doesn’t."
Tom's Guide 3/10: Avoid this AI gadget - "The Rabbit R1 promises to make your life easier with its AI capabilities but its unreliable performance, inaccurate answers and short battery life make it impossible to recommend. I would wait to even consider buying this until the company works out the bugs. Will the Rabbit R1 get better over time? For sure, and Rabbit promises all sorts of future upgrades like point-of-interest research, navigation and a teach mode so the R1 can learn more things. But I just don’t see the R1 taking off because it’s yet another device you need to carry around as it’s not designed to replace your phone. It’s more of a companion device."
Engadget 4/10: A $199 AI toy that fails at almost everything - "The Rabbit R1 is a cute AI gadget, but at launch it’s riddled with issues and terrible battery life. When phones can handle similar AI tasks, the R1 doesn’t do enough to justify its existence. It can play music from Spotify (if you have a paid subscription), but what's the point of doing that with its terrible 2-watt speaker? You can ask the R1 to generate art via Midjourney AI (again, with a paid account), but it often failed to show me the pictures that were created."
WIRED 3/10: Review: Rabbit R1 - "Too few features at launch. Big privacy concerns. Answers provided by the LLM can often be incorrect. Scroll wheel is annoying to use. Third-party integrations are half-baked. In the end, the biggest issue boils down to the fact that I now have to carry two devices. Rabbit was clear in saying that the R1 will not replace your phone, but if I can do all of the same tasks and so much more on my smartphone (Google's Gemini has given me identical if not better results than the R1), I have no reason to use it."
Rabbit's R1 is an Android App in Disguise?
The R1 might be nothing more than a glorified Android app, as demonstrated by Mishaal Rahman from Android Authority, who managed to run the Rabbit launcher APK on a Google Pixel 6A. Rahman downloaded the Rabbit launcher APK and, with some tweaking, successfully ran it on his Pixel 6A. Using the volume-up key to mimic the R1's single hardware button, he was able to set up an account and interact with the app, just as one would with the actual R1 device. This revelation suggests that the R1's functionality is more akin to a standard Android app than a standalone gadget.
While the experiment was successful, Rahman acknowledges that the app might not offer the full range of features available on the R1. The launcher app is designed to be preinstalled on the R1's firmware with system-level permissions, some of which could not be granted on the Pixel 6A. As a result, certain functions may not work as intended when running the app on a different device.
Rabbit co-founder and CEO Jesse Lyu tried to dispute these claims on X/Twitter and apparently disabled Mishaal's access from the cloud. Lyu's response focused on the common practice of using AOSP as the client. He wrote: "with all the respect - while most of the non iOS consumer devices runs on modified AOSP as client, i don't think you understand that client apk can be duplicated and bootlegged while all the actual service lives on the cloud? and why that bootleg apk is not working? try now."
The revelation that the R1's core functionality can be replicated on a mid-range smartphone from two years ago raises doubts about the need for a dedicated AI gadget. While the R1 may offer some unique features or optimizations, the fact that its essence can be distilled into an Android app suggests that it might not be the groundbreaking device it was initially touted to be. As we await an official response from Rabbit, it's worth considering whether the future of AI lies in standalone gadgets or in the apps that power our ever-present smartphones.
Cloneable Template of Rabbit R1 App
Another X/Twitter user, Will Hobick of FlutterFlow, posted that he would be releasing a "cloneable template" of the Rabbit R1 app later in the week. He demonstrates a version of the app running on an iPhone. As you can see in the demo video below, the app-based version running on iPhone offers the same core functionality as the device itself: it shows the app answering a verbal query, complete with the same animations as the actual Rabbit R1 device.
The r1 rabbit working on IOS 👀
This app utilizes device time, battery life, haptic touch, camera and more. Built in just a few hours and working as a PWA, IOS and Android (see thread)
Android Authority also analyzed the firmware of the R1 and revealed that Rabbit did not make significant modifications to the BSP (Board Support Package) provided by MediaTek. The R1 actually ships with all the standard apps included in AOSP and those provided by MediaTek. Rabbit only made a few changes to the AOSP build, including the R1 launcher app, a fork of the “AnySoftKeyboard” app with a custom theme, an OTA updater app, and a custom boot animation.
Android Authority concludes that "it’s to set the record straight and refute the claim that custom hardware with 'very bespoke AOSP' was necessary. Yes, it’s true that all the R1 launcher does is act as a local client to the cloud services offered by Rabbit, which is what truly handles the core functionality. It’s also true that there’s nothing wrong or unusual with companies using AOSP for their own hardware. But the fact of the matter is that Rabbit does little to justify its use of custom hardware except by making the R1 have an eye-catching design."
Another X user Marcel (@MarcelD505) posted a video showing the Rabbit R1 running a generic build of Android with the LineageOS ROM, revealing the spec details as MediaTek Helio P35, 4GB of RAM, and 128GB of storage:
I pinky promise this is not a photoshop, we will do a writeup eventually, here is a short video. Note that this isn't my video and device but from someone in our team. We do all have the knowledge on how to do it. pic.twitter.com/JsImDj3Wls
The Humane AI Pin
The tech world has been buzzing lately about the Humane AI Pin, a wearable AI assistant that promises to revolutionize the way we interact with technology. It is a sleek, minimalist pin that attaches magnetically to your clothes. It boasts a team of AI assistants – your Researcher, Interpreter, Photographer, Communicator, and DJ – all ready to answer your questions, translate languages, and even capture photos or videos with voice commands.
While the Pin itself is undeniably attractive, early reviews show that the functionality feels unfinished. The AI assistants can answer basic questions and translate simple phrases, but complex queries often result in nonsensical responses or frustrating delays. Taking photos and videos with voice commands feels clunky compared to using a phone, and the "DJ" feature is more like a glorified white noise machine. At a whopping $700, the Pin feels like a luxury item with budget-tier performance.
Humane AI Pin Reviews
MKBHD calls the Humane AI Pin “the worst product I’ve ever reviewed”, and the video has 6.9M views so far! Here are some of the early reviews of the Humane AI Pin:
The Verge 4/10: Humane AI Pin review: not even close - "I stand in front of a store or restaurant, press and hold on the touchpad, and say, Look at this restaurant and tell me if it has good reviews. The AI Pin snaps a photo with its camera, pings some image recognition models, figures out what I’m looking at, scours the web for reviews, and returns it back. AI Pin can’t set an alarm or a timer. It can’t add things to your calendar, either, or tell you what’s already there. The problem with so many voice assistants is that they can’t do much — and the AI Pin can do even less."
Engadget 5/10: The Humane AI Pin is the solution to none of technology's problems - "One singular thing that the AI Pin actually manages to do competently is act as an interpreter. I tried talking to myself in English and Mandarin, and was frankly impressed with not only the accuracy of the translation and general vocal expressiveness, but also at how fast responses came through. Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it."
WIRED 4/10: Review: Humane Ai Pin - "Quick access to AI text and speech models. Polished accessories. Hands-free phone calls are nice. Seamless setup. Solid real-time translation capabilities. Scarce features at launch. Thermal issues cause overheating. Accuracy of answers is mixed (and it's slow). Projector is annoying to interact with and is impossible to see in daylight. Poor photos and videos in low light. Can't sync the Ai Pin's number to a cell number. Easy for others to hijack or steal."
Inverse: Humane’s Ai Pin Isn't Ready to Replace Your Phone, But One Day It Might - "Compared to Alexa, Google Assistant, and yes, even Siri, getting an answer to certain basic questions like “What’s the weather?” using the Ai Pin can take as long as six seconds. That may not seem like a long wait, but when the other assistants can answer almost immediately, the Ai Pin feels like a turtle crawling while the hares race by, leaving a trail of dust. ... Vision, a feature in beta that uses the camera to “see” and identify what’s in front of it also impressed me. ... Take the 13-megapixel ultra-wide camera. It’s simply terrible. Photos look like they were taken with an iPhone 4 with poor dynamic range, no shadow detail, and overall bad sharpness."
Humane.Center account web portal
If you're trying to manage the AI Pin using the Humane.Center web portal, it can feel like a bit of a treasure hunt sometimes! First off, there's no app for iOS or Android, which means you're stuck with browser-only access to the Humane.Center. You could whip up a Safari shortcut and pin it to your iPhone home screen, but let's be honest, that just makes you yearn for a native app even more.
Below are two screenshots from Inverse of the Humane.Center account web portal. It’s where you see all the transcriptions with the AI Mic, music you’ve played, photos and video, notes, and more. Also shown is the photos and videos screen of the web portal after media has been uploaded from the Ai Pin into Humane.Center.
Ray-Ban Meta Smart Glasses
The Ray-Ban Meta Smart Glasses, launched last fall, have recently received a significant upgrade with the addition of multimodal AI. This new feature allows the glasses to process various types of information, such as photos, audio, and text, making them a more versatile and useful wearable device. Despite some limitations and quirks, the Meta glasses offer a glimpse into the future of AI-powered gadgets and their potential to seamlessly integrate into our daily lives. Note that existing owners of the Ray-Ban Meta Smart Glasses only need to update their glasses in the Meta View app to access the new features.
Below is the video posted on X by Brian Stout (@stoutiam), recording a run in Bodega Bay Dunes:
Video from a run in Bodega Bay Dunes. Love this part of the world. Shot with @ray_ban @Meta smart glasses.
The Power of Multimodal AI in Ray-Ban Meta Smart Glasses
Multimodal AI enables the Meta glasses to understand and respond to a wide range of user queries. By simply saying, "Hey Meta, look and..." followed by a specific command, users can ask the glasses to identify plants, read signs in different languages, write Instagram captions, or provide information about landmarks and monuments. The glasses capture a picture, process the information in the cloud, and deliver the answer through the built-in earphones. While the possibilities are not endless, the AI's capabilities are impressive and constantly evolving. Here are some examples of what you can do with the multimodal AI function (see the sketch after this list):
Ask about what you see yourself: "Hey Meta, look and describe what I'm seeing."
Understand text: "Hey Meta, look and translate this text into English."
Get gardening tips: "Hey Meta, look and tell me how much water these flowers need?"
Express yourself: "Hey Meta, look and write a funny Instagram caption about this dog."
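Each "look and..." command presumably follows the capture-then-query flow described above. Below is a minimal sketch of that pipeline with stand-in helpers; none of this is Meta's actual API, just the shape of the round trip:

```python
# Hypothetical flow for a "Hey Meta, look and ..." command:
# photo from the camera + the spoken instruction -> cloud
# vision-language model -> answer spoken through the earphones.

def capture_frame() -> bytes:
    """Stand-in for grabbing a photo from the glasses' camera."""
    return b"<jpeg bytes>"

def vlm_query(image: bytes, prompt: str) -> str:
    """Stand-in for the cloud multimodal model round trip."""
    return f"(model answer to: {prompt})"

def speak(text: str) -> None:
    """Stand-in for text-to-speech through the built-in earphones."""
    print(text)

def handle_look_command(instruction: str) -> None:
    frame = capture_frame()
    speak(vlm_query(image=frame, prompt=instruction))

handle_look_command("Translate this text into English.")
```

The cloud hop in the middle is also why answers depend so heavily on connectivity, a recurring complaint in the reviews below.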
Last month, Meta released the next big thing in AI models, Llama 3. It’s the latest AI model to be offered by Meta free of charge and with a relatively open (though not open-source) license that lets developers deploy it in most commercial apps and services. Meta's new model scores significantly better than its predecessor in benchmarks without an increase in model size; the secret is training on far more data. Check out my earlier post about the Llama 3 model.
Ray-Ban Meta Smart Glasses Strengths and Weaknesses
Like most AI systems, Meta's multimodal AI has its strengths and weaknesses. It can be incredibly accurate in identifying certain objects, such as specific car models or plant species. However, it can also be confidently wrong at times, mistaking one object for another or providing irrelevant information. The AI's performance is often dependent on the quality of the image captured and the user's ability to frame the question in a way that the AI can understand.
One of the key advantages of the Ray-Ban Meta Smart Glasses is their familiar form factor. As a pair of glasses with built-in headphones, they feel natural and comfortable to wear. Users are already accustomed to talking through earbuds, making it less awkward to interact with the AI assistant. The glasses' design allows for a seamless integration of AI technology into a well-known and widely-used accessory.
Ray-Ban is also introducing limited-edition Ray-Ban Meta smart glasses in an exclusive Scuderia Ferrari colorway. This Miami 2024 special edition is the perfect fusion of iconic design, racing heritage, and cutting-edge technology.
Using the Meta glasses' AI requires a bit of a learning curve. Users need to adapt to the AI's language and understand its limitations. For example, the lack of a zoom feature can hinder the AI's ability to identify distant objects accurately. However, users can often find workarounds, such as taking a picture of a picture, to help the AI along. As users become more familiar with the technology, they can better leverage its capabilities to enhance their daily experiences.
Ray-Ban Meta Smart Glasses Reviews
Here are some of the reviews of Ray-Ban Meta Smart Glasses:
The Verge: The Ray-Ban Meta Smart Glasses have multimodal AI now - "It can be handy, confidently wrong, and just plain finicky — but smart glasses are a much more comfortable form factor for this tech. To me, it’s the mix of a familiar form factor and decent execution that makes the AI workable on these glasses. Because it’s paired to your phone, there’s very little wait time for answers. It’s headphones, so you feel less silly talking to them because you’re already used to talking through earbuds. In general, I’ve found the AI to be the most helpful at identifying things when we’re out and about. It’s a natural extension of what I’d do anyway with my phone."
Tom's Guide: Ray-Ban Meta smart glasses just got a ton of upgrades, including new AI features and video calling - "Having experimented with the multimodal AI integration through limited beta access in recent months, I've found that it mostly succeeds in identification. For example, Meta AI could name some New York City landmarks just by taking a picture through the glasses. But it's not right every time, and the glasses are prone to the same kind of occasional connectivity headaches that reviewers reported for the Humane AI Pin. Good looks are a major perk of the Ray-Ban Meta Smart Glasses. They mostly look like an average pair of designer glasses."
ZDNet: Meta's Ray-Ban smart glasses just got another useful feature for free (and a new style) - "The improvements to the Ray-Ban Meta glasses and sunglasses include better integration with Apple Music, support for multimodal AI, and compatibility with WhatsApp and Messenger, allowing users to stream what they're seeing from the sunglasses themselves. Meta is focusing the wearable on features wearers of smart glasses will already be used to, while adding new capabilities such as live video integrated into common messaging apps. The ability to share your view on WhatsApp and Messenger is completely hands-free, letting you show exactly what you're seeing in real time."
Popsugar: Even Non-Techy Folks Will Love the Ray-Ban Meta Smart Glasses - "Given that plain-old designer sunglasses can cost upwards of $300, I'd definitely say that what you're getting with these glasses is worth it — think of them as headphones, a camera, a smart assistant, and shades all in one. Particularly if you're a fan of Ray-Bans, then there's no reason not to opt for all of these cool features. What's more, even non-techy folks will love these. They've easily become part of my daily life; they're really just there to help enhance your life, whether by capturing what's around you easier or providing AI answers to your questions."
iFixit Teardowns of Rabbit R1 & Humane AI Pin
Comparing Rabbit R1, Humane AI Pin and Ray-Ban Meta Smart Glasses
Rabbit R1: Priced at $199, it is a pocket-sized device that connects directly to cellular or Wi-Fi networks. Users press and hold a button to ask the assistant a question and navigate through on-screen menus using a scroll wheel. While the R1 promises live translation, it struggled with basic phrases during testing and had slow response times. The device also had issues with basic tasks like playing music and setting timers.
Humane AI Pin: Priced at $699, it is a cell-connected device that clips to your shirt. Users can ask questions by holding down the touchpad, and the built-in "laser ink" display projects the answer on their palm. The pin offers live translation in over a dozen languages, enabling users to have conversations with people speaking different languages. However, the display can be difficult to see outdoors, and the device's response times can be slow.
Ray-Ban Meta Smart Glasses: Priced at $299 ($329 with polarized lenses), it offers a familiar glasses form factor with added AI capabilities. By simply saying "Hey Meta," users can access the assistant and ask questions about their surroundings. The glasses excel at visual recognition, consistently identifying dog and cat breeds and providing contextual answers. However, they rely on a smartphone connection and lack live audio translation.
While all three AI gadgets show promise, the Ray-Ban Meta Smart Glasses emerge as the most practical and user-friendly option. Despite lacking live audio translation and requiring a smartphone connection, the glasses offer excellent visual recognition, contextual answers, and a familiar form factor at a reasonable price point. The Humane AI Pin and Rabbit R1, while ambitious, suffer from slow response times and inconsistent performance. As these companies release software updates to address issues, they may become more viable alternatives in the future.
Here is a comparison video by WSJ’s Joanna Stern who put these devices through a series of tests, including translation and visual search:
Key Success Factors of AI Gadgets or AI Personal Assistants
Personalization
AI Gadgets with AI agents excel at providing personalized experiences to users. By learning from user behavior, preferences, and patterns, AI agents can tailor recommendations, suggestions, and even anticipate user needs. This level of personalization enhances user satisfaction and creates a stronger bond between the user and their device.
The Humane AI Pin is designed to understand your intention and learn your context: where you are, what you like, and what you’re doing, so it can give you what you want automatically (if it understands what you want correctly).
The Rabbit R1's teach mode is designed to let you teach it new tasks and customize its behavior to your liking. For example, you could show it how you book flights on your favorite airline, and it would learn to replicate the process for future bookings. This level of personalization could make the R1 feel like a genuinely helpful assistant once the feature ships.
Visual Communication
Many AI Gadgets rely on the display to convey information. This could be anything from showing data and graphs to displaying images and videos. A poor quality display can make it difficult to see this information clearly, hindering the user experience.
The on-palm projection feature of the Humane AI Pin has been criticized for being overly sensitive, slow to navigate, and difficult to see in bright light. The Rabbit R1’s screen has also been criticized for being too small and not touch-friendly.
The Ray-Ban Meta Smart Glasses do not have a visual display, but they have a built-in 12MP camera that shoots video at 1080p resolution, and they can describe what the camera sees.
Frictionless Interaction
AI Gadgets are meant to simplify tasks and integrate seamlessly into our lives. Bugs and slow responses create friction in this interaction. Imagine wanting to quickly set a timer or adjust the thermostat, but the device takes too long or malfunctions. The Humane AI Pin has been criticized for its bugs and slow response times, with delays of up to 13 seconds, and one reviewer even reported it can take up to five minutes to recognize a newly attached battery booster.
These can be a huge turn-off for users expecting a snappy assistant, and the sluggishness disrupts the flow of conversation, making it feel less like a natural interaction. In contrast, one reviewer of the Rabbit R1 praised its performance as “10x faster than most voice AI projects,” noting that it “answers questions within 500ms” and saying it lives up to the hype.
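If you want to sanity-check latency claims like "500ms" versus "13 seconds" yourself, a wall-clock timer around a single request is enough. A minimal sketch, with a mock assistant standing in for whichever device or API you are measuring:

```python
import time

def timed_request(ask, query: str) -> tuple[str, float]:
    """Measure wall-clock latency of one assistant round trip."""
    start = time.perf_counter()
    answer = ask(query)
    return answer, time.perf_counter() - start

def mock_assistant(query: str) -> str:
    """Stand-in assistant; replace with a real device or API call."""
    time.sleep(0.5)  # simulate a ~500 ms response
    return "It's 72°F and sunny."

answer, latency = timed_request(mock_assistant, "What's the weather?")
print(f"{latency:.2f}s -> {answer}")
```

Anything consistently beyond a second or two starts to feel slower than just pulling out your phone, which is exactly the complaint running through these reviews.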
The Ray-Ban Meta glasses connect to your phone via Bluetooth. The app prompts you to connect to Spotify Tap, which allows you to quickly start playing music with touch controls on the glasses without opening the Spotify app on your phone. You then pick a language and voice for the digital assistant and connect to Facebook Messenger, Whatsapp, and/or your phone’s text messaging app. Finally, you get a nice message politely asking you to “respect others’ privacy.”
Reliability
The AI Pin seems prone to malfunctions and misunderstandings. Reviews mentioned issues like hysterical laughter and misidentifying landmarks. Some reviewers found that the AI features weren't very reliable, for instance giving incorrect information or refusing to perform basic tasks like restarting. These bugs can be frustrating and erode trust in the device's reliability. The Rabbit R1 also had some launch-day bugs reported by reviewers, but they are much less severe than the AI Pin's, and most reviewers think they are fixable over time.
Battery Life
AI Gadgets should have battery life that lasts the whole day. The battery life of both the Rabbit R1 and Humane AI Pin has been a point of concern among reviewers. The Humane AI Pin has an internal battery that lasts about four to five hours on a single charge, and it comes with a standard external battery pack that can extend the runtime to about nine hours. Despite this, reviewers have mentioned that the battery life isn’t great, especially if you plan to use it throughout the day. The device frequently gets too warm and needs to “cool down,” which could also affect the battery life.
The Ray-Ban Meta smart glasses should hold roughly four hours of battery in the frames themselves, so with eight extra charges in the case, you’re looking at 36 hours total. At 15% remaining battery, the built-in digital assistant starts giving an audible warning about the low charge. It also tells you to “charge your glasses for full functionality.”
Continuous Learning and Improvement
AI agents on AI Gadgets are designed to learn and improve over time. Through machine learning algorithms, they can adapt to user preferences, speech patterns, and behaviors. As users interact more with their AI agents, the agents become smarter and more efficient in understanding and fulfilling user requests. This continuous learning ensures that the user experience keeps getting better.
While initial training might be needed for certain tasks, the Rabbit R1’s continuous learning ensures it adapts to your unique workflow. The more you use it, the more intuitive and efficient it becomes.
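As a rough illustration of how this kind of adaptation works, a preference model can be as simple as a running tally of past choices. Real systems are far more sophisticated, but the feedback loop looks similar; everything below is an invented toy, not Rabbit's implementation:

```python
from collections import defaultdict

class PreferenceModel:
    """Toy preference tracker: every interaction nudges future suggestions."""
    def __init__(self) -> None:
        self.counts: dict[tuple[str, str], int] = defaultdict(int)

    def observe(self, category: str, choice: str) -> None:
        """Record one user choice, e.g. which airline they booked."""
        self.counts[(category, choice)] += 1

    def suggest(self, category: str) -> str | None:
        """Return the user's most frequent past choice in a category."""
        options = {c: n for (cat, c), n in self.counts.items() if cat == category}
        return max(options, key=options.get) if options else None

prefs = PreferenceModel()
for airline in ["United", "Delta", "United"]:
    prefs.observe("airline", airline)
print(prefs.suggest("airline"))  # -> "United"
```

The more interactions the model observes, the better its defaults get, which is the "keeps getting better" loop described above.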
Versatility
A broader service range allows the assistant to adapt to different contexts and user preferences. This makes it more helpful and promotes user engagement. People rely on assistants for various tasks, from scheduling appointments to controlling smart home devices. A wider range of services caters to a larger user base and increases the assistant's overall value.
At launch, the Rabbit R1 “rabbithole” connections page (a web-based portal in which you connect your accounts via a computer or phone) includes just four services: Spotify (the only music service), Uber (the only ride-hailing service), DoorDash (the only food delivery service), and Midjourney (the only AI image creator). The AI Pin can’t set an alarm or a timer, nor can it add events to your calendar. The problem with so many voice assistants is that they can’t do much — and the AI Pin can do even less. That's a problem!
Privacy and Security
As AI Gadgets handle a vast amount of personal data, privacy and security are crucial success factors. Leading AI device manufacturers have implemented strong encryption, secure authentication methods, and regular security updates to protect user information. Many AI agents also offer transparency and control over data collection and usage, giving users peace of mind.
The Rabbit R1 microphone only activates when you press a button, and the camera can be physically covered when not in use. Additionally, you have complete control over which apps and services the R1 accesses, ensuring your data stays in your hands.
Ray-Ban Meta made the blinking animation of the LED indicator a lot more noticeable when the glasses are recording. The glasses also won't record at all if they detect anything covering the LED indicator, which everyone can appreciate.
Emotional Intelligence
Some AI agents are designed to detect and respond to user emotions. By analyzing voice tone, language, and other cues, these agents can provide more empathetic and supportive responses. This emotional intelligence helps create a more human-like interaction between users and their AI agents, fostering trust and engagement. For instance, if the user says "cancel my dinner reservation" in a frustrated voice, the AI gadget should pick up on the annoyance and potentially offer alternative restaurants or recommend relaxation techniques.
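As a crude sketch of the idea, the toy below keys off wording alone; a real system would classify tone from acoustic features with a trained model rather than matching keywords:

```python
# Toy frustration detector: illustrative only. Real emotional
# intelligence would combine voice tone, prosody, and context.

FRUSTRATION_CUES = {"ugh", "again", "forget it", "this is ridiculous"}

def sounds_frustrated(utterance: str) -> bool:
    text = utterance.lower()
    return any(cue in text for cue in FRUSTRATION_CUES)

def respond(utterance: str) -> str:
    if sounds_frustrated(utterance):
        return ("Done, your reservation is canceled. Want me to "
                "suggest a quieter spot for later this week instead?")
    return "Your reservation is canceled."

print(respond("Ugh, cancel my dinner reservation."))
```

Even this trivial branch changes the feel of the exchange, which is the point: emotionally aware responses make the assistant read as attentive rather than mechanical.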
The Future of AI Gadgets
LAMs in AI Gadgets, similar to Rabbit's R1, are capable of more than just communication and response generation. They can analyze the preferences, habits, and past interactions of a user to provide personalized recommendations for various activities. Whether it's suggesting restaurants, movies, books, or travel destinations, or offering personalized advice on health, fitness, or personal finance based on individual goals and preferences, LAMs are designed to cater to the unique needs of each user.
In addition, LAMs can integrate with smart home devices and IoT (Internet of Things) systems. They can control appliances, monitor power consumption, or enhance home security. They can respond to voice instructions, adjust devices' settings based on user preferences, and automate routine activities, enhancing convenience and comfort.
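A hedged sketch of what that intent-to-device routing could look like; the device names and the `set_state` hub call are assumptions for illustration, not any real IoT API:

```python
# Hypothetical smart-home routing: a recognized voice intent fans
# out to one or more device actions through an assumed local hub.

SMART_HOME = {
    "thermostat": {"temperature": 70},
    "living_room_light": {"on": True},
}

def set_state(device: str, **settings) -> None:
    """Stand-in for an IoT hub call that updates a device's state."""
    SMART_HOME[device].update(settings)
    print(f"{device} -> {SMART_HOME[device]}")

def execute_intent(intent: str) -> None:
    """Route a recognized voice intent to its device actions."""
    if intent == "warm_up_house":
        set_state("thermostat", temperature=72)
    elif intent == "movie_time":
        set_state("living_room_light", on=False)
        set_state("thermostat", temperature=68)

execute_intent("movie_time")
```

The interesting work, again, is upstream: mapping free-form speech onto intents like "movie_time" reliably enough that the automation feels helpful rather than unpredictable.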
As we look towards the future, we can expect LAMs and LLMs to become more capable and more clearly defined. While major tech players like Apple, Samsung, and Google are working on their own LLM-powered smart devices, companies like Rabbit are already pushing the boundaries of what is possible with AI personal assistants. The hope is that they will evolve into truly useful and accurate AI Gadgets for people.
Multimodal LLMs have all the prerequisites to become one of the most powerful AI technologies. As they continue to improve their understanding of human intention and action execution, they will become increasingly effective at automating complex tasks within a variety of industries. This applies not only to routine administrative tasks but also to more complex decision-making and problem-solving processes. Looking ahead, the proliferation of AI in our daily lives will continue to grow. The future of AI is not just about smart devices, but about creating an ecosystem of interconnected AI-powered services that work together to make our lives easier, more efficient, and more enjoyable.