The latest December 2024 Pixel Drop claims to bring more intelligent, helpful, and intuitive features to your devices. It includes new ways to use Gemini, camera improvements, and security updates. It also expands some existing features to more countries. Here are all the details.
First, Google detailed Saved Info, the memory feature for Gemini. Saved Info lets you ask Gemini to remember your interests and preferences so it can provide more helpful, relevant responses tailored to your goals and needs.
Google also discussed the new extensions for Gemini. The new Phone and Messages extensions allow you to ask Gemini to call personal contacts or businesses and draft and send messages with your default phone and Messaging apps. You can even set alarms, control your device settings, and open your camera to take a quick selfie through the new Utilities extension.
Next, Gemini Nano on Pixel can now suggest more contextual, easy-to-tap replies for you on the Call Screen. This means you can easily respond to unknown callers without having to take the call yourself. For example, if a package is being delivered, you can use this feature to respond to the delivery person by answering their yes or no questions or asking relevant follow-up questions — all via simple prompts that will show up on your screen. In addition, you can now peek into conversations between the caller and the AI agent during an automatic Call Screen. You can also answer or decline a call at any time during the screening session.
The December 2024 Pixel Drop also allows you to capture and share Ultra HDR photos—full of bright intensity, higher contrast, and more detail—right to your Instagram Feed. Next, when sharing photos to Snapchat, instead of scrolling through your device’s photo albums, you’ll see all your folders, favourites, and cloud photos through the Photo Picker.
Dual Screen can now be used on the Pixel Fold and Pixel 9 Pro Fold in portrait mode, so both the photographer and their subject can preview every shot before it’s captured. The December 2024 Pixel Drop also brings ‘Made You Look’ to the first-generation Pixel Fold, so you can use fun animations on the outer screen to grab your child’s attention and snap a photo at the perfect moment.
Your Pixel Studio sticker creations are now available on your Gboard keyboard, so you can share them with friends and family via messages, social media, and more. Furthermore, Emoji Kitchen in Gboard gets updated navigation. New Pixel Screenshots upgrades include saving your Circle to Search queries right to the Pixel Screenshots with just a tap.
Pixel Screenshots now automatically categorizes your screenshots, so it’s even easier to find exactly what you need with new search filters. When you find what you’re looking for, Pixel Screenshots provides helpful suggested actions based on your saved information — like creating a calendar invite or getting directions. In addition, Pixel Screenshots also lets you add tickets or credit cards you’ve screenshotted to Google Wallet, giving you quick access to things like your driver’s license, boarding passes, and more in one convenient spot.
In Gboard, you’ll now see movies, music, products and other text suggestions from your screenshots while searching in relevant apps. You’ll have to turn on “Show suggestions from your screenshots in other apps” in the Pixel Screenshots app.
With the December 2024 Pixel Drop, Expressive Captions are also being made available on Android phones. The feature uses AI to automatically capture the intensity and emotion of how someone speaks for any content with sound on your phone, even live streams. “When you’re watching live sports, texting, using social media or watching a video message, you’ll see things like a gasp at a juicy secret, cheers and applause for a big win and all caps when someone is really excited,” said Google.
December 2024 Pixel Drop also brings the “Clear voice” feature in the Recorder App to focus on speakers and reduce background noise while recording anything. Further, Google is introducing a new way to navigate your smartphone with Simple View. Simple View increases your phone’s font size and touch sensitivity, making it easier to see and use controls, apps and widgets.
Next, you’ll now be able to see album art for each song in your Now Playing history, so it’s easier to explore new music. Then, you can now jump straight into using your go-to controls on your Pixel Tablet. All you have to do is swipe right from your tablet’s lock screen to access widgets for quick controls for your smart home devices, timers, music and more.
Identity Check has also been made available in beta with the December 2024 Pixel Drop. When you’re in a new location, Identity Check will require your face or fingerprint authentication before you can make any changes to sensitive settings on your phone. This gives you extra protection against anyone who might try to take your phone and access your passwords, change your PIN, or turn off theft protection features.
Google’s free built-in VPN is now available on the Pixel Tablet. Additionally, when you get a notification from your Nest Cam or Nest Doorbell, you can now see a live view of your porch and talk with whoever’s there from your Pixel Watch 2.
The Loss of Pulse detection feature is now expanding to Pixel Watch 3 users in Germany and Portugal. This first-of-its-kind feature can identify when someone’s heart suddenly stops beating due to conditions such as cardiac arrest, poisoning, or respiratory arrest and then prompt the watch to call emergency services for help.
For users in Germany, Google is also expanding access to fall detection for all Pixel Watch generations and car crash detection for Pixel phones and Pixel Watch 2 and 3. Finally, Google’s enhanced Daily Readiness algorithm and new tools — Cardio Load and Target Load — originally launched with Pixel Watch 3, are now coming to Pixel Watches and Fitbit smartwatches and trackers that support Readiness starting December 9.
Samsung today announced the public release of the One UI 7 beta for Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The new version of One UI boasts powerful AI features, simplified controls, and a preview of scalable AI ecosystems of the future, according to the company.
“One UI 7 marks a significant leap forward by integrating leading AI agents and multimodal capabilities into every touch point of the interface, creating an AI platform where every interaction feels natural and intuitive. The beta program offers a first look at an upgraded mobile user experience for greater personalization than ever before,” said Samsung.
The One UI 7 beta for the Galaxy S24 series brings AI-based advanced writing assist tools. Integrated into the OS, they let users boost their productivity wherever text can be selected, without needing to switch between applications. This capability expands on the powerful writing assist tools already available to Galaxy users, offering AI-powered options to summarize content, check spelling and grammar, and automatically format notes into bullet points.
Call transcripts are another new feature, available in 20 languages. When call recording is enabled, recorded calls will automatically be transcribed for reference later on, eliminating the need to take notes manually while multitasking.
One UI 7’s AI features come with a significant new look, based on a new notification system that streamlines communication with easy access from the device’s lock screen. It includes Now Bar, which highlights relevant activities across various features like Interpreter, Music, Recording, Stopwatch, and more.
By offering instant access to important notifications, Now Bar reduces the need to constantly unlock the device and allows users to engage with key information effortlessly. Set to be supported on upcoming Galaxy S series devices, Now Bar will transform the lock screen experience, which will continue to evolve with more intelligent experiences in the future.
“Defined by bold, iconic design choices, One UI 7 reduces visual clutter and fosters an inviting experience designed to resonate with users on a personal level and enable intuitive mobile experiences across AI features,” says Samsung. Other changes in One UI 7 include a simplified home screen, redesigned One UI widgets, and a new lock screen.
A redesigned camera user interface allows more intuitive control over advanced settings. Camera buttons, controls, and modes have been reorganized to make it easier to find the features you need and to give you a clearer preview of the picture you’re taking or the video you’re recording.
For Pro and Pro video modes, the manual settings layout has also been simplified, making it easier to focus on the picture or video you’re shooting. A new zoom control is available when you’re recording in Pro video mode, allowing you to control the zoom speed for smooth transitions.
One UI 7: Availability Details
The official One UI 7 release will commence with upcoming Galaxy S series devices, featuring additional AI capabilities including enhanced on-device AI functions, starting from the first quarter of 2025. In line with Samsung’s commitment to extend its OS upgrade policy, the update will gradually roll out to other Galaxy devices.
The One UI 7 beta for Galaxy S24 series devices is available in Germany, India, Korea, Poland, the UK, and the U.S., from today, December 5. Galaxy S24 series users can apply to join the beta program via Samsung Members.
Samsung has remained adamant about one of its software decisions: a long-press of the power button can only launch Bixby or open the power menu. However, that could soon change, as Samsung may allow users to long-press the power button to access Gemini instead of Bixby.
As reported by Android Authority, Samsung may soon finally let you long-press the side button to launch Gemini instead of Bixby. The publication discovered strings in a teardown of the latest Google app APK that are part of the Gemini intro screen that’ll appear in the setup wizard for Samsung devices.
The strings inform the user that they can “hold down the Side button to talk to Gemini.” Samsung refers to the power button as the ‘Side button’. These strings are specifically aimed at Samsung devices, as their names contain “_samsung”.
The presence of strings in the Google app, explaining how users can long-press power button to access Gemini, hints that Samsung may introduce this functionality soon. While the exact timeline remains unclear, it’s likely to debut with the upcoming One UI 7 release. It might even appear in the setup wizard for the new Samsung Galaxy S25 series, which is rumoured to launch on January 22.
It would be a major software change for Samsung devices if the information holds true. This change, however, doesn’t indicate that Samsung is giving up on Bixby in any form. In fact, the company debuted the next-generation Bixby last month. The new Bixby debuted alongside the Samsung W25 and the W25 Flip in China, which are the Chinese versions of the Galaxy Z Fold Special Edition and the Galaxy Z Flip 6.
Google has announced the launch of Google Genie 2, a large-scale foundation world model capable of generating an endless variety of action-controllable, playable 3D environments for training and evaluating embodied agents. Based on a single prompt image, it can be played by a human or AI agent using keyboard and mouse inputs.
“Games play a key role in the world of artificial intelligence (AI) research. Their engaging nature, unique blend of challenges, and measurable progress make them ideal environments to safely test and advance AI capabilities,” said Google. “Genie 2 could enable future agents to be trained and evaluated in a limitless curriculum of novel worlds.”
What is Google Genie 2?
While Genie 1 could create a diverse array of 2D worlds, Google Genie 2 represents a significant leap forward in generality, as it can generate a vast diversity of rich 3D worlds. Genie 2 is a world model, meaning it can simulate virtual worlds, including the consequences of taking any action (e.g. jump, swim, etc.).
It was trained on a large-scale video dataset and, like other generative models, demonstrates various emergent capabilities at scale, such as object interactions, complex character animation, physics, and the ability to model and thus predict the behavior of other agents.
Genie 2 can essentially convert an image into a playable 3D environment. This means anyone can describe a world they want in text, select their favorite rendering of that idea, and then step into and interact with that newly created world (or have an AI agent be trained or evaluated in it). At each step, a person or agent provides a keyboard and mouse action, and Genie 2 simulates the next observation. Genie 2 can generate consistent worlds for up to a minute, with the majority of examples shown lasting 10-20s.
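Conceptually, this is the classic agent-environment loop from reinforcement learning: a prompt image seeds the world, and each keyboard-and-mouse action produces the next simulated frame. The Python sketch below is only an illustration of that loop under stated assumptions; Genie 2 has no public API, so GenieWorldModel, its reset/step methods, and the Action type are hypothetical stand-ins.

```python
# Hypothetical sketch of the world-model interaction loop described above.
# None of these classes exist publicly; they only illustrate the flow:
# prompt image -> initial frame -> (action -> next simulated frame), repeated.

from dataclasses import dataclass

@dataclass
class Action:
    keys: set[str]                          # e.g. {"ArrowUp"} for keyboard input
    mouse_delta: tuple[int, int] = (0, 0)   # relative mouse movement

class GenieWorldModel:                      # stand-in for the real model
    def reset(self, prompt_image):          # start a world from a single image
        return prompt_image                 # first observation = the prompt

    def step(self, observation, action):    # autoregressively predict the next frame
        return observation                  # placeholder: the real model returns a new frame

def play_episode(model, prompt_image, policy, max_steps=600):
    """Run a human or AI agent in the generated world for up to ~1 minute."""
    obs = model.reset(prompt_image)
    for _ in range(max_steps):
        action = policy(obs)                # keyboard/mouse action from player or agent
        obs = model.step(obs, action)       # the world model simulates the next observation
    return obs

# Example: a trivial "always press up" policy for an automated rollout.
final_frame = play_episode(GenieWorldModel(), prompt_image="prompt.png",
                           policy=lambda obs: Action(keys={"ArrowUp"}))
```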
Genie 2 responds intelligently to actions taken by pressing keys on a keyboard, identifying the character and moving it correctly. For example, the model has to figure out that arrow keys should move the robot and not the trees or clouds.
Further, Genie 2 is capable of remembering parts of the world that are no longer in view and then rendering them accurately when they become observable again. Genie 2 generates new plausible content on the fly and maintains a consistent world for up to a minute.
Genie 2 can create different perspectives, such as first-person view, isometric views, or third person driving videos. It further models various object interactions, such as bursting balloons, opening doors, and shooting barrels of explosives. The model has learned how to animate various types of characters doing different activities, and can also model non-playable characters (NPCs) and even complex interactions with them.
It can further model smoke effects, water effects, gravity, point and directional lighting, reflections, bloom, and even coloured lighting. Genie 2 can also be prompted with real world images, where it can model grass blowing in the wind or water flowing in a river. The list of its abilities is quite large and doesn’t end here.
“Genie 2 shows the potential of foundational world models for creating diverse 3D environments and accelerating agent research,” says Google. The company notes that this research direction is in its early stages and that it looks forward to continuing to improve Genie’s world generation capabilities in terms of generality and consistency.
Update 06/12/2024: The article has been updated with the latest announcements below.
OpenAI is set to launch a 12-day “shipmas” event starting today, December 5, unveiling new features, products, and demos. There will be 12 livestreams, one each day, and every one of them will be accompanied by new launches. Here’s everything to know about the 12 Days of OpenAI.
As announced by OpenAI and its CEO, the 12 Days of OpenAI event begins December 5 at 10AM Pacific time (11:30PM IST). Among the anticipated announcements are Sora, OpenAI’s long-awaited text-to-video AI tool, and a new reasoning model, according to sources familiar with the company’s plans, as reported by The Verge.
“Each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers. We’ve got some great stuff to share, hope you enjoy! merry christmas!,” said Sam Altman on X. While OpenAI hasn’t confirmed exactly what the announcements will be, the 12 Days of OpenAI will most likely include the debut of Sora.
OpenAI’s then-CTO Mira Murati told The Wall Street Journal in March this year that Sora would debut publicly by the end of the year, and the 12 Days of OpenAI seems like the apt event to take the curtains off the AI tool. Sora was unveiled back in February.
Sora boasts a profound comprehension of language, enabling it to accurately interpret prompts and create compelling characters that reflect vibrant emotions. The model not only comprehends user prompts but also grasps the physical context in which they exist. Sora has been in a private testing phase throughout 2024. It was leaked a few weeks back by artists who were testing it and who said OpenAI was using them for supposed “unpaid R&D and PR.”
We’ll be updating this article once the announcements begin coming in, so stay tuned.
12 Days of OpenAI: All Announcements
December 5 – Day 1
ChatGPT Pro, OpenAI o1
OpenAI has announced the launch of o1, its latest AI model, which the company describes as the “smartest model in the world.” It’s smarter, faster, and packs more features (e.g. multimodality) than o1-preview. Alongside o1, the company introduced ChatGPT Pro, a $200 (approx Rs 16,900) monthly plan that enables scaled access to the best of OpenAI’s models and tools.
This plan includes unlimited access to OpenAI o1, as well as to o1-mini, GPT-4o, and Advanced Voice. It also includes o1 pro mode, a version of o1 that uses more compute to think harder and provide even better answers to the hardest problems. In the future, OpenAI expects to add more powerful, compute-intensive productivity features to this plan.
ChatGPT Pro provides a way for researchers, engineers, and other individuals who use research-grade intelligence daily to “accelerate their productivity and be at the cutting edge of advancements in AI.” ChatGPT Pro provides access to a version of the company’s most intelligent model that thinks longer for the most reliable responses. OpenAI claims that in evaluations from external expert testers, o1 pro mode produces more reliably accurate and comprehensive responses, especially in areas like data science, programming, and case law analysis.
Compared to both o1 and o1-preview, o1 pro mode performs better on challenging ML benchmarks across math, science, and coding. To emphasize the primary strength of the o1 Pro mode—enhanced reliability—OpenAI adopts a more rigorous evaluation standard. A model is deemed to have successfully solved a question only if it answers correctly in all four attempts (“4/4 reliability”), rather than just once.
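As a rough illustration of that scoring rule, here is a small Python sketch of how a “4/4 reliability” figure could be computed from per-question attempt results; the data layout and function names are assumptions for illustration, not OpenAI’s actual evaluation code.

```python
# Hypothetical sketch of the "4/4 reliability" scoring rule described above:
# a question only counts as solved if all four attempts are correct.

def pass_rate(attempts_per_question: list[list[bool]]) -> float:
    """Lenient accuracy: a question counts if at least one attempt is correct."""
    return sum(any(a) for a in attempts_per_question) / len(attempts_per_question)

def four_of_four_reliability(attempts_per_question: list[list[bool]]) -> float:
    """Stricter metric: a question counts only if all four attempts are correct."""
    return sum(all(a) for a in attempts_per_question) / len(attempts_per_question)

results = [  # each inner list = 4 graded attempts on one benchmark question
    [True, True, True, True],
    [True, False, True, True],
    [True, True, True, True],
]
print(pass_rate(results))                 # 1.0  (every question solved at least once)
print(four_of_four_reliability(results))  # ~0.67 (only 2 of 3 solved on all four attempts)
```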
Pro users can access this functionality by selecting “o1 pro mode” in the model picker and asking a question directly. Since answers will take longer to generate, ChatGPT will display a progress bar and send an in-app notification if you switch away to another conversation.
The company will continue adding capabilities to Pro over time to unlock more compute-intensive tasks. It’ll also continue to bring many of these new capabilities to its other subscribers.
December 6 – Day 2
Reinforcement Fine-tuning
On the second day of the “12 Days of OpenAI” Shipmas event, the company announced the expansion of its Reinforcement Fine-Tuning Research Program to enable developers and machine learning engineers to create expert models fine-tuned to excel at specific sets of complex, domain-specific tasks.
The new model customization technique, called Reinforcement Fine-tuning, enables developers to customize OpenAI’s models using dozens to thousands of high quality tasks and grade the model’s response with provided reference answers. This technique reinforces how the model reasons through similar problems and improves its accuracy on specific tasks in that domain.
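The grading step can be pictured as a scoring function that compares a model response against a provided reference answer and returns a reward. The sketch below is a generic Python illustration of that idea, assuming a simple exact-match grader and a made-up task format; it is not the OpenAI Reinforcement Fine-Tuning API, which was only available in a limited alpha at the time.

```python
# Generic illustration of "grade the model's response against a reference answer".
# This is NOT the OpenAI Reinforcement Fine-Tuning API; the dataset format and
# grader are assumptions used only to show how reference answers produce a reward.

tasks = [
    {"prompt": "Which gene is most associated with cystic fibrosis?", "reference": "CFTR"},
    {"prompt": "What is 17 * 6?", "reference": "102"},
]

def grade(response: str, reference: str) -> float:
    """Exact-match grader: 1.0 if the answer matches the reference, else 0.0."""
    return 1.0 if response.strip().lower() == reference.strip().lower() else 0.0

def evaluate(model_answers: list[str]) -> float:
    """Average reward over the task set; higher rewards reinforce that reasoning path."""
    scores = [grade(ans, task["reference"]) for ans, task in zip(model_answers, tasks)]
    return sum(scores) / len(scores)

print(evaluate(["CFTR", "100"]))  # 0.5 -> only the first response earns reward
```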
OpenAI encourages research institutes, universities, and enterprises to apply, particularly those that currently execute narrow sets of complex tasks led by experts and would benefit from AI assistance. OpenAI’s Reinforcement Fine-Tuning has shown great potential in fields like Law, Insurance, Healthcare, Finance, and Engineering. This is because this technique works particularly well with tasks that have a clear “right” answer, the kind that most experts in those fields would generally agree on.
As part of the research program, you will get access to OpenAI’s Reinforcement Fine-Tuning API in alpha to test this technique on your domain-specific tasks. You will be asked to provide feedback to help the AI company improve the API ahead of a public release.
December 9 – Day 3
Sora
An announcement everyone expected, Sora was launched by OpenAI for the general public on Day 3 of its Shipmas event. While Sora was introduced earlier this year, it was available only to a select group of testers. OpenAI has developed a new version of Sora—Sora Turbo—that is significantly faster than the model it previewed in February. It is releasing that model today as a standalone product at Sora.com to ChatGPT Plus and Pro users.
With Sora, users can generate videos up to 1080p resolution, up to 20 seconds long, and in widescreen, vertical, or square aspect ratios. You can bring your own assets to extend, remix, and blend, or generate entirely new content from text.
OpenAI has further developed new interfaces to make it easier to prompt Sora with text, images, and videos. There’s a storyboard tool that lets users precisely specify inputs for each frame. OpenAI will also have Featured and Recent feeds in Sora that are constantly updated with creations from the community.
Sora is included as part of the user’s Plus account at no additional cost. You can generate up to 50 videos at 480p resolution or fewer videos at 720p each month.
For those who want more Sora, the Pro plan includes 10x more usage, higher resolutions, and longer durations. OpenAI notes that it is working on tailored pricing for different types of users, which it plans to make available early next year.
The version of Sora OpenAI has deployed has many limitations, the AI company acknowledges. “It often generates unrealistic physics and struggles with complex actions over long durations. Although Sora Turbo is much faster than the February preview, we’re still working to make the technology affordable for everyone,” said OpenAI.
All Sora-generated videos come with C2PA metadata, which will identify a video as coming from Sora to provide transparency, and can be used to verify origin. OpenAI admits that while some of the measures it has taken may be imperfect, it has still added safeguards like visible watermarks by default, and built an internal search tool that uses technical attributes of generations to help verify if content came from Sora.
As of today, OpenAI is blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes. Uploads of people will be limited at launch, but OpenAI intends to roll the feature out to more users as it refines its deepfake mitigations.
December 10 – Day 4
Canvas Availability for All
Back in October this year, OpenAI unveiled Canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing the user and ChatGPT to collaborate on a project. While it was available only to the paid ChatGPT users, OpenAI has announced that it is now rolling out for all users, including free ones.
Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Availability for everyone means that Canvas is now out of its beta phase. With Canvas, ChatGPT can better understand the context of what you’re trying to accomplish. You can highlight specific sections to indicate exactly what you want ChatGPT to focus on. Like a copy editor or code reviewer, it can give inline feedback and suggestions with the entire project in mind.
You control the project in Canvas. You can directly edit text or code. There’s a menu of shortcuts for you to ask ChatGPT to adjust writing length, debug your code, add emojis to your content, and quickly perform other useful actions. You can also restore previous versions of your work by using the back button in Canvas.
Canvas opens automatically when ChatGPT detects a scenario in which it could be helpful. You can also include “use canvas” in your prompt to open Canvas and use it to work on an existing project. Canvas can also help you code, with shortcuts including:
Review code: ChatGPT provides inline suggestions to improve your code.
Add logs: Inserts print statements to help you debug and understand your code.
Add comments: Adds comments to the code to make it easier to understand.
Fix bugs: Detects and rewrites problematic code to resolve errors.
Port to a language: Translates your code into JavaScript, TypeScript, Python, Java, C++, or PHP.
Canvas now also supports custom GPTs, so you can add a collaborative interface to your custom AIs.
December 11 – Day 5
ChatGPT with Apple Intelligence
On Day 5 of the event, OpenAI demoed ChatGPT on Apple devices, integrated with Apple Intelligence. While Apple announced the integration months back and it was available in beta, the official public release, where Siri can talk to ChatGPT, came yesterday with iOS 18.2.
Siri can tap into ChatGPT’s expertise when it’s needed. Users are asked before any questions are sent to ChatGPT, along with any documents or photos, and Siri then presents the answer directly. The model Siri will be leveraging will be the latest GPT-4o. For those who choose to access ChatGPT, Apple says their IP addresses will be hidden, and OpenAI won’t store requests.
December 12 – Day 6
ChatGPT Gets Video Input Support with Screenshare, Santa Mode also Announced
OpenAI is halfway through its “12 Days of OpenAI” Shipmas event, and on Day 6, it announced that ChatGPT’s Advanced Voice Mode now supports video input, which in other words means that ChatGPT now has vision. Using the ChatGPT app, users subscribed to ChatGPT Plus, Team, or Pro can point their phones at objects in the real world and have ChatGPT respond in real time.
This feature has been delayed multiple times in the past but is finally released to the public seven months after OpenAI first demoed it, even though the company said it would roll out in a “few weeks” back then.
Furthermore, through the Screenshare ability, the Advanced Voice Mode with vision can also understand what’s on the user’s device’s screen. It can explain various settings menus or give suggestions on a math problem.
The rollout of Advanced Voice Mode with Vision has already begun on Thursday and is expected to be completed within a week. However, not all users will gain immediate access. According to OpenAI, ChatGPT Enterprise and Edu subscribers will need to wait until January for the feature, while there is no set timeline for its availability in certain regions including the EU, Switzerland, Iceland, Norway, and Liechtenstein.
Finally, just in time for Christmas, OpenAI has also announced “Santa Mode”, where users can apply Santa’s voice as a preset voice in ChatGPT. One can find it by tapping or clicking the snowflake icon in the ChatGPT app next to the text input box.
December 13 – Day 7
ChatGPT Projects
Projects provide a new way to group files and chats for personal use, simplifying the management of work that involves multiple chats in ChatGPT. Projects keep chats, files, and custom instructions in one place.
Conversations in Projects support the following features:
Canvas
Advanced data analysis
DALL-E
Search
Currently, connectors to add files from Google Drive or Microsoft OneDrive are not supported.
You can delete your project by selecting the dots next to your Project name and selecting Delete Project. This will delete your files, conversations, and custom instructions in the Project. Once they are deleted, they cannot be recovered. You can also set Custom Instructions in your Project by selecting Add Instructions on your Project page. You can revisit this button at any time to reference or update your Instructions.
Instructions set in your Project will not interact with any conversations outside of your Project and will supersede custom instructions set in your ChatGPT account.
December 16 – Day 8
Improvements to ChatGPT Search
On Day 8, OpenAI announced improvements to ChatGPT Search, making it faster, a better experience on mobile, and more reliable overall. OpenAI has also integrated Advanced Voice Mode into ChatGPT Search, so you can search using your voice. Aside from that, ChatGPT Search is now available to all logged-in users on all platforms in regions where ChatGPT is available.
December 17 – Day 9
OpenAI o1 and New Tools for Developers
On Day 9 of the “12 Days of OpenAI” event, the company introduced more capable models, new tools for customization, and upgrades that improve performance, flexibility, and cost-efficiency for developers building with AI. This includes the following (a brief illustrative sketch follows the list):
OpenAI o1 in the API, with support for function calling, developer messages, Structured Outputs, and vision capabilities.
Realtime API updates, including simple WebRTC integration, a 60% price reduction for GPT-4o audio, and support for GPT-4o mini at one-tenth of previous audio rates.
Preference Fine-Tuning, a new model customization technique that makes it easier to tailor models based on user and developer preferences.
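As a rough sketch of what calling o1 in the API with a developer message and Structured Outputs could look like, here is an example using the OpenAI Python SDK; the prompt and JSON schema are illustrative assumptions, so check OpenAI’s API reference for the exact parameters o1 supports.

```python
# Hedged sketch: calling o1 via the OpenAI Python SDK with a developer message
# and Structured Outputs. The prompt and schema are illustrative; consult the
# official API reference for the parameters o1 actually supports.

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "developer", "content": "You are a concise math assistant."},
        {"role": "user", "content": "Factor x^2 - 5x + 6 and return the roots."},
    ],
    response_format={  # Structured Outputs: constrain the reply to this JSON schema
        "type": "json_schema",
        "json_schema": {
            "name": "roots",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"roots": {"type": "array", "items": {"type": "number"}}},
                "required": ["roots"],
                "additionalProperties": False,
            },
        },
    },
)

print(completion.choices[0].message.content)  # e.g. {"roots": [2, 3]}
```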
December 18 – Day 10
1-800-ChatGPT: Talk to ChatGPT via a Phone Call or through WhatsApp
On Day 10, OpenAI announced 1-800-ChatGPT, an experimental new launch to enable wider access to ChatGPT. You can now talk to ChatGPT via phone call or message ChatGPT via WhatsApp at 1-800-ChatGPT without needing an account. You can also start a conversation on WhatsApp via a link or by scanning a QR code. One can talk to 1-800-ChatGPT for 15 minutes per month for free, with a daily limit on WhatsApp messages. OpenAI says it may adjust usage limits based on capacity if needed. It provides a notice as you approach the limit and informs you when the limit has been reached.
December 19 – Day 11
Work with Apps on macOS
On the second-to-last day of the “12 Days of OpenAI” event, OpenAI announced that users can now work with apps while using Advanced Voice Mode. It’s ideal for live debugging in terminals, thinking through documents, or getting feedback on speaker notes. Further, you can now also search through all your previous conversations using keywords and phrases by clicking the search bar. The company has also added support for more note-taking and coding apps, including Apple Notes, Notion, Quip, and Warp.
December 20 – Day 12
Early access for safety testing
On the final day of the ‘12 Days of OpenAI’ event, OpenAI announced that it is inviting safety researchers to apply for early access to its next frontier models. This early access program complements the company’s existing frontier model testing process, which includes rigorous internal safety testing, external red teaming such as its Red Teaming Network, and collaborations with third-party testing organizations as well as the U.S. AI Safety Institute and the UK AI Safety Institute.
The company will begin selections as soon as possible. Applications close on January 10, 2025.
With this final announcement, OpenAI’s Shipmas event comes to an end.
Google is rolling out an Undo device backup setting in Google Photos that lets you quickly remove all of the photos backed up from a particular device without deleting them from the device itself. The feature is rolling out to Google Photos on iOS as of now.
Announced via a Google Support page, Google says that it understands that sometimes a user might change their mind or might not want all of the photos on their device backed up anymore, which resulted in the rollout of the new Undo device backup setting in Google Photos.
The setting gives users the ability to remove from Google Photos all of the photos and videos that are currently on their device — without also deleting those photos and videos from the device itself. To make use of the setting, follow the steps below:
Open the Google Photos app.
At the top, tap your Profile picture or Initial, then Google Photos settings, and then Backup.
Scroll down to view the remaining options.
Tap Undo backup for this device.
Next to “I understand my photos and videos from this device will be deleted from Google Photos,” check the box.
Tap Delete Google Photos backup.
The photos and videos will remain on your device even after the backup from Google Photos is deleted. After you delete your Google Photos backup, Backup will be turned off automatically on that device. This feature is rolling out to iOS users now and will be available on Android soon.
Meanwhile, Google Photos recently got updated with the ‘Updates’ page, which replaced the ‘Sharing’ page that showed users all the notifications of what is being shared in the albums they are a part of. One can access Updates by tapping on the bell icon at the top right. The Updates page is chronologically organised, and you can view your incoming activity from today, yesterday, this week, this month, last month, and beyond.
Update 05/12/2024: Samsung has officially announced the launch of One UI 7 Beta for Galaxy S24 series devices in multiple countries including India.
Original Story Below
All eyes are now set on the Samsung One UI 7 Beta as the Korean brand gears up to unveil its biggest overhaul to its Android skin in years. Multiple reports have emerged online suggesting that the One UI 7 Beta is coming later today, December 5, for the Galaxy S24 series.
As per Max Jambor on X, who has been a reliable source for leaks in the past, Samsung One UI 7 Beta is going to be released later today, December 5, for the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. As per the information he received from Samsung PR, Germany will be one of the first countries where the beta will be made available, likely alongside Korea and the USA, given Samsung’s history of launching beta versions in these countries first.
The second phase should likely include India, where the second beta build will be the first one served to users. However, the brand hasn’t officially confirmed anything so far, and looking at how the One UI 7 Beta has been delayed multiple times in the past, we also can’t say anything for certain regarding its availability in other countries yet.
A separate report on X includes screenshots of a user’s chat with Samsung support, where the support team shared an image of the One UI 7 Beta program banner. This banner typically appears in the Samsung Members App for users to sign up for the beta program when it commences. Both these reports suggest that the beta is indeed coming later today and per leaks, should bring a new set of AI features, a new quick settings panel, a new lock screen bar, newer customisation options, and much more.
Samsung already confirmed earlier this year that the beta is coming by the end of 2024 and the full launch will take place alongside the Galaxy S25 series next month, likely on January 22. We’ll update the article once Samsung officially announces the One UI 7 Beta, so keep an eye out for it.
The Oppo Find X8 December 2024 update has been released by the brand in India. It not only integrates the latest security patch but also brings a set of improvements and bug fixes. Here’s everything to know about the update.
Our Indian Find X8 unit has received a new update today, on December 4. The Oppo Find X8 December 2024 update bears version CPH2651_15.0.0.301 (EX01) and it’s a hefty one in size, coming in at about 778.81 MB. The update has the following changelog:
Camera
Optimizes the skin tone of portraits taken with the front camera to make them look more natural.
Produces a steadier live preview at high zoom ratios for a better sense of control with the telephoto camera.
Improves the interactive experience of Camera and you can now add the XPAN mode to the menu for quick access.
Apps
Now you can zoom in or out the preview of burst shots collections.
Contacts can now be switched to a floating window.
System
Now you can see the status of the flashlight on Live Alerts.
Now you can drag the bottom corner of a floating window to change it to full screen.
Fixes a display issue with existing icons on the status bar when a Live Alerts capsule is displayed.
Improves system stability and performance.
Integrates the December 2024 Android security patch to enhance system security.
The Find X8 arrived in India last month with a starting price of Rs 69,999. We reviewed the device and concluded that the Oppo Find X8 is a worthy contender against the competition and excels in almost every aspect, be it a power-packed chipset, decent display, smooth and feature-rich software, great build quality, or versatile cameras.
iQOO is now rolling out FunTouch OS 15 for iQOO Neo 9 Pro via an Open Beta program which users can sign up for. FunTouch OS 15 is based on Android 15 and brings a new set of features to Vivo’s and iQOO’s devices, such as smoother animations, more customisation options, etc.
Announced via the iQOO Community, the Open Beta program for testing FunTouch OS 15 on the iQOO Neo 9 Pro is now live. Users can head over to the System Update page and sign up for the trial via the three-dot menu. iQOO says that this “update will be randomly pushed out to a limited number of users soon and will have a broader rollout in a few days to ensure there are no critical bugs.” It also shared a partial changelog of the update, which includes:
Added Priority scheduling algorithm, which can finely differentiate the priority level and computing power requirements of different apps and tasks to accurately allocate computing power and improve system smoothness
Added Rapid dynamic effect engine, which is a dedicated channel opened for dynamic effects, enabling timely feedback about dynamic effects to improve visual smoothness
Added “Circle to Search”, which allows you to select an area on the screen to search and quickly get the information you want
Added memory enhancement technology, which accurately classifies and manages memory, compresses memory to save space, improves memory usage efficiency, and enables smooth multitasking
Optimized system dynamic effects with the new Origin animation that incorporates principles from human factors research.
The Open Beta version of FunTouch OS 15 for the iQOO Neo 9 Pro comes with software version PD2338F_EX_A_15.1.6.7.W30 and weighs in at about 2.73GB.
In related news, the company launched the iQOO 13 in India on December 3. It starts at Rs 54,999 and packs features like the Snapdragon 8 Elite, a 144Hz display, LPDDR5x Ultra RAM, UFS 4.1 storage, a 6000mAh battery, 120W fast charging, triple rear cameras, and much more.
Update 11/12/2024: Realme has confirmed that the Realme 14x 5G will launch and go on sale on the same date, December 18. It will be the first smartphone in India’s under-Rs 15,000 segment to be IP69-rated.
Original Story Below
Realme 14x India launch has been tipped, and rumours suggest that the device will go on sale later this month. Aside from that, some of the device’s key specifications have also been leaked, revealing that it will come with a flagship feature but will be offered at a budget price.
The Realme 14x India launch was tipped by 91mobiles, whose report, citing industry sources, indicates that the device will go on sale in India on December 18. This means one could expect the official reveal to take place sometime next week, but the date for that remains unconfirmed as of now.
Further, Realme 14x is also said to come with an IP69 rating and a 6,000mAh battery powering the device. These specs are usually seen in mid-range to flagship segment phones but Realme seems to be planning to spice things up in the budget segment. In addition, it’s also tipped that the Realme 14x will feature a 6.67-inch HD+ IPS LCD display.
The Realme 14x is also said to come with a Diamond Design panel. It will launch in three variants as per a recent report from the same source, which includes 6GB + 128GB, 8GB + 128GB, and 8GB + 256GB trims. The smartphone will be available in three colour options: Crystal Black, Golden Glow, and Jewel Red. Realme 14x is also tipped to feature a square-shaped camera module.
Realme 14x will be the successor to the Realme 12x 5G which debuted in India earlier this year in April. The device packs the MediaTek Dimensity 6100+ Processor and was priced starting at Rs 11,999. The Realme 14x 5G could also be priced along similar lines, judging by the leaked specifications of the device. We’ll know more once the official Realme 14x India launch date is announced.