After months of teasers, previews, and select rollouts, Microsoft’s Copilot Vision is now available to try for all Edge users in the U.S. The flashy new AI tool is designed to watch your screen as you browse so you can ask it various questions about what you’re doing and get useful, context-appropriate responses. The main catch, however, is that it currently only works with nine websites.
For the most part, these nine websites seem like pretty random choices, too. We have Amazon, which makes sense, but also Geoguessr? I’m pretty sure the point of that site is to try and guess where you are on the map without any help. Anyway, the full site list is as follows:
Wikipedia
Tripadvisor
Williams Sonoma
Amazon
Target
Wayfair
Food & Wine
OpenTable
Geoguessr
CEO of Microsoft AI Mustafa Suleyman announced the release on Bluesky yesterday and shared a few of his favorite use cases.
Usually, when you want to ask Copilot a question, you have to write out the paragraphs of context yourself, and aside from being slow and annoying, this can also be pretty difficult if you’re trying to ask about something you don’t know much about.
With Copilot Vision, instead of trying to describe what you’re looking at or what you’re talking about, the AI model can see it right on your screen.
So, according to Suleyman’s examples, you can search for “breathable sheets” on Amazon and ask Copilot if any of the results are made from appropriate fabrics. Copilot can point the right ones out to you or give you examples of breathable fabric to search for.
On the Food & Wine recipe website, Copilot can help you go hands-free while you cook by answering your questions and reading out parts of the recipe to you. This works because the whole experience is designed to work through voice — you speak directly to the AI and the AI speaks back.
According to one of the videos on the Copilot Vision page, however, it looks like you can type out questions too and receive written responses.
Microsoft is taking things very slowly and carefully with this feature, almost certainly because it wants to avoid triggering another backlash like it did with Recall. The limited number of compatible sites is connected to copyright issues, and the company makes sure to stress that the feature is “opt-in,” doesn’t record your screen, is only on when you turn it on, and deletes the data as soon as you end a session.
If you thought ads were intrusive before, you haven’t seen anything yet.
LG has recently signed a multi-year agreement with a company called Zenapse with an aim to leverage “emotional intelligence to deliver more meaningful and measurable brand experiences across 200 million LG Smart TVs globally.”
To make ads more meaningful, Zenapse uses a CTV (Connected TV) platform called “ZenVision” that watches along with the content being played on the screen to better understand the emotional state the viewer might be in.
LG can then use this information to group users into specific subsets and target them with ads based on their emotional state. This information becomes psychographic data. Unlike demographic data, which covers simpler attributes like age and location, psychographic data gets more personal, diving deep into your psyche and psychological mindset.
Using this data, ZenVision can better “optimize predictions” and thus target you with ads that might satisfy your needs a whole lot more. Not only that, but Zenapse and LG can then sell that information to third parties.
As reported by StreamTV Insider and on ZenVision’s own website, data groups include abstract personality definers like “social connectors,” “wellness seekers,” “goal-driven achievers,” “digital adopters,” and many more.
Zenapse’s AI goes a step further and specifies the user’s general proclivities, lumps those users together and pushes ads based on these key factors. It’s not just your emotions being targeted, but your goals, principles, political affiliations, and more all wrapped up into one generalized market segment.
How does it work? The AI draws on a combination of inputs to derive this data, including its own sophisticated algorithms, existing information about the particular on-screen content (like the script or genre), and even ACR (automatic content recognition) data collected by LG sets themselves.
LG Ad Solutions made the announcement on Tuesday; however, there are very few details about the longevity of the agreement, which TVs will be affected, or how much LG spent on Zenapse’s tech.
Based on what we know about it so far, though, Zenapse’s ZenVision sets a terrifying new precedent in on-screen advertising and gets so much more dystopian the deeper you dive into the rabbit hole.
While LG might just be the first to implement it, Google is already well-acquainted with Zenapse, with Google Ads, Google for Startups, and Google Cloud all having strategic deals with the AI firm.
Though this is only speculation, it probably won’t take long before ZenVision finds its way onto Google TV, if it hasn’t already. Google TV commands nearly 270 million TVs, and it’s used by everything from the best budget TVs from Hisense and TCL to some of the best OLED TVs out of Sony.
As stated, Zenapse is ushering in a whole new era of TV advertising and if you thought things couldn’t get worse, think again.
…And maybe turn off ACR on your TV while you still have the chance.
Officially announced at CES 2025, HDMI 2.2 is the next-generation HDMI standard that promises to double available bandwidth for higher resolution and refresh rate support, and will require a new cable to support these new standards. It will also bring with it advanced features for improved audio and video syncing between devices.
But the new cable isn’t coming until later this year, and there are no signs of TVs supporting the new standard yet. Here’s everything you need to know about HDMI 2.2.
The standout feature of HDMI 2.2 is that it allows for up to double the bandwidth of existing Ultra High Speed HDMI cables using the HDMI 2.1 protocol. HDMI 2.2 is rated for up to 96 Gbps, opening up support for native 16K resolution without compression, or native 4K 240Hz without compression. Throw DSC on and it should support monitors up to 4K 480Hz or 8K in excess of 120Hz.
While there aren’t any consumer TVs or monitors that support such resolutions and refresh rates at this time, it could be that the protocol finds use in future augmented and virtual reality headsets.
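As a rough sanity check on those numbers, the uncompressed data rate of a video signal can be estimated from resolution, refresh rate, and bit depth. The sketch below ignores blanking intervals and link-encoding overhead (both of which push the real-world requirement higher), so treat it as a back-of-envelope estimate rather than the spec’s own arithmetic:

```python
def raw_video_gbps(width, height, refresh_hz, bits_per_channel=10, channels=3):
    """Rough uncompressed pixel data rate in Gbps.

    Ignores blanking intervals and FRL link-encoding overhead,
    both of which push the real-world requirement higher.
    """
    return width * height * refresh_hz * bits_per_channel * channels / 1e9

# 4K at 240Hz with 10-bit RGB needs roughly 60 Gbps of raw pixel data:
# beyond HDMI 2.1's 48 Gbps, but comfortably inside HDMI 2.2's 96 Gbps.
print(round(raw_video_gbps(3840, 2160, 240), 1))  # 59.7
```

The same arithmetic shows why 4K 120Hz (around 30 Gbps raw) fits within HDMI 2.1, while 4K 240Hz without compression does not.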
HDMI 2.2 will also support Latency Indication Protocol (LIP), which will help make sure audio is synchronized with video, especially in configurations that include an external A/V system.
To support the new resolutions and refresh rates, you’ll need to use a new HDMI 2.2-certified Ultra96 cable. The new cables will be backwards compatible with all previous HDMI versions, but will only run at the maximum supported speed of the lowest link in the chain. So an HDMI 2.2 cable plugged into a port that is only compatible with HDMI 2.0 speeds will not be able to use the full bandwidth of the cable.
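The “lowest link in the chain” rule amounts to taking the minimum rated speed across source, cable, and sink. A minimal sketch (the Gbps figures are each version’s maximum signalling rate; real links negotiate specific modes, which this glosses over):

```python
# Maximum signalling rate (Gbps) per HDMI version, per the spec maxima.
HDMI_MAX_GBPS = {"2.0": 18.0, "2.1": 48.0, "2.2": 96.0}

def effective_bandwidth(source, cable, sink):
    """An HDMI link runs no faster than its slowest component."""
    return min(HDMI_MAX_GBPS[v] for v in (source, cable, sink))

# An Ultra96 (HDMI 2.2) cable on an HDMI 2.0 source is capped at 18 Gbps.
print(effective_bandwidth("2.0", "2.2", "2.2"))  # 18.0
```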
HDMI 2.1 was a major innovation for the HDMI standard when it was ratified in 2017, but it was only in 2020 and 2021 that we started seeing real products using the design. It almost tripled the bandwidth of HDMI 2.0, and finally made HDMI capable of true 4K 120Hz support, which was super important for this latest generation of games consoles.
HDMI 2.1 also introduced a range of new features. These included:
Dynamic HDR support.
Display Stream Compression 1.2 support.
Enhanced audio return channel (eARC) support.
Variable refresh rates.
Quick media switching.
Quick frame transport.
Auto low latency mode.
HDMI 2.2 is much lighter on features, merely introducing the new LIP protocol, but that will still have its uses. Like HDMI 2.1 before it, though, HDMI 2.2’s bandwidth uplift is dramatic; in raw gigabits per second, it’s the largest in the standard’s history. Doubling HDMI 2.1’s 48 Gbps to 96 Gbps makes HDMI 2.2 the most capable video and audio transmission standard, even eclipsing DisplayPort 2.1 and USB4, both of which can only reach 80 Gbps.
It is, however, still weaker than the new standard out of China known as GPMI.
When is HDMI 2.2 available?
HDMI 2.2 was officially unveiled in January 2025, and will officially launch in the first half of the year. The HDMI Forum, which manages its ongoing development, has suggested the new Ultra96 cables will be released before the end of the year, but we may not see devices that support HDMI 2.2 until sometime in 2026.
That’s the optimistic take, too. HDMI 2.1 was ratified and launched in 2017, but it wasn’t until 2020/2021 that we started seeing displays making full use of it. HDMI 2.2 could in theory take even longer.
At the time of writing, there isn’t a great call for more advanced cable standards in the living room. PC gaming has a possible use for higher-bandwidth cables to enable higher resolution and refresh rate gaming, but most high-end PCs aren’t managing 200+ fps at 4K, and they have DisplayPort 2.1 support anyhow. Indeed, most players still game at lower resolutions. In the living room, meanwhile, 4K at 120Hz is the standard for the major games consoles, and without a new generation of Xbox or PlayStation to drive up TV refresh rates, there isn’t much point in supporting more.
HDMI 2.1 cables can have labels like UHD or 8K on them, but otherwise look the same as any other HDMI cables. (Image credit: Cable Matters)
There are 8K TVs that could conceivably offer 120Hz or higher refresh rates, but games consoles can’t drive that anyhow, so again, little benefit. There are also no plans for 8K Blu-rays, which could otherwise use the additional bandwidth for higher-bitrate video or richer HDR metadata.
HDMI 2.2 is likely to be the future of connecting all sorts of devices and has the bandwidth and features to compete with the best alternatives elsewhere, but it’s likely not going to become mainstream for some time to come.
What is aperture in photography? Here’s what you need to know – and how it affects your images
Learning photography often feels like a never-ending list, but one of the first and easiest concepts that I picked up when I first started is called aperture. But what is aperture in photography? Aperture is a setting that controls how much or how little light comes into the lens. By controlling the aperture, photographers can make a photo brighter or darker, like opening or closing a curtain to let more sunshine into a dark room.
But while aperture is one of three camera settings that influenceexposure, or how light or dark a photograph is, light isn’t the only thing that aperture controls. When I first learned what aperture was, I assumed that I could just keep the aperture wide open all the time to let in the most light. However, I quickly learned that that assumption led to photos that were never quite sharp enough.
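To put numbers on that trade-off: aperture is expressed as an f-number (focal length divided by the diameter of the opening), and because the light gathered scales with the area of the opening, the light admitted varies with 1/f². A small sketch of the arithmetic:

```python
def light_ratio(f_wide, f_narrow):
    """How many times more light f/f_wide gathers than f/f_narrow.

    The light admitted scales with the area of the opening,
    i.e. with 1 / f_number ** 2.
    """
    return (f_narrow / f_wide) ** 2

# f/2.8 admits four times the light of f/5.6 (two full stops apart).
print(light_ratio(2.8, 5.6))  # 4.0
```

This is why each “full stop” in the standard sequence (f/1.4, f/2, f/2.8, f/4, f/5.6, …) halves the light: consecutive f-numbers differ by a factor of √2.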
It seems that VRR support will no longer be a feature at launch
(Image credit: Nintendo)
Nintendo has quietly removed the mention of VRR support from some of its regional Switch 2 websites
The US, Canada, and Japan websites no longer feature the mention of VRR support
As of writing, the UK website still mentions VRR, but that mention could still be removed
Nintendo has quietly removed any mention of variable refresh rate (VRR) support from some of its regional Switch 2 websites, suggesting the console may not offer the feature after all.
That’s according to Digital Foundry’s Oliver Mackenzie (via VGC), who spotted that the US website has been updated since the Nintendo Switch 2 Direct, and no longer mentions VRR support for docked play.
Now it reads: “Take in all the detail with screen resolutions up to 4K when you connect the Nintendo Switch 2 system to a compatible TV using the dedicated dock. The system also supports HDR and frame rates up to 120 fps on compatible TVs.”
It’s not just the US website that has been updated, but the Canada and Japan sites too.
As of writing, the UK site still mentions that the Switch 2 “supports HDR, VRR, and frame rates up to 120 fps on compatible TVs,” but Nintendo may be in the process of removing it from all its regional sites.
It’s unclear why Nintendo has made these changes, but Mackenzie theorises that VRR support may not be available at launch. However, it looks like the Switch 2 in handheld mode will still offer VRR thanks to Nvidia G-Sync, which will ensure “ultra-smooth, tear-free gameplay.”
Everything we needed to know about the Switch 2’s specs was revealed during the Direct earlier this month, where it was also confirmed that the console will have a bigger screen (up from 6.2 inches to 7.9 inches), 256GB of internal storage, and a mouse function for its magnetic Joy-Con controllers.
The iPhone 6S (above) is now officially ‘vintage’ – despite not looking radically different to today’s iPhones
Apple has just labeled the iPhone 6S as “vintage”
The same designation has been applied to the 2018 Mac mini
This means repairs are more limited should something go wrong
Are you still using an iPhone 6S or a 2018 Mac mini? If you are, we’ve got some bad news: Apple has just declared both products to be “vintage” on its vintage and obsolete products page, which means you’ll get much more limited service and repairs for them if anything goes wrong.
The iPhone 6S and iPhone 6S Plus were released in 2015 and came with a few notable milestones. They were the first iPhones to come with Apple’s 3D Touch tech, while they were also the last to feature headphone jacks.
In addition, Apple strengthened the chassis of the devices to prevent the kind of ‘bendgate’ controversy that befell the iPhone 6. The iPhone 6S was last offered for sale by Apple in 2018.
The 2018 Mac mini, meanwhile, was the last Mac mini to come with an Intel processor rather than an Apple silicon chip (the first of which – the M1 – debuted in 2020). And it was the first (and so far only) Mac mini to come in a space gray finish.
The 2018 Mac Mini (above) has joined the iPhone 6S on Apple’s perilous ‘vintage’ list
Other than taking many of us on a trip down memory lane, this news has some practical implications for those who are still running an iPhone 6S or 2018 Mac mini.
Apple labels a product as “vintage” when at least five years have passed since the company last offered it for sale. Products that last went on sale seven or more years ago are designated as “obsolete.”
Now that the iPhone 6S and Mac mini have been declared “vintage,” that means your repair options are more limited. You can get them fixed at Apple Stores and Apple Authorized Service Providers (AASPs), but only if the required parts are available. Third-party shops might be able to repair your device if Apple or its AASPs won’t.
The next step – declaring a product to be “obsolete” – means that Apple Stores and AASPs generally will not repair your device, with Apple declining to provide replacement parts. In that case, you have no option but to either rely on a third-party repair shop or upgrade your device.
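The two thresholds Apple describes can be summed up as a simple rule on years since a product was last sold (a sketch of the stated policy, not an official Apple API):

```python
def apple_support_status(years_since_last_sold):
    """Classify a product per Apple's stated policy:
    5+ years since the last sale = "vintage", 7+ years = "obsolete"."""
    if years_since_last_sold >= 7:
        return "obsolete"
    if years_since_last_sold >= 5:
        return "vintage"
    return "supported"

print(apple_support_status(6))  # vintage
print(apple_support_status(8))  # obsolete
```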
So, if you are still using an iPhone 6S or 2018 Mac mini, you’ve still got a little longer before Apple stops offering repairs. That said, with these devices getting long in the tooth – and products like the iPhone 16 and M4 Mac mini offering far better performance – now might be a good time to look at upgrading to one of the best iPhones and best Macs you can get.
Snipping Tool feature now in testing will sort you out in a jiffy
Windows 11 is powering up the Snipping Tool in testing right now
A new feature allows you to copy all the text from an image straight away
There’s no need to take a screenshot first, as was previously the case, so this is a neat time-saver
Microsoft is making it even easier for Windows 11 users to extract text from images (or any content) on the screen.
Windows 11 already has this OCR-powered (Optical Character Recognition) ability, as you may be aware, but at the moment, it’s necessary to take a screenshot first with the Snipping Tool before you can extract text from that image.
With a new update for the Snipping Tool that’s just been released (which was leaked previously), you don’t need to grab a screenshot to perform text extraction any longer – although bear in mind this is still in testing at this point (so may still be wonky).
As Neowin reports, with preview version 11.2503.27.0 of the Snipping Tool, you can simply hit the Windows + Shift + S keys together, and this will pop up the capture bar for the tool.
However, instead of having to create a snip (screenshot), the ‘text extractor’ option will be right there in the bar, so you can just click that, with no need to save a screen grab first.
Essentially, this is directly integrating the ability to extract text from images (or any screen content) into Windows 11, with no additional steps needed, mirroring the functionality present in Microsoft’s PowerToys suite of tools (for advanced Windows users) – and it’s definitely going to be appreciated by folks who use this capability.
It’s obviously less of a hassle than having to clear the hurdle of actually grabbing a screenshot, if all you’re interested in doing is copying all the text that’s currently visible on your monitor.
I say all the text, but that’s only what happens if you use the ‘Copy all text’ option provided. If you just want a specific portion of text, you can manually select and extract only those words (it’s also possible to remove line breaks if you want).
Microsoft is slowly expanding Windows 11’s OCR powers, and you may recall that late last year, the Photos app got Optical Character Recognition built in to pull text from images directly within the application.
Google is testing even more new features in its Messages beta app
These include an expanded 14-line message view and new RCS message labels
While these are still in beta testing, they could start rolling out to users this month
Over the past couple of months, Google has been doubling down on eradicating all traces of Google Assistant to make Gemini its flagship voice assistant, but amidst the organized Gemini chaos, Google has also been paying a lot of attention to improving its Messages app, giving it some much-needed TLC.
It’s safe to say that the new revisions to the Google Messages app have significantly improved its UI; its new snooze function for group chats also comes to mind, but Google is still in its beta-testing era. For a while, Google was experimenting with an easier way to join group chats, following in WhatsApp’s footsteps. Now, it’s testing five more features that could make up the next wave of Google Messages upgrades this month.
Although these features are in beta, there’s been no word on whether they’ll officially roll out to all users. With that said, we’ll be keeping an eye out for any further updates.
Just a few weeks ago, we reported on a new upgrade found in the Google Messages beta indicating that Google would get better at handling lengthy text messages.
For a while, Google Messages users have been restricted to a four-line view limit when composing texts, meaning that you would need to scroll to review your entire message before sending. This is particularly frustrating when sending long URLs.
But that could soon be a thing of the past, as 9to5Google has picked up new beta code that reveals an expanded message composition field on the Pixel 9a that now reaches up to 14 lines.
Recently, Google has been testing new in-app labels that could distinguish whether you’re sending an SMS or RCS message.
Thanks to an APK teardown from Android Authority, the labels found in beta suggest that soon you’ll be able to see which of your contacts are using RCS in Messages, adding a new RCS label to the right side of a contact’s name or number.
Unsubscribe from automated texts
This is a feature we’re quite excited to see, and we’re hoping for a wider rollout this month. A few weeks ago, an unsubscribe button was spotted at the bottom of some messages, which could give users an easier way of unsubscribing from automated texts, and even the option to report spam.
When you tap this, a list of options will appear asking for your reason for unsubscribing, which includes ‘not signed up’, ‘too many messages’, and ‘no longer interested’, as well as an option for ‘spam’. If you select one of the first three, a message reading ‘STOP’ will be sent automatically, and you’ll be successfully unsubscribed.
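Sketched as logic, the described flow maps three of the reasons to an automatic ‘STOP’ reply, with spam handled as a report. This is an illustration of the behaviour as reported, not Google’s actual implementation:

```python
def unsubscribe_action(reason):
    """Map a chosen unsubscribe reason to the action described in the beta.

    Illustrative logic only, not Google's implementation.
    """
    auto_stop = {"not signed up", "too many messages", "no longer interested"}
    if reason in auto_stop:
        return "send STOP"
    if reason == "spam":
        return "report spam"
    raise ValueError(f"unknown reason: {reason}")

print(unsubscribe_action("too many messages"))  # send STOP
```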
Read receipts get a new look
(Image credit: 9to5Google)
Google could introduce another revamp of how you view read receipts in the Messages app. In November 2024, Google tested a redesign of its read receipts that placed the checkmark symbols inside the message bubbles, where previously they appeared underneath sent messages.
In January, Google tested another small redesign introducing a new white background, which could roll out soon, and while this isn’t a major redesign, it’s effective enough to make read receipts stand out more.
Camera and gallery redesign, and sending ‘original quality’ media
We first noticed that Google Messages was prepping a new photo and video quality upgrade. In March, more users started to notice a wider availability, but it’s still not yet fully rolled out, meaning it could be one of the next new updates in the coming weeks.
Essentially, Google could be rolling out a new option that allows you to send media, such as photos and videos, in their original quality. This will give you the choice of the following two options:
‘Optimize for chat’ – sends photos and videos at a faster speed, compromising quality.
‘Original quality’ – sends photos and videos as they appear in your phone’s built-in storage.
I’ve been an iPhone user since 2009 when I got my first iPhone 3G, and since then I’ve been a loyal customer, upgrading annually to the best smartphone Apple has to offer.
When Samsung released the S25 series of smartphones earlier this year with AI at their core, I knew I had to finally give Android a proper go and see what Google’s mobile operating system was capable of.
Over the next few weeks I’m going to pit my S25’s Galaxy AI features against my iPhone 16 Pro Max’s Apple Intelligence capabilities to see which smartphone has AI features worth using compared to those that are just a gimmick.
When I received the S25 early last week, the first thing I decided to do was test Galaxy AI’s photo editing prowess; after all, Clean Up on iPhone is one of the most complete Apple Intelligence features to date. Or so I thought.
To start my Galaxy AI Generative Edit versus Clean Up comparison, I began by erasing my French Bulldog, Kermit, from his very plain grey bed. I thought this was a good starting point: while the bed is a plain background, there would be the shadow of the dog cast by the natural sunlight from the window in front of the camera.
Galaxy AI had no issue removing Kermit from his bed; in fact, not only did it remove the shadow, it continued the brown stitching of the bed’s border and smoothed out the surface to completely erase any sign of Kermit from the photo.
Apple Intelligence’s Clean Up, on the other hand, failed miserably at this simple task, leaving Kermit’s shadow while removing the dog from the bed. This AI editing created a sort of blur effect that would never pass for an edited image, let alone an original one.
Next up, I took my two smartphones to my local coffee shop to test AI photo editing out in the wild. As I went later in the day, there were only two croissants left, one regular and one pain au chocolat.
For this test, I decided to remove the pain au chocolat from a photo as the coffee shop’s branded paper underneath was monochrome and a repeating pattern that I thought would make for an interesting comparison.
Again, Samsung’s AI editing was impressive, to say the least. Not only did the pain au chocolat completely disappear from the image, but Galaxy AI replicated the branded pattern perfectly, keeping some crumbs for added realism.
The iPhone’s attempt was, again, pretty rubbish, creating a sort of crumpled-paper effect and leaving the pain au chocolat’s shadow in plain sight. This was again pretty disappointing from Apple Intelligence; there may be a trend appearing here…
I asked the barista behind the bar if he could pour a Flat White so I could try and remove the coffee cup from his hands using Galaxy AI and Apple Intelligence.
I thought it was worth trying just to see how the AI photo editing tools handle pouring liquid, and again the results are night and day.
On the S25, the cup disappeared, Galaxy AI recreated the barista’s thumb, inserted some objects on the surface, and tried to create the impact of the hot milk on the sink below.
While the liquid’s physics are somewhat off, the editing of the hand and the recreation of what Galaxy AI perceived to be behind the cup were seriously impressive.
As for Apple Intelligence… I’ll let the image do all the talking.
Galaxy AI 3-0 Apple Intelligence
The empty shop
At this point, I had completely accepted the disparity in effectiveness between Samsung and Apple’s offerings, so I decided to push Generative Edit and Clean Up as far as these flagship smartphones would allow me to.
I took a photo of the busy coffee shop, and after erasing one of the people from sight I decided to go even further and just erase everything from the photo, leaving nothing but the seating.
Again, Galaxy AI passed with flying colors, recreating the shops outside the window and extending the sofa where I had removed my table and coffee cup. Apple Intelligence, well… it mushed everything together and was absolutely useless.
Galaxy AI 4-0 Apple Intelligence
A whitewash
It’s fair to say there’s a clear winner when it comes to AI photo editing between these two devices, and if you follow any smartphone news you’re probably not surprised.
What was surprising to me was just how amazing Galaxy AI’s photo editing is, and how bad Clean Up on my top-of-the-line iPhone is.
I’ve used Clean Up in the past and found it did a decent, but hardly mind-blowing, job of removing a subject from a photo. After using the S25, I can’t believe Clean Up has even shipped in its current state, knowing what alternatives are out there.
I want to emphasize that the two companies have taken different approaches to AI photo editing. Apple wants to keep the image as close to the original as possible, while Samsung is happy to showcase its AI power and offer more dramatic edits.
While I think that’s worth keeping in mind, I still think Apple’s Clean Up should be capable of the simple edits you’ve seen above. Instead, it has fallen incredibly short of the mark while Samsung’s offering truly achieves what it sets out to do.
Everything from .ae to .zw will now redirect to google.com.
(Image credit: Reuters)
Google announced today that it will no longer be using country code top level domains for searches. Instead, all search services will happen on the google.com URL and local results will be delivered automatically. For example, that means users in the UK will no longer see google.co.uk in their browser’s address bar. Google URLs with those country-specific domain endings will now redirect to the main google.com address.
Google started using location information to automatically provide search results based on geography in 2017. With that change, it didn’t matter whether you entered a query into a local country code URL or into google.com; you’d always see the results version for the place you were physically located. Today’s announcement seems to take that initial action to its conclusion by sunsetting those ccTLDs.
“It’s important to note that while this update will change what people see in their browser address bar, it won’t affect the way Search works, nor will it change how we handle obligations under national laws,” Google noted in its announcement.
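In effect, the change collapses every country-code Google domain onto one canonical host while preserving the rest of the URL. A hypothetical sketch of that mapping (the function and its www-only handling are illustrative, not Google’s implementation):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize_google(url):
    """Rewrite a www.google.<cc> URL onto www.google.com, keeping path/query.

    Only handles www.google.<cc> hosts; illustrative, not Google's code.
    """
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www.google.") and host != "www.google.com":
        parts = parts._replace(netloc="www.google.com")
    return urlunsplit(parts)

print(canonicalize_google("https://www.google.co.uk/search?q=hdmi"))
# https://www.google.com/search?q=hdmi
```

Note how the query string is left untouched, matching Google’s statement that the redirect changes only what appears in the address bar, not how Search works.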