If you’ve spent a lot of time doing digital photography, or if you’ve owned a lot of Android devices, you’re likely familiar with the humble yet mighty SD card. Across multiple specifications and sizes — SD, SDHC, SDXC, and SDUC, each available in regular and micro sizes — it’s a tried and true storage format. These days, an SD card can hold upwards of one terabyte of data. For mobile photographers and videographers, their slim dimensions make it easy to carry a bunch of them and hot swap as needed. For those who own one of the vanishingly few Android phones with a MicroSD slot, they’re a convenient way to massively increase the storage capacity of those devices. But how did the SD card format come to be, and what does ‘SD’ mean? Hint: it doesn’t stand for SanDisk.
In fact, SD stands for “Secure Digital,” and these little memory cards were originally designed not for photos and videos, but for music. Back in 1999, Toshiba, SanDisk, and Panasonic joined forces to create a new memory standard that could rival Sony’s Memory Stick (more on that later). There was another motive at play in the background, too. The music industry was fighting a losing battle against digital piracy, and major labels were desperately searching for a way to stem the tide.
The Secure Digital name was deliberately chosen in part because SD cards worked with the Secure Digital Music Initiative, the music industry’s effort to find ways of digitally distributing music that couldn’t be easily shared online. But by the early 2000s, SDMI had gone the way of the dodo. Though DRM compatibility remained a part of the spec, SD cards never became the future of music distribution, instead becoming a staple of simple storage solutions.
SD stands for Secure Digital, but it may have another meaning
The ‘SD’ stands for Secure Digital, but it originally stood for something entirely different. If you examine the SD logo stamped on an SD card or card reader, you may notice that the ‘D’ is shaped like a circular disc. Some printings of the logo even have visual accents on that letter to make it appear more like a CD or DVD. To state the obvious, nothing about an SD card is at all disc-like, so what gives?
It has been theorized that the SD card logo was originally intended for another Toshiba-related technology that never made it to market. In 1995, Toshiba showed off its planned SD-ROM discs, which were meant to compete with the burgeoning DVD format being developed around the same time. The logo we now see on SD cards was plastered all over the press release.
In this case, ‘SD’ stood for Super Density. Since optical discs increase their storage capacity by packing their microscopic data tracks closer together, Super Density was an apt description. However, SD-ROM never came to market, leaving Toshiba with an unused logo. When the company got involved in the development of the SD card a few years later, we can surmise that it seemed like a perfect opportunity to finally put that logo to use. So while the logo has disc-shaped roots, the SD card itself was never a DVD competitor.
As mentioned near the top of this article, SD cards were in large part a response to Sony’s Memory Stick format. Sony has had a long history of trying to popularize its proprietary media formats, and a long track record of losses. If you’re old enough to sigh when you sit down in chairs, you’ll probably remember the wars that raged between Toshiba’s HD-DVD and the now ubiquitous Blu-ray format created by Sony. That battle went in Sony’s favor largely thanks to the PlayStation 3, since if you owned that console, you also owned a Blu-ray player. But there are far more discontinued Sony formats than there are popular ones. Betamax, MiniDisc, and DAT have all been consigned to history’s waste bin.
Memory Stick survived longer than most Sony formats, again thanks to a hardware advantage. It’s well known that Sony makes some of the best cameras on the market, and for a very long time, the company insisted on exclusively using Memory Stick. But unlike with gaming consoles, it was easy to simply buy a Canon or Nikon camera if you didn’t like the Memory Stick. And a lot of people did not like Memory Stick. It was expensive, proprietary, and not widely supported. By 2003, SD cards had surpassed it in popularity, and the trend never reversed.
It wasn’t until 2010 that Sony tacitly admitted defeat by releasing new products with support for both SD cards and Memory Stick. That’s probably for the best. As much as consumers can initially benefit from competition, there eventually needs to be a single, unified standard that they can use.
Google put on an Android Show today to offer a glimpse at its upcoming interface changes with Android 16, in addition to a slew of Gemini news. It didn’t show off any new devices running the new look; instead, Google offered advice to developers and an explanation of its overall design philosophy. That philosophy seems very… purple.
The new Material 3 Expressive guidelines call for extensive use of color (especially shades of purple and pink), new shapes in a variety of sizes, new motion effects when you take action, and new visual cues that group and contain elements on screen.
A screengrab of examples from Google’s Material 3 Expressive blog post (Image credit: Google)
Google says it has done more research on this design overhaul than any other design work it’s done since it brought its Material Design philosophy to Android in 2014. It claims to have conducted 46 studies with more than 18,000 participants, but frankly, I’m not a UX designer, so I don’t know if that’s a lot.
Google’s Material 3 Expressive is the new look of Android 16
After all of that work, Google has landed on this: Material 3 Expressive. The most notable features, once you get past the bright and – ahem – youthful colors, are the new motion effects.
For instance, when you swipe to dismiss a notification, the object you are swiping will be clear while other objects will blur slightly, making it easier to see. The other notifications nearby will move slightly as you swipe their neighbor. Basically, there will be a lot more organic-looking motion in the interface, especially on swipes and the control levers.
New shapes are coming to Android 16 with Material 3 Expressive (Image credit: Google)
There will also be new type styles built into Android 16, with the ability to create variable or static fonts. Google is adding 35 more shapes to its interface library for developers to build with, along with an expanded range of default colors.
Google didn’t say that its new Material 3 Expressive design language was targeting iPhone fans, but the hints are there. The next version of Android won’t just look cleaner and more organized; instead, Google wants to connect with users on an ‘emotional’ level. According to Google’s own research, the group that loves this new look the most is 18-24 year olds, i.e., the iPhone’s most stalwart fan base.
Will this look win over the iPhone’s biggest fans? We’ll see in the months ahead (Image credit: Google)
In its official blog post, Google says, “It’s time to move beyond ‘clean’ and ‘boring’ designs to create interfaces that connect with people on an emotional level.” That connection seems to be much stronger among young people. Google says that every age group preferred the new Material 3 Expressive look, but 18-24 year olds were 87% in favor of the new look.
Apple’s iPhone fanbase is strongest in this age group, if not the generation that’s even younger. It makes sense that Google is making big changes to Android. In fact, this refresh may be overdue. We haven’t seen many inspiring new features in smartphones since they started to fold, and foldable phones haven’t exactly caught on. I’m surprised Google waited this long to improve the software, since there wasn’t any huge hardware innovation in the pipeline (temperature sensors, anybody?).
Material 3 Expressive is coming to more than just Android phones
The new Material 3 Expressive look won’t be limited to Android 16. Google says Wear OS 6 will get a similar design refresh, with more colors, motion, and adaptable buttons that change shape depending on your watch display.
Wear OS watches will also be able to use dynamic color themes, just like Android phones. Start with an image or photo and Wear OS will create a matching color theme for your watch to complement what it sees.
Google demonstrated new buttons that grow as they fill more of the Wear OS display (Image credit: Google)
Even Google’s apps will start to look more Expressive. Google says apps like Google Photos and Maps will get an update in the months ahead that will make them look more like Android 16.
Google borrows a few iPhone features for Android 16, including a Lockdown Mode
Google also demonstrated Live Updates, a new feature that borrows from the iPhone to show you the progress of events like an Uber Eats delivery. The iPhone does this in the Dynamic Island, and Google is adding this feature to the top of the Android 16 display.
Security was a big focus of the Android Show, starting with new protections against calling and text message scams. Google is securing its phones against some common scammer tactics. For instance, scammers might call pretending to be from your bank and might ask you to sideload an app.
With Android 16, you won’t be able to disable Google’s Play Protect app-scanner or sideload any apps while you are on a phone call. You also won’t be able to grant permission to the Accessibility features, a common workaround to get backdoor access to a phone.
Google’s Messages app will also get smarter about text message scams. It will filter out scam messages that ask you to pay overdue toll road fees or try to sell you crypto.
The iPhone already has an extreme protection mode called Lockdown (Image credit: Future / Philip Berne)
Google is also enabling Advanced Protection, its own version of Apple’s Lockdown Mode, on Android 16. Advanced Protection is a high-security mode that offers the strongest level of defense against attacks, whether they come over wireless networks or physically through the USB port.
Basically, if you’re a journalist, an elected official, or some other public figure and you think a government is trying to hack your phone, Google’s Advanced Protection should completely lock your phone against outside threats.
If you don’t need that much security but you still want more peace of mind, Google is improving its old Find My Device feature. Android 16 will introduce the Find Hub, a much more robust place to track all of your devices, including Android phones, wearables, and accessories that use ultra-wideband (UWB), similar to Apple AirTags.
Google is introducing new UWB capabilities to help find objects nearby, and those will roll out to Motorola’s Moto Tag first in the months ahead. The new Find Hub will also be able to use satellite connectivity to help locate devices and keep users informed. Plus, if your luggage goes missing, Google is working directly with certain airlines like British Airways to let you share your tag information so the airline can track down your lost bag.
Gemini is coming to your car… and your TV… and your watch, and…
Today’s Android Show wasn’t all about Android. Google also made some big announcements about Google Gemini. Gemini is coming to a lot more devices. Gemini is coming to Wear OS watches. Gemini is coming to Android Auto and cars that run Google natively.
Gemini is coming to Google TV. Gemini is even coming to Google’s Android XR, a platform for XR glasses that don’t even exist yet (or at least you can’t buy them). For a brief moment in the Android Show, we caught a glimpse of Google’s possible upcoming glasses.
Could these be Google’s new XR glasses? Hopefully we’ll find out at Google I/O (Image credit: Google)
You’ll be able to talk to Gemini Live and have a conversation in your car on the way to work. ‘Hey Gemini, I need advice on asking my boss for a promotion!’ or ‘Hey Gemini, why is my life so empty that I’m talking to a machine in my car when I could be listening to music or a true crime podcast?’
I may sound like an AI skeptic, but Google’s own suggestions are equally dystopian. Google says on the way to your Book Club, you might ask Gemini to summarize that book you read ages ago (and mostly forgot) and suggest discussion topics. That does not sound like a book club I want to join.
Google did not offer any specific timing for any of the features mentioned in the Android Show, and only said these concepts would appear in the months ahead. It’s unusual for Google to share so much news ahead of Google I/O, which takes place May 20-21 near its HQ in Mountain View, CA. I’ll be on the scene at Google I/O with our News Chief Jake Krol to gather up anything new.
With the Pixel 9a launch already behind us, and now team Android spilling all the beans, I suspect Google I/O is going to be mostly about AI. Google is getting these tidbits out of the way so that I don’t waste time asking about new phones when it wants to talk more about Gemini and all the new AI developments. Or perhaps, even better, the Android XR news today was just a hint of what’s to come. Stay tuned, we’ll know more next week!
There must be something in the water in Silicon Valley. Several big tech and digital brands including Amazon and Adobe have revealed subtle tweaks to their logos this month, and now Google has dropped the first significant update to its logo in a decade. This one, however, is a little more noticeable.
Perhaps signalling the final death-knell for the harsh geometry of the flat design movement, Google has blurred the four colours of its ‘G’ logo into a rainbow gradient. The resulting effect is decidedly Instagram-esque, and, somewhat surprisingly for a logo that adorns millions of smartphone home screens, the change is actually going down well. Is Google finally within reach of our best logos roundup?
Old (left) vs new (right) (Image credit: Google)
The update was first noticed today when it replaced the previous icon design for the Google search app on iOS and Android. And while plenty of people have already joked that the new, blurrier version is just the original when viewed without glasses, the overall consensus seems to be that it’s much more contemporary.
“Makes the current one look dated already. That’s a sign of a good design!” comments one reader at 9to5Google, while another adds, “Somehow it looks better than the regular one.” Over on X, one user comments, “A rare logo update that actually looks nice.”
Users think the old logo looks dated in comparison
We can expect there to be more noise around this one as it lands on more (and more) users’ home screens over the coming days. Unlike those subtle Amazon and Adobe rebrands, this one’s going to be placed right in front of users. But judging by the initial response, Google’s new gradient is a winner.
AMD and Nvidia go head to head, but only one team has AI on its side
Upscaling technology has become the new battleground for GPU makers, with both AMD and Nvidia (oh, and Intel) offering their own options for improving your framerates when playing PC games.
For the uninitiated, upscaling technology works by making your graphics card render a game at a lower resolution (1080p, in most cases) before scaling it up – hence the name – to a higher target resolution with no loss of framerate. This lets you get a smoother gameplay experience at 1440p, 4K, and even 8K.
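To make that math concrete, here’s a minimal sketch in plain C++ showing how a quality preset maps a 4K target down to the internal resolution the GPU actually renders. The per-axis scale factors are in the ballpark of the published FSR/DLSS presets, but treat the exact values as illustrative:

```cpp
// Illustrative sketch: how an upscaler's quality preset determines the
// lower internal render resolution for a given output target.
// Scale factors approximate the published per-axis ratios
// (Quality ~0.667x, Balanced ~0.58x, Performance 0.5x).
#include <cstdio>

struct Preset { const char* name; float scale; };

int main() {
    const int targetW = 3840, targetH = 2160;  // 4K output target
    const Preset presets[] = {
        {"Quality",     0.667f},
        {"Balanced",    0.58f},
        {"Performance", 0.5f},
    };
    for (const Preset& p : presets) {
        // The GPU renders at this internal resolution; the upscaler then
        // reconstructs the image back up to the 4K target each frame.
        int w = static_cast<int>(targetW * p.scale);
        int h = static_cast<int>(targetH * p.scale);
        printf("%-12s renders %dx%d, upscales to %dx%d\n",
               p.name, w, h, targetW, targetH);
    }
    return 0;
}
```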
Nvidia’s Deep Learning Super Sampling (DLSS), as the name implies, uses deep-learning AI functionality to provide a highly effective upscaling solution. Over on the red side, AMD’s FidelityFX Super Resolution (FSR) notably didn’t use AI for most of its life, but ultimately delivers the same kind of result.
If you’re buying a new graphics card, then you’ll want to think carefully about which upscaling tech you’ll be relying on in today’s demanding games.
Nvidia has made strides with DLSS (Deep Learning Super Sampling) since the introduction of the RTX 20 series, and AMD has backed FSR for over four years. They’ve both come a long way since their introduction, but which is better out of DLSS vs FSR?
That’s what we’re here to find out. It’s worth noting that both graphics upscalers are in their fourth iterations now, with DLSS 4 and FSR 4, respectively. A lot has changed with the best graphics cards over the last half-decade, and we’ve seen AMD switch gears in its attempt to be more competitive against its rival.
The two upscalers (previously) worked very differently, which meant a vast gap in compatibility and software support, something that’s been narrowed over the last few months.
We’re comparing DLSS vs FSR based on the performance, software compatibility, and the quality of upscaling to help you decide which is the right fit for use with your GPU.
We’ve also weighed developments like AI-powered Frame Generation tech, which lets you chase higher framerates than ever before, provided you’ve got one of the best gaming monitors to make it worthwhile at 1440p, 4K, or even 8K.
The biggest factor when choosing between Nvidia DLSS and AMD FSR comes down to performance, and it’s something that’s changed massively over the last few years.
When directly compared only a year or so ago, it would have been a night-and-day contest, with Nvidia’s AI-powered upscaling tech (largely) coming out on top, but that’s not necessarily true right now, thanks to the improvements AMD made with FSR 4.
To outline the differences in performance, we first need to know how the two graphics upscalers work. In brief, Nvidia DLSS utilizes the Tensor Cores (AI hardware) of RTX graphics cards in tandem with the GPU’s CUDA cores, running trained algorithms that render the image at a lower resolution and then reconstruct it back up to a target resolution, enabling higher framerates (and better performance) than rendering natively.
Until the release of FSR 4, AMD’s upscaling tech was an open-source driver-based software that used sampling algorithms to downsample and then blow the image up to a target resolution. However, with AMD FSR 4, Team Red has embraced Machine Learning tech exclusively with its RDNA 4 GPU line to produce a better product that’s more on par with what Team Green is doing.
The new development for DLSS with the previous two GPU generations has been Frame Generation and Multi Frame Generation, which have been exclusive to the RTX 40 series and RTX 50 series, respectively.
This technology uses AI to generate frames that are interpolated with natively rendered ones for a higher framerate than native, even when compared to the boost afforded by down-sampling and then upscaling. AMD’s previous version of this tech, Fluid Motion Frames (AFMF), was a core part of FSR 3 as a driver-based solution, but it’s since been replaced by an AI-powered solution (loosely) on par with what was possible with DLSS 3.
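To put rough numbers on that, here’s a small illustrative sketch (hypothetical figures, not benchmarks) of how 2x, 3x, and 4x frame generation multiply the framerate the display actually sees over what the GPU natively renders:

```cpp
// Illustrative sketch of frame generation math: for an m-x multiplier,
// (m - 1) AI-generated frames are inserted per rendered frame, so the
// display is presented m times as many frames. Numbers are hypothetical.
#include <cstdio>

int main() {
    const float renderedFps = 60.0f;      // what the GPU natively renders
    const int multipliers[] = {2, 3, 4};  // FG (2x) up to MFG (4x)
    for (int m : multipliers) {
        printf("%dx frame generation: %.0f rendered fps -> %.0f presented fps\n",
               m, renderedFps, renderedFps * m);
    }
    return 0;
}
```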
With the context out of the way, we can state that DLSS 4 beats out FSR 4 in terms of producing higher framerates thanks to MFG, which AMD does not have yet, but the image quality results can be incredibly similar.
AMD has massively stepped up its game in how faithful upscaled gameplay can look under the right circumstances, without the excessive sharpening and heavy noise reduction of its previous versions.
However, FSR 4 is currently exclusive to only two GPUs on the market, the RX 9070 and RX 9070 XT, whereas DLSS 3 and DLSS 4 frame generation can be utilized by anyone running the RTX 40 series and RTX 50 series, respectively.
This is only when taking FSR 4 at face value, however, as FSR 3 (and older versions) remain open-source in how they work and how they can be implemented into modern games.
AMD’s result may not be as strong as what Nvidia’s doing, and its latest efforts may be limited, but it’s worth considering this as a plus all the same. Nvidia still takes the win for this round, but things could change if Team Red continues to innovate instead of just playing catch-up.
Compatibility between Nvidia DLSS and AMD FSR looks incredibly one-sided at first. As touched upon above, AMD FSR (except FSR 4) is completely open-source and driver-based, and can be used on many different generations of not only Team Red’s hardware, but Intel’s and Nvidia’s as well.
FSR 3 is officially supported by the RX 5000 series and up, which were released more than six years ago, whereas Nvidia DLSS only works on the RTX 20 series and up, with the new versions getting more exclusive with each new GPU generation launch.
With that said, new iterations of Nvidia DLSS do not always lock away all the pivotal features. For example, while DLSS 3 is commonly thought to only be for the RTX 40 series, that only applies to the Frame Generation tech, and not some of its other features, such as Ray Reconstruction from DLSS 3.5 and DLSS 3.7’s new Streamline SDK presets, which can be utilized by the RTX 20 series and RTX 30 series as well.
As previously mentioned, FSR 4 goes all in on Machine Learning and forgoes its open-source and wide-ranging compatibility in favor of delivering a higher-quality product, but we still have to give AMD the win in this respect for everything that can be done with older versions of the software.
DLSS and FSR quality and compatibility wouldn’t matter if games didn’t utilize the software, but that’s (thankfully) not the case. Nvidia claims that over 760 games now support its “RTX” technology, taking all versions into account since its launch back in 2018.
With that said, only around 13% of this total is said to use the latest version of the graphics upscaler, as Nvidia confirms over 100 games have (or will have) DLSS 4 support for Multi Frame Generation.
While that list no doubt includes many of the best PC games on the market, just shy of 800 supported titles is still a far cry from the tens of thousands of releases on the PC platform currently.
With that said, Nvidia is still running rings around AMD when you take the adoption figures of its upscaling tech into account. It’s believed that there are around 250 games that support FSR across all its different versions, with around 120 of those utilizing FSR 3 and about 40 now using the AI-based FSR 4, according to AMD.
Now, it’s still very much early days for both DLSS 4 and FSR 4, which were launched at the end of 2024 and beginning of 2025, respectively, so we’re expecting these figures to increase dramatically over the next few months (and years) as more developers take advantage of the tech.
We’ve gotten to the point where it’s common for a new AAA PC game to natively support DLSS, FSR, and XeSS out of the box, or to add them shortly after launch; it’s now considered strange for a new release to skip upscaling at release.
Weighing DLSS vs FSR in terms of game support, Nvidia still wins comfortably, but the tide could change if FSR 4 takes off as more RDNA 4 GPUs come out.
Winner: Nvidia DLSS
Nvidia DLSS vs AMD FSR: Which is best?
Delivering a verdict on which is better out of Nvidia DLSS vs AMD FSR isn’t as cut and dried as the rounds above might suggest; it’s nuanced, and depends on the kind of hardware you have access to in the first place.
For example, if you’re using a cheap graphics card (or an older GPU), then you’re going to benefit more from previous versions of FSR to get your games into the playable 60fps range at 1080p and 1440p.
However, a cutting-edge RTX 5080 or RTX 5090 trying to achieve 120fps in 4K (and even 8K) will need the processing power of DLSS 4’s Multi Frame Generation, and we’ve seen incredible things gaming at 8K.
Nvidia DLSS is exclusive to Team Green’s hardware, whereas AMD FSR can be used not only on Team Red graphics cards, but on its competitors’ as well. It’s going to depend on your hardware; if you’re running an RTX 40 series card then you’ll want to enable Frame Generation, and the RTX 50 series’ MFG will add a further shot in the arm.
Because of this, we can’t definitively say one is wholesale better than the other, but we encourage you to try them out in supported games if your hardware allows it. Whichever gives you the better FPS boost and picture quality is the one to enable.
Winner: Tie
Nvidia DLSS vs AMD FSR: FAQs
Can I use both FSR and DLSS?
For the most part, yes, you can use both FSR and DLSS with your modern graphics card, provided you’re not trying to run FSR 4, which is exclusive to two AMD RDNA 4 GPUs right now.
Is DLSS 4 better than FSR 4?
While DLSS 4 and FSR 4 deliver very comparable results in image quality with their four respective presets, Nvidia’s AI-upscaling tech wins out with Multi Frame Generation, allowing for up to four times the performance you would normally get natively, whereas AMD’s frame generation is far less powerful right now.
What a CUDA core is, what it does, and why it’s important
Whether you’re running one of the best graphics cards made by Nvidia or an entry-level model from several years ago, it’ll be built around CUDA cores. Not to be confused with Tensor Cores (AI cores), which power the likes of DLSS and machine learning, CUDA cores are the GPU’s general-purpose workhorses. We’re going over everything there is to know about them, including how they work, their history, and how they’re utilized.
CUDA cores play an essential role in powering the graphics tech behind some of the best PC games, and they also enable data science workloads and general computing beyond graphics rendering. We’re explaining how it all works and why it’s important further down the page.
What is a CUDA Core?
To understand CUDA cores, we first need to understand Compute Unified Device Architecture as a platform. Developed by Nvidia nearly 20 years ago, it’s a parallel computing platform with purpose-built APIs (Application Programming Interfaces) that gives developers access to compilers and tools for running hardware-accelerated programs.
Supported programming languages for CUDA include C, C++, Fortran, Python, and Julia, with supported APIs including not only Direct3D and OpenGL, but frameworks such as OpenMP, OpenACC, and OpenCL. CUDA provides both low-level and higher-level APIs on its platform, with an ever-expanding list of libraries for generalized computing – tasks previously thought achievable only through your computer’s processor.
A CUDA core is a SIMD (Single Instruction, Multiple Data) processing unit found inside your Nvidia graphics card that handles parallel computing tasks; with more CUDA cores comes the ability to do more with your graphics card. The number of CUDA cores in today’s GPUs has steadily increased over the last 10 years, with top-end performers such as the RTX 5090 featuring 21,760 of them and the RTX 4090 using 16,384.
These two enthusiast-class graphics cards may be (primarily) marketed on their 4K and 8K gaming performance, but they’re also aimed at tasks such as data science, video processing, encoding, rendering, and AI model training.
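If you’re curious what your own card exposes, here’s a minimal sketch using the CUDA runtime’s device query. Note the runtime reports streaming multiprocessors (SMs) rather than CUDA cores; the cores-per-SM figure in the comments depends on the architecture:

```cpp
// Minimal CUDA device query: the runtime reports SM count, and total CUDA
// cores = SMs x cores-per-SM (architecture-dependent; e.g. 128 on recent
// GeForce parts, so an RTX 4090's 128 SMs x 128 = 16,384 cores).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // properties of GPU 0
    printf("%s: %d SMs, compute capability %d.%d\n",
           prop.name, prop.multiProcessorCount, prop.major, prop.minor);
    return 0;
}
```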
Nvidia first created CUDA in 2006, and the first commercially available graphics card to utilize the technology was the GeForce 8800 GTX, part of the eighth generation of the original GeForce lineup, featuring 128 CUDA cores.
Using CUDA and the APIs specifically developed for the platform, this GPU was significantly faster at general-purpose computing beyond traditional graphics rendering, which was the sole point of video cards back in the day.
Every Nvidia graphics card released afterwards, including the GeForce 500 series, GeForce 600 series, GeForce 700 series, and GeForce 900 series, was built to support CUDA.
Around this time, we saw graphics cards begin to be fully marketed around their CUDA-capable prowess for advanced computing, such as with the Nvidia GTX Titan in 2013, which featured 2,688 CUDA cores and 6GB of GDDR5 memory at a time when its contemporaries (like the GTX 770 and GTX 780) lagged significantly behind.
Fast-forwarding to today, thousands of applications have been developed with CUDA, and all graphics cards from Nvidia natively support the platform, whether they’re gaming GPUs (like the RTX 5070 and RTX 5080) or high-end Quadro ones made expressly for developers and data servers.
The CUDA Toolkit has been steadily upgraded since its launch in 2007 and is currently in its 12th iteration, which is primarily geared toward the company’s H100 and A100 GPUs, with new APIs and tools specific to data center platforms.
CUDA cores work similarly to how CPU cores work on a desktop or laptop processor; they’re built to process large amounts of data simultaneously with a technique called SIMT (Single Instruction, Multiple Threads). In essence, this means a large number of cores all working on an identical process at the same time.
Whereas some of the best processors on the market (like the AMD Ryzen 9 9950X3D) may feature 16 processing cores, the average GPU now features around 3,000 processing cores, making hardware-based (GPU-accelerated) tasks, such as video editing, 3D rendering, gaming, and simulation, easier and faster to do.
Whereas a CPU core has lower latency and is good for serial processing, a CUDA core has higher throughput and breaks down the processes into smaller tasks through parallel processing.
As the name suggests, the many thousands of CUDA cores built into your GPU execute the same instruction stream, with each thread independently working through its own sub-task. CUDA cores are, therefore, highly specialized for specific tasks compared to a CPU’s more generalized approach.
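As a concrete illustration of the SIMT model, here’s a minimal CUDA vector-add sketch: every thread runs the exact same kernel, and its thread index decides which element of the data it owns (array size and launch configuration here are arbitrary):

```cpp
// Minimal SIMT example: one kernel, launched across ~a million threads,
// each adding a single pair of elements.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    // Same instruction stream for every thread; the index picks the data.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // ~1M elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory keeps the demo short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // 256 threads per block, enough blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f (expected 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```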
How are CUDA Cores utilized for gaming and workloads?
Considering that CUDA cores are parallel processing units that excel at large and intensive operations, having more of them can make your gaming experience smoother and faster.
They handle advanced calculations such as lighting, shading, physics, rasterization, pixel calculations, anti-aliasing, frame rate optimization, texture mapping, and more. With parallel computing, these intensive tasks can be broken down into smaller jobs that the CUDA cores work through all at once.
For more advanced computing, CUDA cores can handle things such as high-level data processing, scientific simulations, and mathematical operations, helped by the fact that a CUDA core can execute floating point and integer operations concurrently.
CUDA as a platform has been praised for its C/C++ interface, ease of use, large ecosystem, libraries, and existing programming models, and there are nearly 20 years of hardware built to support it. Everything from image processing to deep learning and other forms of computational science can be achieved with the platform, after all.
Do AMD graphics cards use CUDA cores?
CUDA is an Nvidia-developed platform, and CUDA cores are the company’s term for its GPU cores. AMD uses its own, entirely different Stream Processors for its GPU cores, and the two don’t equate one-to-one.
To boil things down to the most basic comparison, both CUDA cores and Stream Processors are essentially shaders (or Unified Shader Units) capable of parallel computing tasks such as shading.
Enhanced Meeting Protection will block you from taking unwarranted screenshots
It’ll turn the screen black if you dare attempt a screen capture
Most platforms are supported, but some may have to join audio-only
Microsoft has alluded to an upcoming feature for Teams designed to prevent users from taking unwarranted screenshots during calls in a bid to protect sensitive company information.
A new entry on Redmond’s roadmap adds Enhanced Meeting Protection to Teams, which Microsoft says will prevent screen capture. Added last week, it’s on track to roll out from July 2025.
The feature will become available across desktop client versions on Windows and Mac, as well as iOS and Android apps, making it virtually impossible for users to take snippets of potentially sensitive information.
Microsoft Teams will let you block screenshots soon
“To address the issue of unauthorized screen captures during meetings, the Prevent Screen Capture feature ensures that if a user attempts to take a screen capture, the meeting window will turn black, thereby protecting sensitive information,” Microsoft explained.
Although most common platforms are supported, users joining a Teams call with Enhanced Meeting Protection enabled from an unsupported platform will be restricted to audio-only to prevent content exposure.
Because the roadmap entry only shares basic details about the upcoming feature, it’s unclear whether it will be enabled by default or toggled on via admin controls.
In the hope that enhanced protections will give companies access to more secure video conferencing, the company will also be rolling out its new Migration Tool for Teams in July.
“Customers will now be able to move content seamlessly from public and private channels in a third-party solution to Teams standard channels,” the roadmap entry reads.
Enhanced Meeting Protection is currently in the ‘in development’ stage – the first of three, preceding ‘rolling out’ and ‘launched’. Microsoft doesn’t share details about how far along the development journey it is, or whether it’s on track for the intended July release.
However, while the feature might be welcomed by many, it still leaves a considerable gap and, in many cases, does nothing to protect sensitive data on screens at all – there’s no system in place to prevent users from taking photos of their screens with their smartphones, and such a tool would be almost impossible to implement.
Well, there goes Skype. Bye-bye, you garbage piece of software. I’m surprised you managed to hang around for as long as you did, frankly.
Okay, I’m being a bit mean here; the impact of Skype on the global tech ecosystem shouldn’t be downplayed, as it effectively brought video communication to the mainstream – something that previously was the domain of corporate execs with money to burn on expensive early video-conferencing solutions. For a wonderful, all-too-brief period in the early 2010s, Skype was everywhere: a way to chat face-to-face with distant relatives or schoolmates who were just beyond the reach of an after-class bike ride.
But I can’t pretend Skype was all sunshine and rainbows, even before the pandemic lockdowns and the rise of its chief competitor,Zoom. I remember sitting for ages waiting for a call to connect, frequent audio dropouts, and sometimes struggling to log in at all. Sure, internet connections are faster and more consistent now than they were when Skype was first conceived back in 2003, but that’s not an all-encompassing excuse for the app’s many failings.
See, Skype’s greatest victory was also a sword of Damocles hanging over its head: its 2011 purchase by Microsoft. A multi-billion dollar deal that positioned Skype to replace Windows Live Messenger (formerly known as the ever-iconic MSN), the purchase proved to be an immediate boon for Skype, as it was widely inserted into Windows devices over the following years, thus reaching a massive global audience.
Unfortunately, this deal also meant that Skype was owned by Microsoft, which is rarely a safe position to be in. Remember Zune? Yeah, me neither. The list of products and services killed off by Microsoft over the years is long and storied, and many – including myself – saw the writing on the wall long before serious external competition arrived on the scene.
Aside from a recent cameo role in the Marvel Cinematic Universe, Microsoft’s attempt to beat the iPod was a colossal failure. (Image credit: Marvel Studios)
A key issue was Microsoft’s long-running and ill-placed desire to make Teams work. I’ll be honest: as someone who was, in a previous and much worse place of employment, forced to use Microsoft Teams, I can say with conviction that it sucks. Rigid settings, feature bloat, and an inexplicable ravenous hunger for RAM make it a frequently painful piece of software to use, especially on an outdated work PC.
But Microsoft wanted – and still wants – it to be a Thing People Want To Use, which ultimately led to Skype taking a back seat as its features were gradually cannibalized to improve Teams. In fact, now that Skype has officially been taken out back with a shotgun, Microsoft is actively encouraging users to port their accounts over to Teams.
And what did Skype get in return? A drip-feed of features that nobody asked for, most of which did little to improve the core video-calling functionality. The interface became more cluttered, frequent UI redesigns left users confused, and yet there was a paradoxical feeling of stagnation; meanwhile, the meteoric rise of social-media-driven video calling across platforms such as Facebook and WhatsApp offered a far more streamlined and pleasant user experience.
Zoom has been around since 2011 (ironically, the same year Microsoft acquired Skype) but you’d be forgiven for thinking that it just popped into existence at the start of the COVID-19 pandemic. When we were collectively displaced from our offices and had to construct impromptu workspaces inside our homes, video conferencing became an everyday necessity – and as we all know, this was where the cracks in Skype’s facade really started to show.
Technical debt is never an easy hurdle to overcome, and Skype’s aging software architecture – while cutting-edge back in 2003 – gradually became a weight chained around its ankle. With Teams at the forefront, investing in updating Skype never seemed like a priority for Microsoft. The app didn’t even change over to a centralized system from its outdated peer-to-peer networking until more than half a decade after Microsoft bought it.
One of the worst blunders was Microsoft’s insistence on keeping it partially anchored to actual phone numbers (with a dial pad feature, no less) in an era when interlinked accounts are king and phones are more than just phones. It was no doubt a move intended to retain the crop of older users who were unaware of the alternatives, but the 100-user call capacity and streamlined interface of Zoom made it an easy choice for professionals who needed to keep their careers afloat while the world screeched to a halt outside.
It’s certainly not a universal truth that Microsoft ruins everything it touches (the Surface tablet line is finally good now!) but the tech giant has something of a reputation for enshittification. I’ve been following the gradual decay of Windows for years now, and looking at how Microsoft treats its most widely known product makes understanding the fall of Skype very easy.
Microsoft has finally achieved some success with its Surface tablets, but I’m quietly surprised the brand has lasted this long. (Image credit: Microsoft)
I’ve settled into a belief that Microsoft isn’t able to just let things be what they are. Everything had to be more! More features, more information, more settings, more AI! Forget what consumers actually want; the line must go up, the goalposts must keep moving, everything must be constantly changing and innovating or it’s worthless. Once you start to see Microsoft as a tech company incapable of sitting still, its successes and failures all start to make a lot more sense.
What people needed for the remote working shift during the pandemic was an effective, straightforward video conferencing tool. They didn’t find that in Skype, which had already become a bloated shell of its former self after years of ‘innovation’ at the hands of Microsoft. So I say this now, to the creators of Zoom: if it ain’t broke…
Apple Pay joins PayPal and your standard credit or debit card as forms of payment for your PS5.
As first reported by 9to5Mac, PS5 users now have a direct way to buy games in the PlayStation Store with Apple Pay. When you purchase a game on your PS5 with Apple Pay, you’ll be shown a QR code that you can scan with your iPhone or iPad to complete the transaction from there.
Previously, PS5 users had to go through the console’s browser or the PlayStation App on iOS to buy games with Apple Pay. The latest update is a simple quality of life upgrade for PS5 owners since most already have a credit card tied to their PlayStation account. However, Apple Card owners can more easily take advantage of their 2 percent cash back on Apple Pay purchases when buying PS5 games.
Besides using a traditional card on file, Apple Pay joins PayPal as an alternative payment method. The Apple Pay compatibility on the PS5 was made possible through an iOS 18 update that allows users to buy things on third-party browsers like Chrome and Firefox with a unique QR code. This change lays the groundwork for more Apple Pay implementation with other browsers and devices, including support for the PS4 in a later software update, as indicated by 9to5Mac.
You can continue to use the technology, but it will not receive any future updates or support from Intel.
Intel has discontinued support for its Deep Link suite of technologies, as confirmed by a representative on GitHub, via X user Haze. After Intel quietly stopped promoting the feature in newer products such as Battlemage, it has now confirmed that active development for Deep Link has ceased. While you still might be able to use Deep Link, Intel has clarified that there will be no future updates or official support from its customer service channels.
“Intel Deep Link is no longer actively maintained and will not be receiving future updates, meaning that there will be no changes to the features regardless of their current functionality status.”
Deep Link was introduced in late 2020. It allowed you to harness the combined power of your Intel CPU and Arc GPU to improve streaming, AI acceleration, and overall efficiency. To utilize Deep Link, you needed an Intel 11th, 12th, or 13th Generation CPU and a dedicated Arc Alchemist GPU. The suite offered four key utilities: Dynamic Power Share, Stream Assist, Hyper Encode, and Hyper Compute.
Dynamic Power Share optimized performance and power by intelligently shifting power resources between the CPU and GPU. Stream Assist improved streaming by offloading the task from the dedicated GPU to the integrated GPU. Hyper Encode accelerated video encoding using multiple Intel processors. Lastly, Hyper Compute leveraged your Intel CPU and GPU to accelerate AI workloads in OpenVINO.
These features boosted performance in apps like OBS, DaVinci Resolve, and Handbrake. The user who originated the thread at GitHub could not get Stream Assist up and running with OBS using the latest Arc B580 paired with the Core Ultra 7 265K. Following a month-long wait, a representative relayed that Intel had discontinued software development.
It turns out that even Alchemist users had a hard time getting these features working in Handbrake and OBS. It’s possible that Intel considered Deep Link a niche feature and deemed the ongoing effort and investment not worthwhile. Besides, most of these features require per-vendor validation. Development was likely dropped a while back, as Meteor Lake, an architecture that dates back to late 2023, is also not among the supported CPUs.
Game Chat footage recorded to ensure a “safe and family-friendly online environment”.
Nintendo has updated its Nintendo Account Agreement with a severe warning against “unauthorised use”, in a bid to prevent emulation and piracy.
All those with a Nintendo account will have received an email (including Eurogamer) linking to the updated policy. And, as Game File’s Stephen Totilo spotted, the wording for the Licence for Digital Products section has been altered.
The agreement for UK accounts now states digital products are “licensed only for personal and non-commercial use”, and that any “unauthorised use of a Digital Product may result in the Digital Product becoming unusable”.
This differs slightly from the US, which states: “You acknowledge that if you fail to comply with the foregoing restrictions Nintendo may render the Nintendo Account Services and/or the applicable Nintendo device permanently unusable in whole or in part.”
For comparison, here’s the original wording (effective since April 2021): “You are not allowed to lease, rent, sublicense, publish, copy, modify, adapt, translate, reverse engineer, decompile or disassemble all or any portion of the Nintendo Account Services without Nintendo’s written consent, or unless otherwise expressly permitted by applicable law.”
And here’s the UK update in full: “Any Digital Products registered to your Nintendo Account and any updates of such Digital Products are licensed only for personal and non-commercial use on a User Device. Digital Products must not be used for any other purpose. In particular, without NOE’s written consent, you must neither lease nor rent Digital Products nor sublicense, publish, copy, modify, adapt, translate, reverse engineer, decompile or disassemble any portion of Digital Products other than as expressly permitted by applicable law. Such unauthorised use of a Digital Product may result in the Digital Product becoming unusable.”
The US update is as follows: “Without limitation, you agree that you may not (a) publish, copy, modify, reverse engineer, lease, rent, decompile, disassemble, distribute, offer for sale, or create derivative works of any portion of the Nintendo Account Services; (b) bypass, modify, decrypt, defeat, tamper with, or otherwise circumvent any of the functions or protections of the Nintendo Account Services, including through the use of any hardware or software that would cause the Nintendo Account Services to operate other than in accordance with its documentation and intended use; (c) obtain, install or use any unauthorised copies of Nintendo Account Services; or (d) exploit the Nintendo Account Services in any manner other than to use them in accordance with the applicable documentation and intended use, in each case, without Nintendo’s written consent or express authorisation, or unless otherwise expressly permitted by applicable law. You acknowledge that if you fail to comply with the foregoing restrictions Nintendo may render the Nintendo Account Services and/or the applicable Nintendo device permanently unusable in whole or in part.”
The Nintendo Account Privacy Policy has also been updated ahead of the release of Switch 2. Now, Nintendo will be able to record video and voice chats stored on your console for a limited period of time – if you give consent.
This is intended for anyone who encounters “language or behaviour that may violate applicable laws”, with the company able to review the last three minutes of recorded footage. This is to ensure a “safe and family-friendly online environment”.
The update comes ahead of the Game Chat feature on Switch 2, where players can essentially video call each other during gameplay.
Back in March, Nintendo shared a legal victory over French file-sharing company Dstorage, which it stated was “significant…for the entire games industry”.
It followed a string of moves against piracy, including the shutdown of Switch emulator Yuzu and a lawsuit against a streamer who regularly played pirated copies of Nintendo games ahead of release.