What is a LUT? Here’s why lookup tables matter for video AND photo editing

Suddenly LUTs are everywhere, but exactly what is a LUT? They’ve been in video editors for years but now they’re in cameras and even photo editors

What is a LUT?

What is a LUT? It’s like an instant conversion profile that can be used to correct colors and tones but can also be used for creative effects (Image credit: Rod Lawton)

If you’re wondering what a LUT is, you’re not alone – the phrase has only entered the mainstream imaging lexicon fairly recently, but it has certainly spread like wildfire.

LUT is an abbreviation. It stands for lookup table, a rather dry and dusty definition that doesn’t even hint at what LUTs can do. And while they’re primarily used by video folks, they can also be employed in photography post-production.

They have two main uses in videography and editing/grading. One is to convert images from one color space or profile to another – for example, if you’ve shot in a Sony log profile and you need to convert that to a regular color space for editing and sharing.

The second, increasingly popular use, is to apply creative tonal and color shifts for stylized ‘looks’.


What is a LUT: Must-know #1

LUTs are not just another type of picture style or simulation. They are, very literally, conversion tables that take pixel values from the original image or video, and change them to other pixel values with various tone and color shifts.
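To make that concrete, here’s a minimal sketch (in Python, with made-up numbers) of what ‘a table of pixel values’ actually means. A real creative LUT is three-dimensional, remapping red, green and blue together, but the principle – look up the input value, read out the replacement – is exactly the same.

```python
# A toy 1D LUT that gently lifts the tones of a single 8-bit channel.
# Real creative LUTs are 3D (they remap R, G and B together), but the
# mechanism is identical: applying the LUT is just indexing a table.
import numpy as np

# One output value for each of the 256 possible 8-bit input values
# (a hypothetical gentle gamma lift here).
lut = (np.linspace(0, 1, 256) ** 0.8 * 255).astype(np.uint8)

pixels = np.array([0, 64, 128, 255], dtype=np.uint8)
graded = lut[pixels]        # the "lookup" in lookup table
print(graded)
```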

If you’re into color-managed desktop printing, you’ll be familiar with custom ‘printer profiles’ and ‘monitor profiles’. It’s a bit like that, but often used for creative purposes, not just color correction.

For this explanation I’ll stick to ‘creative’ LUTs, since this is the major trend right now. The way they work means you can shift colors and tones in any direction you like, for cross-processing effects, black-and-white filter effects and just about any other effect you can carry out with color and tone adjustments.

There are no sliders or adjustments with LUTs; they just do what they do. There are tools you can use to design your own LUTs, such as fylm.ai.

This is probably a job for color grading experts, though, because there are many subtleties involved in producing effective and attractive color and tonal shifts that can be applied across a range of images/videos.

That’s the first thing to know about LUTs. It sounds like a limitation, but a properly designed LUT can be a friend for life. Once you start trying them out you will quickly find favorites that you will want to use again and again.

Panasonic Lumix S9

LUTs are used widely in video editing and color grading, but are now appearing in some cameras, like the Panasonic Lumix S9. (Image credit: Panasonic)

What is a LUT: Must-know #2

But perhaps the key point about LUTs is that they use a standardized .cube format that can be used across multiple devices and editors. You could load a LUT that you love in your video editing software into your Panasonic Lumix S9, for example, or use the same LUT in your photo editing software.
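If you’re curious what’s inside one of those files, here’s a rough Python sketch of reading a 3D .cube LUT and applying it to a single RGB value. It assumes the commonly documented .cube layout (a LUT_3D_SIZE line followed by rows of normalized RGB values, with the red axis changing fastest) and uses nearest-neighbour lookup for simplicity, where real editors interpolate between table entries; the file name in the usage note is hypothetical.

```python
# Rough sketch of loading a 3D .cube LUT and applying it to one RGB
# value with nearest-neighbour lookup (real editors interpolate).
import numpy as np

def load_cube(path):
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or line.startswith("TITLE"):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] == "-":
                rows.append([float(v) for v in line.split()])
    # Rows are assumed to be ordered with red varying fastest.
    table = np.array(rows).reshape(size, size, size, 3)  # [b][g][r]
    return size, table

def apply_lut(rgb, size, table):
    # rgb is a float triple in 0..1; snap to the nearest grid point.
    r, g, b = (int(round(c * (size - 1))) for c in rgb)
    return table[b, g, r]

# Usage (hypothetical file name):
# size, table = load_cube("teal_orange.cube")
# print(apply_lut((0.5, 0.4, 0.3), size, table))
```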

Not all photo editors support LUTs, and some use them (incorrectly) as an effect filter mixed in with the other editing tools. It’s best to think of LUTs as a kind of pre-processing treatment ahead of your actual editing. If you work in Lightroom you’ll be familiar with Profiles, which do a similar job – though it would be nice if Lightroom used LUTs instead!

So are LUTs the future? For videographers, they are an important technical and creative tool. For stills photographers, presets and filters do the same job with a little more control – but even here, once you’ve found some favorite LUTs, there’s often no going back.

Fitbit shares three new health tracking tools with some users  

Fitbit has announced three new features for users of its wearables: Medical record navigator, Symptom check and Unusual trends. These tools are currently available for some users to test as part of the Fitbit Labs program. The company notes that only some users are currently eligible and invited to join via the app.

Fitbit is allowing some users to test three new features in its app. (Image source: Fitbit)


Fitbit has announced three new features, which are available through Fitbit Labs. This is an app-based testing program available for some users, allowing them to try out and provide feedback on upcoming functionality.

The first feature is the Medical record navigator. It is said to help users understand complex lab records, presumably like the results of a blood test. Users can upload an image or PDF of a report to the Fitbit app, with Gemini providing a summary in layman’s terms. There is also a new Symptom checker allowing users to describe how they feel. Example prompts provided in the blog post include “my head hurts” and “I feel tired”. Users may then be asked some follow-up questions, with the tool offering an explanation of their symptoms.

Finally, Unusual trends is another new health-tracking feature. It helps you to spot instances where your health is below your baseline, looking at metrics like resting heart rate, heart rate variability (HRV) and sleep. For all three of these new tools, Fitbit makes it clear that they cannot diagnose or prevent any medical conditions, encouraging users to seek medical advice if these tools flag anything out of the ordinary.


If you are eligible, you will see Fitbit Labs in the ‘You’ and ‘Today’ tabs for the Fitbit app or through a ‘Now’ card. These customers will either be able to start testing immediately or join the waiting list. It is worth noting that, by signing up, you must agree to share data for research and development purposes. If you cannot see the Fitbit Labs section in the app, then you are likely to be ineligible for the time being. However, Fitbit notes that these trials may become available to more users in the future. It remains to be seen when the new Fitbit Medical record navigator, Symptom checker and Unusual trends tools will officially launch for all users.

WHAT DOES V30 MEAN ON AN SD CARD?

Close up of a hand holding SanDisk V30 SD card

SD cards are useful for a variety of electronic devices like portable gaming consoles and DSLR cameras. While buying an SD card may seem straightforward, it’s crucial to pay attention to its specifications. Sure, storage capacity is a key factor you should consider, but it isn’t the only one. If you are going to use the SD card for recording video, for example, you should look for markings like V30 or similar on the label. But what does V30 on an SD card actually mean? 

The V in V30 stands for Video Speed Class, and the number represents the minimum continuous write speed that the SD card can maintain in MB/s. So, V30 on an SD card means it has a minimum continuous write speed of 30 MB/s. You can also find other speed classes such as V6, V10, V60, and V90. A high Video Speed Class essentially indicates that the SD card is capable of handling high-resolution video recording, including features like 360-degree capture and VR content.

Specifications like V30 are important because they allow you to determine whether the SD card is fast enough to capture video at your intended frame rate and resolution, and make it easier to compare SD cards across brands. So, what kind of video can you capture with a V30 SD card?


What can you use the V30 SD cards for?

V30 SD card next to a hard drive

V30 SD cards can be used to record Full HD video, as well as 4K at lower frame rates (less than 30 fps). They are also suitable for capturing high-resolution burst images quickly. These features make V30 SD cards ideal for DSLR cameras, drones, action cameras, and other similar devices. If you use a card with a lower speed class than V30 for 4K recording, you could experience dropped frames or recording errors. If you want to record 4K at higher frame rates or shoot 8K footage, you’d need at least a V60 SD card.
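If you want to sanity-check a card against your camera’s bitrate yourself, the arithmetic is simple: bitrates are usually quoted in megabits per second, while V-classes are in megabytes per second, so divide by eight. Here’s a small Python sketch, using a hypothetical 150 Mbps recording mode as the example.

```python
# Back-of-envelope check: which Video Speed Class is fast enough for a
# given bitrate? Mbps (megabits/s) -> MB/s (megabytes/s) is a divide by 8.
def min_v_class_needed(bitrate_mbps):
    required_mb_s = bitrate_mbps / 8          # Mbps -> MB/s
    for v in (6, 10, 30, 60, 90):             # V6, V10, V30, V60, V90
        if v >= required_mb_s:
            return f"V{v} (needs {required_mb_s:.1f} MB/s sustained)"
    return "faster than V90 required"

# Hypothetical example: 4K footage recorded at 150 Mbps -> V30 is enough.
print(min_v_class_needed(150))
```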

While the V30 indicates a minimum write speed of 30 MB/s, the actual performance of an SD card can be higher, so make sure you check the SD card’s detailed specifications on the product page or packaging. Also, note that the V30 only reflects write speeds of the SD card. Read speeds can be just as important. Faster read speeds will help you review your photos and videos quickly. 


When buying an SD card, you also need to consider the capabilities of the device you’ll be using it in. For instance, you could buy a V30 SD card, but if your camera doesn’t support high write speeds, you won’t be able to benefit fully from the card’s capabilities. Thankfully, manufacturers usually list the recommended write speed for their products on their website, so be sure to check those specifications.

WHAT DOES ‘SD’ STAND FOR ON AN SD CARD?

An assortment of SD cards

If you’ve spent a lot of time doing digital photography, or if you’ve owned a lot of Android devices, you’re likely familiar with the humble yet mighty SD card. Across multiple specifications (SD, SDHC, SDXC, and SDUC, each available in full-size and micro versions), it’s a tried and true storage format. These days, an SD card can hold upwards of one terabyte of data. For mobile photographers and videographers, their slim dimensions make it easy to carry a bunch of them and hot swap as needed. For those who own one of the vanishingly few Android phones with a MicroSD slot, they’re a convenient way to massively increase the storage capacity of those devices. But how did the SD card format come to be, and what does ‘SD’ mean? Hint: it doesn’t stand for SanDisk.

In fact, SD stands for “Secure Digital,” and these little memory cards were originally designed not for photos and videos, but for music. Back in 1999, Toshiba, SanDisk, and Panasonic joined forces to create a new memory standard that could rival Sony’s Memory Stick (more on that later). There was another motive at play in the background, too. The music industry was fighting a losing battle against digital piracy, and major labels were desperately searching for a way to stem the tide. 

The Secure Digital name was deliberately chosen in part because SD cards worked with the Secure Digital Music Initiative, the music industry’s effort to find ways of digitally distributing music that couldn’t be easily shared online. But by the early 2000s, SDMI had gone the way of the dodo. Though DRM compatibility remained a part of the spec, SD cards never became the future of music distribution, instead becoming a staple of simple storage solutions.


SD stands for Secure Digital, but it may have another meaning

An assortment of SD cards

The ‘SD’ stands for Secure Digital, but it originally stood for something entirely different. If you examine the SD logo stamped on an SD card or card reader, you may notice that the ‘D’ is shaped like a circular disc. Some printings of the logo even have visual accents on that letter to make it appear more like a CD or DVD. To state the obvious, nothing about an SD card is at all disc-like, so what gives?

It has been theorized that the SD card logo was originally intended for another Toshiba-related technology that never made it to market. In 1995, Toshiba showed off its intended SD-ROM discs, which were meant to compete with the burgeoning DVD format being developed around the same time. The logo we now see on SD cards was plastered all over the press release.

In this case, ‘SD’ stood for Super Density. Since optical discs increase their storage capacity by packing their microscopic data tracks closer together, Super Density was an apt description. However, SD-ROM never came to market, leaving Toshiba with the logo. When the company got involved in the development of the SD card a few years later, we can surmise that it would have seemed like a perfect opportunity to finally put that logo to use. With such a long history, old SD cards are still useful, but they were never a DVD competitor.


SD won the format war against Sony’s Memory Stick

An assortment of SD cards

As mentioned near the top of this article, SD cards were in large part a response to Sony’s Memory Stick format. Sony has had a long history of trying to popularize its proprietary media formats, and a long track record of losses. If you’re old enough to sigh when you sit down in chairs, you’ll probably remember the wars that raged between Toshiba’s HD DVD and the now ubiquitous Blu-ray format created by Sony. That battle went in Sony’s favor largely thanks to the PlayStation 3, since if you owned that console, you also owned a Blu-ray player. But there are far more discontinued Sony formats than there are popular ones. Betamax, MiniDisc, and DAT have all been consigned to history’s waste bin.

Memory Stick survived longer than most Sony formats, again thanks to a hardware advantage. It’s well known that Sony makes some of the best cameras on the market, and for a very long time, the company insisted on exclusively using Memory Stick. But unlike with gaming consoles, it was easy to simply buy a Canon or Nikon camera if you didn’t like the Memory Stick. And a lot of people did not like Memory Stick. It was expensive, proprietary, and not widely supported. By 2003, SD cards had surpassed it in popularity, and the trend never reversed.


It wasn’t until 2010 that Sony tacitly admitted defeat by releasing new products with support for both SD cards and Memory Stick. That’s probably for the best. As much as consumers can initially benefit from competition, there eventually needs to be a single, unified standard that they can use.

Nvidia DLSS vs AMD FSR: which graphics upscaling technology is better?

AMD and Nvidia go head to head, but only one team has AI on its side

The Nvidia and AMD logos clashing with lightning bolts around them.

(Image credit: Shutterstock, AMD, Nvidia)

Upscaling technology has become the new battleground for GPU makers, with both AMD and Nvidia (oh, and Intel) offering their own options for improving your framerates when playing PC games.

For the uninitiated, upscaling technology works by making your graphics card render a game at a lower resolution (1080p, in most cases) before scaling it up – hence the name – to a higher target resolution with no loss of framerate. This lets you get a smoother gameplay experience at 1440p, 4K, and even 8K.
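The saving comes from how many pixels the GPU actually has to shade. As a simplified illustration (ignoring the upscaler’s own processing cost), here’s the pixel math in a short Python sketch.

```python
# Simplified illustration of why upscaling helps: the GPU only shades
# the pixels of the lower "render" resolution, and the upscaler fills
# in the target resolution. Real-world gains depend on the game and
# the upscaler's own cost, so treat these numbers as a sketch.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

render = resolutions["1080p"]
for name, target in resolutions.items():
    ratio = (target[0] * target[1]) / (render[0] * render[1])
    print(f"Rendering at 1080p and upscaling to {name}: "
          f"{ratio:.1f}x fewer pixels shaded than native")
```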

Nvidia’s Deep Learning Super Sampling (DLSS), as the name implies, uses deep-learning AI functionality to provide a highly effective upscaling solution. Over on the red side, AMD’s FidelityFX Super Resolution (FSR) notably didn’t use AI until its latest version, but ultimately provides the same service.

If you’re buying a new graphics card, then you’ll want to think carefully about which upscaling tech you’ll be using in today’s demanding games.

Nvidia has made strides with DLSS (Deep Learning Super Sampling) since the introduction of the RTX 20 series, and AMD has backed FSR for over four years. They’ve both come a long way since their introduction, but which is better out of DLSS vs FSR?

That’s what we’re here to find out. It’s worth noting that both graphics upscalers are now in their fourth iterations, DLSS 4 and FSR 4. A lot has changed with the best graphics cards over the last half-decade, and we’ve seen AMD switch gears in its attempt to be more competitive against its rival.

The two upscalers (previously) worked very differently, which meant a vast gap in compatibility and software support, something that’s been narrowed over the last few months.

We’re comparing DLSS vs FSR based on the performance, software compatibility, and the quality of upscaling to help you decide which is the right fit for use with your GPU.

Our considerations have been made with developments like AI-powered Frame Generation tech as well, so you can go for higher framerates than ever before, provided you’ve got one of the best gaming monitors to make it worthwhile at 1440p, 4K, and even 8K.


Nvidia DLSS vs AMD FSR: Performance

DLSS 4 framerate comparisons

(Image credit: Nvidia)

The biggest deciding factor when choosing between Nvidia DLSS and AMD FSR comes down to their respective performance, and it’s something that has changed massively over the last few years.

When directly compared only a year or so ago, it would have been a night-and-day contest, with Nvidia’s AI-powered upscaling tech (largely) coming out on top, but that’s not necessarily true right now, thanks to the improvements made with AMD FSR 4.

To outline the differences in performance, we first need to know how the two graphics upscalers work. In brief, Nvidia DLSS utilizes the Tensor Cores (AI) of RTX graphics cards in tandem with the GPU’s CUDA cores, using purpose-built algorithms that render the image below native resolution and then blow it back up to a target resolution, enabling higher framerates (and better performance) than pure native rendering.

Until the release of FSR 4, AMD’s upscaling tech was open-source, driver-based software that used sampling algorithms to downsample and then blow the image up to a target resolution. However, with AMD FSR 4, Team Red has embraced Machine Learning tech exclusively with its RDNA 4 GPU line to produce a better product that’s more on par with what Team Green is doing.

The new development for DLSS with the previous two GPU generations has been Frame Generation and Multi Frame Generation, which have been exclusive to the RTX 40 series and RTX 50 series, respectively.

This technology uses AI to generate frames that are interpolated with natively rendered ones for a higher framerate than native, even when compared to the boost afforded by down-sampling and then upscaling. AMD’s previous version of this tech, Fluid Motion Frames (AFMF), was a core part of FSR 3 as a driver-based solution, but it’s since been replaced by an AI-powered solution (loosely) on par with what was possible with DLSS 3.
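To picture what frame generation does to the number you see in an FPS counter, here’s a deliberately simplified Python sketch: every rendered frame is followed by one or more generated frames, multiplying the displayed framerate. The figures are hypothetical and ignore the real-world overhead and latency cost of generating those frames.

```python
# Simplified model of frame generation: for every rendered frame, the
# GPU inserts one or more AI-generated frames in between. Hypothetical
# numbers; real results carry generation overhead and added latency.
def displayed_fps(rendered_fps, generated_per_rendered):
    return rendered_fps * (1 + generated_per_rendered)

base = 60  # frames the GPU actually renders per second (example value)
print(displayed_fps(base, 1))  # 2x mode (single frame generation) -> 120
print(displayed_fps(base, 3))  # 4x mode (multi frame generation)  -> 240
```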

With the context out of the way, we can state that DLSS 4 beats out FSR 4 in terms of producing higher framerates thanks to MFG, which AMD does not have yet, but the image quality results can be incredibly similar.

AMD has massively stepped up its game with how faithfully the upscaled gameplay can look under the right circumstances, without the need for excessive sharpening and heavy noise reduction of its previous versions.

However, FSR 4 is currently exclusive to only two GPUs on the market, the RX 9070 and RX 9070 XT, whereas DLSS 3 and DLSS 4 can be utilized by anyone running the respective RTX 40 series and RTX 50 series.

This is only when taking FSR 4 at face value, however, as FSR 3 (and older versions) remain open source and driver-based in how they work and how they can be implemented into modern games.

AMD’s result may not be as strong as what Nvidia’s doing, and its latest efforts may be limited, but it’s worth considering this as a plus all the same. Nvidia still takes the win for this round, but things could change if Team Red continues to innovate instead of just playing catch-up.

  • Winner: Nvidia DLSS


Nvidia DLSS vs AMD FSR: Compatibility

AMD FSR 4 running on Space Marine 2

(Image credit: AMD)

Compatibility with Nvidia DLSS vs AMD FSR is initially incredibly one-sided. As touched upon above, AMD FSR (except FSR 4) is completely open-source and driver-based, and can be used on many different generations of not only Team Red’s hardware, but even Intel’s and Nvidia’s as well.

FSR 3 is officially supported by the RX 5000 series and up, which were released more than six years ago, whereas Nvidia DLSS only works on the RTX 20 series and up, with the new versions getting more exclusive with each new GPU generation launch.

With that said, new iterations of Nvidia DLSS do not always lock away all the pivotal features. For example, while DLSS 3 is commonly thought to only be for the RTX 40 series, that only applies to the Frame Generation tech, and not some of its other features, such as Ray Reconstruction from DLSS 3.5 and DLSS 3.7’s new Streamline SDK presets, which can be utilized by the RTX 20 series and RTX 30 series as well.

As previously mentioned, FSR 4 goes all in on Machine Learning and forgoes the open-source, wide-ranging compatibility of its predecessors in favor of delivering a higher-quality product, but we still have to give AMD the win in this respect for everything that can be done with older versions of the software.

  • Winner: AMD FSR


Nvidia DLSS vs AMD FSR: Game support

Nvidia DLSS 4 is said to support over 100 games and counting.

(Image credit: Nvidia)

DLSS and FSR quality and compatibility wouldn’t matter if games didn’t utilize the software, but that’s (thankfully) not the case. Nvidia claims that over 760 games now support its “RTX” technology, taking all versions into account since its launch back in 2018.

With that said, only around 13% of this total amount is said to use the latest version of the graphics upscaler, as Nvidia confirms over 100 games have (or will have) DLSS 4 support for Multi Frame Generation.

While that list no doubt includes many of the best PC games on the market, just shy of 800 supported titles is still a far cry from the tens of thousands of releases on the PC platform currently.

With that said, Nvidia is still running rings around AMD when you take the adoption figures of its upscaling tech into account. It’s believed that there are around 250 games that support FSR in all its different versions, with around 120 of that figure utilizing FSR 3 and about 40 now using the AI-based FSR 4, according to AMD.

Now, it’s still very much early days for both DLSS 4 and FSR 4, which were launched at the end of 2024 and beginning of 2025, respectively, so we’re expecting these figures to increase dramatically over the next few months (and years) as more developers take advantage of the tech.

We’ve gotten to the point where it’s common for a new AAA PC game to natively support DLSS, FSR, and XeSS out of the box, or to add them shortly after launch; it’s now considered strange for a new release to forgo upscaling entirely.

Weighing DLSS vs FSR in terms of game support, Nvidia still wins out comfortably, but the tide could change if FSR 4 takes off as more RDNA 4 GPUs come out.

  • Winner: Nvidia DLSS

Nvidia DLSS vs AMD FSR: Which is best?

Delivering a verdict on which is better out of Nvidia DLSS vs AMD FSR isn’t as cut and dried as the individual rounds above might suggest; it’s nuanced, and depends on the kind of hardware you have access to in the first place.

For example, if you’re using a cheap graphics card (or an older GPU), then you’re going to benefit more from previous versions of FSR to get your games into the playable 60fps range at 1080p and 1440p.

However, a cutting-edge RTX 5080 or RTX 5090 trying to achieve 120fps in 4K (and even 8K) will need the processing power of DLSS 4’s Multi Frame Generation, and we’ve seen incredible things gaming at 8K.

Nvidia DLSS is exclusive to Team Green’s hardware, whereas AMD FSR can be used not only on Team Red graphics cards, but on its competitors’ as well. It’s going to depend on your hardware; if you’re running an RTX 40 series card then you’ll want to enable Frame Generation, and the RTX 50 series’ MFG will add a further shot in the arm.

Because of this, we can’t definitively say one is wholesale better than the other, but we encourage you to try them out in supported games if your hardware allows it. Whichever one gives you the best FPS boost and the better picture quality is the one to enable.

  • Winner: Tie

Nvidia DLSS vs AMD FSR: FAQs

Can I use both FSR and DLSS?

For the most part, yes, you can use both FSR and DLSS with your modern graphics card, provided you’re not trying to run FSR 4, which is exclusive to two AMD RDNA 4 GPUs right now.

Is DLSS 4 better than FSR 4?

While DLSS 4 and FSR 4 deliver very comparable results in image quality with their four respective presets, Nvidia’s AI-upscaling tech wins out with Multi Frame Generation, allowing for four times the native performance you would normally get, whereas AMD’s frame generation is far less powerful right now.

What is a CUDA core? The Nvidia GPU technology explained

What a CUDA core is, what it does, and why it’s important

Nvidia Blackwell GPU

(Image credit: Nvidia)

Whether you’re running one of the best graphics cards made by Nvidia or any entry-level model from several years ago, it’ll be backed with CUDA cores. Not to be confused with Tensor Cores (AI cores), which power the likes of DLSS and Machine Learning, we’re going over everything there is to know about CUDA cores, including how they work, their history, and how they’re utilized.

CUDA cores play an essential role in powering the graphics tech behind some of the best PC games and enabling data science workloads, as well as general computing, in addition to graphics rendering. We’re explaining how it all works and why it’s important further down the page.

What is a CUDA Core?

To understand CUDA cores, we first need to understand Compute Unified Device Architecture (CUDA) as a platform. Developed by Nvidia nearly 20 years ago, it’s a parallel computing platform with purpose-built APIs (Application Programming Interfaces) that lets developers access compilers and tools to run hardware-accelerated programs.

Supported programming languages for CUDA include C, C++, Fortran, Python, and Julia, with supported APIs including not only Direct3D and OpenGL, but specific frameworks such as OpenMP, OpenACC, and OpenCL. CUDA provides both low-level and higher-level APIs on its platform, with an ever-expanding list of libraries for generalized computing, which were previously only thought to be achieved through your computer’s processor.

A CUDA core is a SIMD (Single Instruction, Multiple Data) processing unit found inside your Nvidia graphics card that handles parallel computing tasks; with more CUDA cores comes the ability to do more with your graphics card. The number of CUDA cores in today’s GPUs has steadily increased over the last 10 years, with top-end performers such as the RTX 5090 featuring 21,760 of them and the RTX 4090 using 16,384.

These two enthusiast-class graphics cards may be (primarily) marketed on their 4K and 8K gaming performance, but they’re also aimed at tasks such as data science, video processing, encoding, rendering, and AI model training.

Nvidia Blackwell die

(Image credit: Nvidia)

History of CUDA Cores

Nvidia first created CUDA in 2006, with the first commercially available graphics cards to utilize the technology being the eighth generation of the GeForce lineup, led by the 8800 GTX (featuring 128 CUDA cores).

Using CUDA and the APIs built on the platform, this GPU was significantly faster at general-purpose computing beyond traditional graphics rendering, which was the sole purpose of video cards back in the day.

Every Nvidia graphics card released afterwards, including the GeForce 500 series, GeForce 600 series, GeForce 700 series, and GeForce 900 series, was built to support CUDA.

Around this time, we saw graphics cards begin to be fully marketed around their CUDA-capable prowess for advanced computing, such as with the Nvidia GTX Titan in 2013, which featured 2,688 CUDA cores and 6GB GDDR5 memory at a time when its contemporaries (like the GTX 770 and GTX 780) lagged significantly.

Fast-forwarding to today, thousands of applications have been developed with CUDA, and all graphics cards from Nvidia natively support the platform, whether they’re gaming GPUs (like the RTX 5070 and RTX 5080) or high-end Quadro ones made expressly for developers and data servers.

The CUDA Toolkit has been steadily upgraded since its launch in 2007 and is currently in its 12th iteration, which is primarily aimed at the company’s H100 and A100 GPUs, with new APIs and tools specific to data center platforms.

Nvidia CUDA-Q

(Image credit: Nvidia)

How do CUDA Cores work?

CUDA cores work similarly to how CPU cores work on a desktop or laptop processor; they’re built to process large amounts of data simultaneously with a technique called SIMT (Single Instruction, Multiple Threads). In essence, this means a large number of cores all working on an identical process at the same time.

Whereas some of the best processors on the market (like the AMD Ryzen 9 9950X3D) may feature 16 processing cores, the average GPU now features around 3,000 processing cores, making hardware-based (GPU-accelerated) tasks, such as video editing, 3D rendering, gaming, and simulation, easier and faster to do.

Whereas a CPU core has lower latency and is good for serial processing, a CUDA core has higher throughput and breaks down the processes into smaller tasks through parallel processing.

As the name suggests, many thousands of CUDA cores built into your GPU execute the same instruction stream while working through their sub-tasks independently. CUDA cores are, therefore, highly specialized for specific tasks compared to a CPU’s more generalized approach.
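If you want to see that ‘many threads, one instruction stream’ model in action, here’s a minimal sketch using Python and the Numba library – our choice, since any CUDA toolchain would do – that launches thousands of GPU threads, each applying the same operation to a different array element. It assumes an Nvidia GPU plus the numba and numpy packages are available.

```python
# Minimal SIMT sketch with Numba's CUDA support: every thread runs the
# same kernel on a different element of the arrays.
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_add(a, b, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:              # guard against the excess threads
        out[i] = 2.0 * a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scale_and_add[blocks, threads_per_block](a, b, out)  # launch thousands of threads
print(out[:4], 2 * a[:4] + b[:4])                    # results should match
```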


How are CUDA Cores utilized for gaming and workloads?

Considering that CUDA cores are parallel processing units that excel at large and intensive operations, having more of them can make your gaming experience smoother and faster.

They handle advanced calculations such as lighting, shading, physics, rasterization, pixel calculations, anti-aliasing, frame rate optimization, texture mapping, and more. With parallel computing, these intensive tasks can be broken down into smaller jobs that the CUDA cores work through all at once.

For more advanced computing processing, CUDA cores can do things such as high-level data processing, scientific simulations, and mathematical operations, because of how a CUDA core executes a floating point and integer operation concurrently.

CUDA as a platform has been praised for its C/C++ interface, ease of use, large ecosystem, libraries, and existing programming models, and there are nearly 20 years of hardware development behind it. Everything from image processing to deep learning and other forms of computational science can be achieved with the platform, after all.

AMD RDNA 4 die

(Image credit: AMD)

Do AMD graphics cards use CUDA cores?

CUDA is an Nvidia-developed platform, and CUDA cores are the company’s term for its GPU cores. AMD uses entirely different Stream Processors for its GPU cores, and the two do not equate directly to one another.

To boil things down to the most basic comparison, both CUDA cores and Stream Processors are essentially just shaders (or Unified Shader Units), which are capable of parallel computing tasks such as shading.

Shocked that Skype lost the battle against Zoom? I knew it was doomed all the way back in 2011, and here’s why

A bag-fumbling of galactic proportions

A hand 'throwing' the Skype logo out of a car window.

(Image credit: Getty Images / Chev Wilkinson)

Well, there goes Skype. Bye-bye, you garbage piece of software. I’m surprised you managed to hang around for as long as you did, frankly.

Okay, I’m being a bit mean here; the impact of Skype on the global tech ecosystem shouldn’t be downplayed, as it effectively brought video communication to the mainstream – something that previously was the domain of corporate execs with money to burn on expensive early video-conferencing solutions. For a wonderful, all-too-brief period in the early 2010s, Skype was everywhere: a way to chat face-to-face with distant relatives or schoolmates who were just beyond the reach of an after-class bike ride.

But I can’t pretend Skype was all sunshine and rainbows, even before the pandemic lockdowns and the rise of its chief competitor, Zoom. I remember sitting for ages waiting for a call to connect, frequent audio dropouts, and sometimes struggling to log in at all. Sure, internet connections are faster and more consistent now than they were when Skype was first conceived back in 2003, but that’s not an all-encompassing excuse for the app’s many failings.


The Microsoft problem

See, Skype’s greatest victory was also a sword of Damocles hanging over its head: its 2011 purchase by Microsoft. A multi-billion dollar deal that positioned Skype to replace Windows Live Messenger (formerly known as the ever-iconic MSN), the purchase proved to be an immediate boon for Skype, as it was widely inserted into Windows devices over the following years, thus reaching a massive global audience.

Unfortunately, this deal also meant that Skype was owned by Microsoft, which is rarely a safe position to be in. Remember Zune? Yeah, me neither. The list of products and services killed off by Microsoft over the years is long and storied, and many – including myself – saw the writing on the wall long before serious external competition arrived on the scene.

Star Lord reaching for his Zune music player in Guardians of the Galaxy Vol.3

Aside from a recent cameo role in the Marvel Cinematic Universe, Microsoft’s attempt to beat the iPod was a colossal failure. (Image credit: Marvel Studios)

A key issue was Microsoft’s long-running and ill-placed desire to make Teams work. I’ll be honest: as someone who was, in a previous and much worse place of employment, forced to use Microsoft Teams, I can say with conviction that it sucks. Rigid settings, feature bloat, and an inexplicable ravenous hunger for RAM make it a frequently painful piece of software to use, especially on an outdated work PC.

But Microsoft wanted – and still wants – it to be a Thing People Want To Use, which ultimately led to Skype taking a back seat as its features were gradually cannibalized to improve Teams. In fact, now that Skype has officially been taken out back with a shotgun, Microsoft is actively encouraging users to port their accounts over to Teams.

And what did Skype get in return? A drip-feed of features that nobody asked for, most of which did little to improve the core video-calling functionality. The interface became more cluttered, frequent UI redesigns left users confused, and yet there was a paradoxical feeling of stagnation; meanwhile, the meteoric rise of social-media-driven video calling across platforms such as Facebook and WhatsApp offered a far more streamlined and pleasant user experience.


Impacts of the pandemic

Zoom has been around since 2011 (ironically, the same year Microsoft acquired Skype) but you’d be forgiven for thinking that it just popped into existence at the start of the COVID-19 pandemic. When we were collectively displaced from our offices and had to construct impromptu workspaces inside our homes, video conferencing became an everyday necessity – and as we all know, this was where the cracks in Skype’s facade really started to show.

Technical debt is never an easy hurdle to overcome, and Skype’s aging software architecture – while cutting-edge back in 2003 – gradually became a weight chained around its ankle. With Teams at the forefront, investing in updating Skype never seemed like a priority for Microsoft. The app didn’t even change over to a centralized system from its outdated peer-to-peer networking until more than half a decade after Microsoft bought it.

One of the worst blunders was Microsoft’s insistence on keeping it partially anchored to actual phone numbers (with a dial pad feature, no less) in an era when interlinked accounts are king and phones are more than just phones. It was no doubt a move intended to retain the crop of older users who were unaware of the alternatives, but the 100-user call capacity and streamlined interface of Zoom made it an easy choice for professionals who needed to keep their careers afloat while the world screeched to a halt outside.


Long live Zoom

It’s certainly not a universal truth that Microsoft ruins everything it touches – the Surface tablet line is finally good now! – but the tech giant has something of a reputation for enshittification. I’ve been following the gradual decay of Windows for years now, and looking at how Microsoft treats its most widely known product makes understanding the fall of Skype very easy.

The Surface Pro 11th Edition

Microsoft has finally achieved some success with its Surface tablets, but I’m quietly surprised the brand has lasted this long. (Image credit: Microsoft)

I’ve settled into a belief that Microsoft isn’t able to just let things be what they are. Everything had to be more! More features, more information, more settings, more AI! Forget what consumers actually want; the line must go up, the goalposts must keep moving, everything must be constantly changing and innovating or it’s worthless. Once you start to see Microsoft as a tech company incapable of sitting still, its successes and failures all start to make a lot more sense.

What people needed for the remote working shift during the pandemic was an effective, straightforward video conferencing tool. They didn’t find that in Skype, which had already become a bloated shell of its former self after years of ‘innovation’ at the hands of Microsoft. So I say this now, to the creators of Zoom: if it ain’t broke…

Intel stealthily pulls the plug on Deep Link less than 5 years after launch

You can continue to use the technology, but it will not receive any future updates or support from Intel.

Intel Deep Link

(Image credit: Intel)

Intel has discontinued support for its Deep Link suite of technologies, as confirmed by a representative on GitHub, via X user Haze. After Intel quietly stopped promoting the feature in newer products such as Battlemage, it has now confirmed that active development for Deep Link has ceased. While you still might be able to use Deep Link, Intel has clarified that there will be no future updates or official support from its customer service channels.

“Intel Deep Link is no longer actively maintained and will not be receiving future updates, meaning that there will be no changes to the features regardless of their current functionality status.”

from X

Deep Link was introduced in late 2020. It allowed you to harness the combined power of your Intel CPU and Arc GPU to improve streaming, AI acceleration, and overall efficiency. To utilize Deep Link, you needed an Intel 11th, 12th, or 13th Generation CPU and a dedicated Arc Alchemist GPU. The suite offered four key utilities: Dynamic Power Share, Stream Assist, Hyper Encode, and Hyper Compute.


Dynamic Power Share optimized performance and power by intelligently shifting power resources between the CPU and GPU. Stream Assist improved streaming by offloading the task from the dedicated GPU to the integrated GPU. Hyper Encode accelerated video encoding using multiple Intel processors. Lastly, Hyper Compute leveraged your Intel CPU and GPU to accelerate AI workloads in OpenVINO.


These features boosted performance in apps like OBS, DaVinci Resolve, and Handbrake. The user who originated the thread at GitHub could not get Stream Assist up and running with OBS using the latest Arc B580 paired with the Core Ultra 7 265K. Following a month-long wait, a representative relayed that Intel had discontinued software development.


It turns out that even Alchemist users had a hard time getting these features working in Handbrake and OBS. It’s possible that Intel considered Deep Link a niche feature and deemed the ongoing effort and investment not worthwhile. Besides, most of these features require per-vendor validation. Development was likely dropped a while back, as Meteor Lake, an architecture that dates back to late 2023, is also not among the supported CPUs.

WHAT DOES AIO STAND FOR? THE MEANING BEHIND THE ABBREVIATION EXPLAINED

Apple iMac on desk

If you’ve been in the market for a new computer, there’s a good chance you’ve seen the term AIO thrown around. It can be difficult to assess what the term means, since it is extremely context-dependent. You may have encountered it when shopping for multiple different types of computers. It’s especially prevalent in the world of custom-built PCs, but you may also hear it when shopping for a prebuilt system.

As it turns out, AIO, or “all-in-one,” is a fairly flexible term that can refer to anything that integrates multiple parts that would usually be separate into a single product. However, with that in mind, there are two types of products you’ll hear referred to as AIO: liquid PC coolers and integrated desktops. These uses of AIO should be disambiguated from other terms we won’t discuss in this article like AI/O (asynchronous input/output) and artificial intelligence optimization, or various medical terms (adhesive intestinal obstruction, sounds awful  — new fear unlocked). In this article, we’ll break down the most common uses of the term as it relates to consumer computer hardware. Here’s what you need to know.


AIO can refer to all-in-one desktop PCs

All-in-one Desktop PC

AIO is an abbreviation that usually means “all-in-one,” but it has multiple applications in the realm of computers. One common meaning is a desktop computer that contains all of its components within a single unit that includes a display. The most straightforward example of an all-in-one desktop is the Apple iMac, which has been a staple of school computer labs, households, and offices for decades. Rather than the computer and display monitor being two separate units, the iMac integrates its motherboard, processor, RAM, storage, and more into the display itself. All a user needs to do is connect a mouse and keyboard. Other AIO desktops include the Microsoft Surface Studio, which folds down to become a massive desktop tablet for artists, and the Lenovo 24 All-In-One, which is closer to the sort of thing you might see on a receptionist’s desk at a doctor’s office.

In addition to being convenient, especially for users who aren’t very comfortable with or knowledgeable about computers, all-in-one desktops have two more benefits. First, they’re often relatively cheap, with many priced at under $1,000. Bulk deals make them even more affordable, which goes a long way to explain their popularity in offices and schools. Second, they’re space savers, eliminating the need to make room for a big desktop tower while also cutting down on cable management.


AIO can also refer to PC liquid coolers

custom gaming pc with AIO cooler installed

Another common use of the AIO abbreviation appears in the realm of PC components, where it can refer to an all-in-one cooler. Water cooling has become increasingly popular among custom PC enthusiasts, as it provides more effective cooling for CPUs during intense workloads compared to traditional air coolers that use heat pipes and fans. However, cooling with a custom loop is rather complicated, requiring multiple components including a radiator, pump, reservoir, and water block. In addition to the complexity of assembling such a system, you risk spilling the cooling liquid into your system if you’re not careful, and you need to maintain it by changing out the liquid every so often.

That’s where AIO coolers come in. They have all the necessary components of a cooling loop, but are contained in a single closed ecosystem within a single unit. An AIO cooler has a pump with a cold plate that mounts to the CPU. This is connected by tubes to a radiator with fans that mounts (in most instances) on the top or back of the PC case. Unlike cooling loops, AIO coolers come pre-filled with water-based coolant, so you never have to worry about spills. You also don’t need to refill AIO coolers. The liquid inside lasts for the lifetime of the AIO. Cooling loops can be more fun, with some PC enthusiasts using them to add flair to their PCs. Lately, though, more AIOs come with RGB effects and other aesthetic touches. When trying to choose the right cooling kit for your PC, these factors make AIO coolers the best choice for people who aren’t experienced PC builders.

WHAT’S THE DIFFERENCE BETWEEN USB-C AND USB4?

Close up of a USB-C connector

If you, like most people, have a drawer full of USB cables for all your different devices, then you know how quickly those cables can get out of hand. However you organize your cables, keeping track of the different types can require a course in tech jargon just to make sense of it all. Two of the terms you’ve likely come across are USB-C and USB4, especially if you’ve been out shopping for new tech recently. Most of us are at least somewhat familiar with USB-C and what it can do, but if we throw USB4 in the mix, things can start to get confusing, especially since they share the same connector. 

While you’ll often see USB-C and USB4 listed side by side on packaging and spec sheets, they’re not interchangeable. USB4 refers to the technology standard, while USB-C describes the physical shape of the connector. In other words, when we talk about USB-C, we’re referring to the design of the connector itself: the small, reversible plug that you see at the end of the cables you use for everything from charging your phone to connecting accessories like external drives and monitors. However, just because two cables share the same connector doesn’t mean they deliver the same performance; that’s where USB4 comes in. USB4 is a technology standard that uses the USB-C connector to deliver faster speeds, better power delivery, and more advanced features. 


USB-C is just the connector, but performance varies widely

Red USB-C connector next to smartphone

Whether you’re charging the brand new MacBook Air M4 or the Samsung Galaxy S25 Ultra, you’re plugging your device into a USB-C port. Over the past few years, the compact, rectangular-shaped reversible connector has become the standard across modern tech, replacing older USB-A and USB-B ports. From laptops and tablets to smartphones and wireless earbuds, if you bought a new device in the last couple of years, there’s a good chance it has a USB-C port. And that’s what makes the USB-C connector so special: it has a universal port shape that’s easy to plug in and supports a wide variety of devices and functions, including charging, data transfer, and even video output.

As convenient as this universal connector is, there’s a catch: USB-C only describes the shape of the connector; it says nothing about its speed, charging capability, or features. For example, that USB-C cable you’re using to charge your device may only support slow USB 2.0 speeds, or it may support much faster USB 3.2 or USB4 speeds; it all depends on the cable’s specifications and the standards it supports. 

Some USB-C cables are even charge-only, meaning they don’t support data transfer at all. In other words, the USB-C cable you’re using could be limited to basic power delivery or support high-wattage charging for laptops. It’s this variation that makes USB-C so confusing for many people. You can be staring at two seemingly identical cables, and one charges your device rapidly while the other moves at a snail’s pace and doesn’t even support data transfer.


How USB4 improves performance over older USB-C standards

USB-C cable resting on top of black smartphone

While USB-C describes the physical connector, USB4 refers to the technology standard that uses it. Introduced in 2019, USB4 is a major upgrade over previous USB technology, offering faster speeds, better power delivery, and built-in support for Thunderbolt features. That said, just because a device uses a USB-C connector doesn’t guarantee it supports USB4; you’ll need to check the specs carefully. If you’re using a USB-C connection that supports USB4, you’ll get data transfer rates of up to 40 Gbps, which is four times what USB 3.2 Gen 2 can provide. That means you’ll be able to move your large files a lot faster, stream high-resolution video, or connect multiple monitors through a single port and not worry about lag.
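To put those numbers in perspective, here’s a rough back-of-the-envelope comparison (in Python) of how long a 10GB file transfer would take at the headline speed of each standard. Real-world throughput is always lower than these theoretical maximums, so treat the results as illustrative.

```python
# Rough transfer-time comparison for a 10 GB file at each standard's
# headline speed. Gigabytes -> gigabits (x8), then divide by Gbps.
file_gb = 10
speeds_gbps = {"USB 2.0": 0.48, "USB 3.2 Gen 2": 10,
               "USB 3.2 Gen 2x2": 20, "USB4": 40}

for name, gbps in speeds_gbps.items():
    seconds = (file_gb * 8) / gbps
    print(f"{name:>16}: ~{seconds:.0f} s")
```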

USB4’s capabilities go beyond just delivering faster speeds; it also improves power delivery, supporting up to 100 watts, which is something you’ll find especially useful if you have large devices like laptops that need to be charged. Another important feature is its compatibility with Thunderbolt 3 or later, which makes it possible to use high-speed accessories like external monitors, docks, and storage drives through the same USB-C port. Getting to know the difference between Thunderbolt and USB-C can be helpful in making sure you get the right cable or accessory for your device, as well.