Intel's MESO Transistor Project Could See Results in Two to Five Years

Late last year, Intel announced that they were working on a new type of transistor that could offer a massive performance leap over current CMOS chips. "MESO" transistors, as they call them, could operate at voltages as low as 100mV, but at the time, Intel said the technology was at least a decade away from commercialization. Today, in an interview with VentureBeat, an Intel researcher said he is "excited about spin-off results MESO is likely to produce within the next two to five years." AI accelerators are supposedly less complicated and more fault tolerant than traditional chip designs, and MESO's characteristics are "coincidentally" well suited to neural network architectures, meaning they could hit the market sooner rather than later.

Khosrowshahi: CPUs, which are the most commonplace when you're building silicon, are oddly enough the hardest thing to build. But in AI, it's a simpler architecture. AI has regular patterns, it's mostly compute and interconnect, and memories. Also, neural networks are very tolerant to inhomogeneities in the substrate itself. So I feel this type of technology will be adopted sooner than expected in the AI space. By 2025, it's going to be the biggest thing...

Young: If we can get these improvements in power-performance - MESO will be a 10 to 30 times better power-performance or energy-delay product - but let's say we only get a 2X improvement. That gives us, for a given power into the device, a 2X performance benefit, so it's a huge leg up on the competition. That's what drives this. Not only is this good for my company but it's an opportunity for the industry. The research is open, because we have so much heavy lifting to do with these materials. But if this is a thing that we as an industry can get a hold of, this could be a game changer for the semiconductor industry. It will take it through this curve that has been flattening. We may accelerate again. And that would be really neat.

Posted by alphaatlas February 21, 2019 11:01 AM (CST)

Facebook Is Allegedly Working on Custom Machine Learning Hardware

Nvidia GPUs are the undisputed king of the machine learning hardware market today, but more and more companies are throwing their hat into the AI ring. Google has already introduced their machine learning-focused TPU, and other giants like Amazon and Intel are reportedly following suit, while a number of smaller startups are filling in niches or taking riskier approaches to compete with the bigger players. Last year, various reports surfaced claiming that Facebook was working on their own custom ASICs, but an EE Times report said that it was "not the equivalent of [Google's] TPU." Now, according to a Bloomberg report published earlier this week, some of Facebook's upcoming custom silicon may focus on machine learning after all. Facebook's chief AI researcher says that "the company is working on a new class of semiconductor that would work very differently than most existing designs," and mentioned that future chips will need radically different architectures.

"We don't want to leave any stone unturned, particularly if no one else is turning them over," he said in an interview ahead of the release Monday of a research paper he authored on the history and future of computer hardware designed to handle artificial intelligence... LeCun said that for the moment, GPUs would remain important for deep learning research, but the chips were ill-suited for running the AI algorithms once they were trained, whether that was in datacenters or on devices like mobile phones or home digital assistants.

Posted by alphaatlas February 20, 2019 9:35 AM (CST)

Digital Foundry Analyzes Crackdown 3's Cloud Based Destruction

Fully destructible environments have long been a holy grail of game physics engines. I remember Red Faction: Guerrilla generating quite a bit of buzz when it came out, and according to Digital Foundry, the Crackdown devs have been working on an even more ambitious system that leverages the power of Microsoft's cloud servers. Crackdown 3 is the culmination of those efforts, and while it does have destructible environments that seem to be synced across multiplayer instances, the game itself feels rushed and somewhat underwhelming. The competitive "wrecking zone" mode, for example, has conspicuously small arenas and doesn't even have a party system, while the co-op mode still falls short of the 2015 tech demo. Check out the analysis in the video below:

What Wrecking Zone delivers is still impressive in many respects, but is definitely a simplification of the original demo - a situation which looks like a combination of both technological limitations and gameplay considerations. To begin with, the cityscape of the original demo becomes a series of enclosed holodeck-esque arenas - high on verticality, but small in terms of their overall footprint. What's clear from the 2015 demo is that it's exactly that - a demonstration, with no real gameplay as such. Limiting the scale of the play space means that players can actually find one another, which definitely helps, but there's still the sense that there's not much to actually do. The destruction can look wonderful, but little of the gameplay is actually built around the concept. Technologically, the cutbacks are legion. Micro-scale chip damage is completely absent, while destruction generally is far less granular, with buildings and statues breaking apart into more simplistic polygonal chunks. It's interesting to stack up Wrecking Zone with Red Faction Guerrilla Remastered - a game we sorely regret not covering at the time of its launch. Originally a last-gen Xbox 360 title, it does many of the same things as Wrecking Zone - on a smaller scale definitely, but with more granularity and detail. And this raises the question of whether the cloud would actually be necessary at all for Wrecking Zone.

Posted by alphaatlas February 20, 2019 8:21 AM (CST)

Hardware Unboxed Calls NVIDIA DLSS "The Biggest RTX Fail of Them All"

Hardware Unboxed has released its newest video where they dissect the image quality of Battlefield V with the new NVIDIA technology called Deep Learning Super Sampling (DLSS) enabled. They not only compare the DLSS image quality to the native 4K image; Tim takes it a step further and compares the DLSS image to a 1685p image upscaled from a 78% resolution scale. They chose 1685p because it performs at a similar frame rate as when DLSS is enabled in-game. In all instances, the DLSS image looks smeared, while the 1685p upscaled image is much more pleasing to look at. Tim Schiesser says, "The 1685p image destroys the DLSS image in terms of sharpness, texture quality, clarity; basically everything." He goes on to say, "The 78% scaled image preserves the fine detail on the rocks, the sign, the sandbags, the cloth, the gun; pretty much everywhere. With DLSS everything is blurred to the point where this detail is lost." Resolution scaling has been available to gamers for decades. They pull no punches and say, "DLSS sucks."
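As a quick sanity check on those numbers (simple arithmetic, not taken from the video): a 78% resolution scale of 4K's 2160-line native height works out to the 1685p figure Hardware Unboxed quotes.

```python
def scaled_height(native_height, scale):
    """Vertical resolution of the internal render target at a given
    resolution-scale fraction (e.g. 0.78 for 78%)."""
    return round(native_height * scale)

# 78% of native 4K (2160 lines) is where the "1685p" figure comes from:
print(scaled_height(2160, 0.78))  # 1685 (2160 * 0.78 = 1684.8, rounded)
```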

But the real kicker is looking at the visual quality comparisons. We'll start with native 4K versus 4K DLSS. Across all of the scenes that I've tested, there is a severe loss of detail when switching on DLSS. Just look at the trees in this scene. The 4K presentation is just as you'd expect: sharp, clean, high detail on both the foliage and trunk textures. But with DLSS it's like a strong blur filter has been applied. Texture detail is completely wiped out; in some cases it's like you've loaded a low texture mode, while some of the fine branch detail has been blurred away, or even thickened in places, which makes the game look kind of weird in some situations. Of course this is to be expected. DLSS was never going to supply the same image quality as native 4K while also providing a 37% performance uplift. That would be pretty much black magic. But it's almost laughable how far away DLSS is from the native presentation in these stressful areas.

Posted by cageymaru February 18, 2019 4:28 PM (CST)

Digital Foundry Analyzes Metro Exodus's Visual Fidelity

While many reviews have already dug into the technical aspects of 4A Games' newest Metro title, Digital Foundry recently posted a video showing how all that fancy tech actually impacts the in-game experience. Like many other games and benchmarks, Metro Exodus is pretty and demanding, but DF points out that those visuals do an excellent job of making the game feel immersive, as opposed to coming off as gimmicky features. The particle effects, for example, really contribute to Metro's moody atmosphere, while little touches like a remarkably detailed first person body view and the conspicuously detailed shadow it casts all make the game feel realistic. Check out the video on Metro Exodus's immersiveness below, or read Eurogamer's lengthy interview with 4A programmer Ben Archard and CTO Oles Shishkovstov if you're more interested in the technical aspects.

The open world maps are completely different to the enclosed tunnels maps of the other games. Environments are larger and have way more objects in them, visible out to a much greater distance. It is therefore a lot harder to cull objects from both update and render. Objects much further away still need to update and animate. In the tunnels you could mostly cull an object in the next room so that only its AI was active, and then start updating animations and effects when it became visible, but the open world makes that a lot trickier... So, a good chunk of that extra time is spent with updating more AIs and more particles and more physics objects, but also a good chunk of time is spent feeding the GPU the extra stuff it is going to render. We do parallelise it where we can. The engine is built around a multithreaded task system. Entities such as AIs or vehicles, update in their own tasks. Each shadowed light, for example, performs its own frustum-clipped gather for the objects it needs to render in a separate task. This gather is very much akin to the gathering process for the main camera, only repeated many times throughout the scene for each light. All of that needs to be completed before the respective deferred and shadow map passes can begin (at the start of the frame).
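The per-light gather Archard describes can be sketched roughly as follows. This is a hypothetical illustration in Python, not 4A's actual engine code, and every name in it is invented: each shadowed light culls the scene against its own frustum in an independent task, much like the main camera's gather, and all of the per-light gathers must complete before the shadow-map passes can begin.

```python
from concurrent.futures import ThreadPoolExecutor

def sphere_in_frustum(center, radius, planes):
    """Keep an object unless its bounding sphere lies entirely behind
    any frustum plane. Each plane is (nx, ny, nz, d), normal pointing
    into the frustum."""
    for nx, ny, nz, d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:
            return False
    return True

def gather_for_light(light, objects):
    # Frustum-clipped gather for one shadowed light, akin to the
    # main-camera gather but repeated per light across the scene.
    return [o for o in objects
            if sphere_in_frustum(o["center"], o["radius"],
                                 light["frustum_planes"])]

def gather_all_lights(lights, objects):
    # Each light's gather runs as its own task; joining the futures
    # is the point where the shadow-map passes are allowed to start.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(gather_for_light, light, objects)
                   for light in lights]
        return [f.result() for f in futures]
```

The join at the end mirrors the dependency Archard mentions: the deferred and shadow map passes at the start of the frame cannot run until every light's draw list exists.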

Posted by alphaatlas February 18, 2019 10:26 AM (CST)

Playing Chicken: Kentucky Fried Intel Core i9-9900KFC Processor Listed

Someone at Intel must be a fan of greasy, artery-clogging fried chicken: a chip called the Core i9-9900KFC has been listed in system-diagnostics app AIDA64. The "K" and "F" would point toward a processor that can be overclocked but lacks integrated graphics. It isn't clear what the "C" stands for (no, not Colonel Sanders), but AnandTech believes it could involve eDRAM.

If every letter has a special meaning for a feature in a product, and a product portfolio offers a mix and match of those features, then eventually a combination of letters will end up with a secondary meaning. Today we’re seeing the beginning of the Kentucky Fried version of Intel: in the latest changelog to AIDA64, a well-known utility for system identification and testing, the company behind the software has added in the hooks and details for the Core i9-9900KFC.

Posted by Megalith February 17, 2019 9:45 AM (CST)

Chinese Telecommunications Hardware Is About to Be Banned by Executive Order

The United States is gearing up for the widespread installation of 5G networks, but Chinese hardware may have no part in it: there is chatter that an executive order will soon be issued banning such hardware from cellular network upgrades. "As contracts for the installation of 5G networks are in the works, the White House is looking to send a message that security must not be compromised for the next generation of wireless connectivity."

The order is part of a series of announcements leading up to Mobile World Congress designed to showcase what the United States is doing to prevent cyber attacks from being harmful to the nation. Huawei has been under great scrutiny in recent times, but remember that ZTE was also heavily put under the microscope not long before. Accusations against the Chinese telecom companies have ranged from theft of trade secrets to violations of trade embargoes. As charges mount against Huawei executives, there is compelling reason to believe that the United States will not be awarding any contracts to Chinese businesses.

Posted by Megalith February 10, 2019 1:40 PM (CST)

Buildzoid Analyzes the PCB and VRM Layout of the AMD Radeon VII

Buildzoid from Actually Hardcore Overclocking on YouTube has performed an in-depth analysis of the PCB and VRM layout of the AMD Radeon VII. Watch him discuss the efficiency and cost of the various exotic components that are found in the design of the AMD Radeon VII. You can view our unboxing and teardown video here.

AMD's Radeon VII card left its initial embargo, which allowed tear-downs (link below), just recently, and that allowed us to look closer at the VRM for analysis.

Posted by cageymaru February 06, 2019 6:01 PM (CST)

EE Times Looks at Intel's New CEO

Following the resignation of Brian Krzanich, Bob Swan became Intel's new "interim" CEO, and after a 7 month search for a replacement, Intel's leadership decided to make Swan's position permanent last week. Some worried about this decision, as Intel has a longstanding policy of hiring "insider" CEOs with a technical background, but the analysts EE Times talked to seem to think Swan is a good choice. They point out that being a "business guy" could allow the company to make the strong business decisions it needs, and his position as an "outsider" with plenty of "insider" experience could inject some fresh blood into the company without sending it in a bad direction. For what it's worth, we (and other news organizations) noticed that Intel has totally overhauled their relationship with the public under Swan's leadership.

McGregor noted that Swan is not the first Intel CEO to come from a business rather than technical background. Paul Otellini, who served as Intel's CEO from 2005 to 2013, was the first non-engineer to hold the post... Krewell said that he initially thought choosing Swan would be a mistake, akin to "kicking the can down the street." But seeing Swan in person and listening to him talk about the company and strategy, Krewell was won over by Swan's balanced view of Intel's position in the market. "I think that, while he's not a technologist, he has a deep reserve of technical executives he can draw on," Krewell said... Moorhead said that Swan has told him that he really started to enjoy the job once he held the interim job. (In a letter to employees, customers, and partners posted Thursday, Swan alluded to this change of heart and said that he jumped at the opportunity when approached by the board.)

Posted by alphaatlas February 05, 2019 11:36 AM (CST)

AdoredTV Analyzes AMD Engineering Sample Benchmarks

Some AMD engineering samples with strange performance figures have been popping up in the UserBenchmark database recently. In an effort to put those results in perspective, AdoredTV just uploaded a video that starts with a brief history of CPU memory hierarchies. Then, he attempts to analyze just what's going on with the AMD engineering sample's inconsistent latency curves. Check it out in the video below:

A look back to the early days of cache-less computing, to what's coming next with Zen 2.

Posted by alphaatlas February 05, 2019 8:12 AM (CST)

Techgage Tests 2990WX Performance Scaling With Coreprio

Some 2990WX users have been complaining about performance regressions in Windows, and we noticed some strangeness in our own Threadripper benchmarks as well. Fortunately, Level1Techs seemingly nailed down the issue in a series of articles from a few weeks ago, and pointed to Coreprio as a good solution to Windows' scheduling issues. Techgage just put that solution to the test in a wide range of benchmarks, and the results are interesting, to say the least. Adobe Premiere Pro renders, which didn't scale with the 2990WX's cores in our testing, saw a massive improvement with Coreprio, while other programs like Blender didn't seem to benefit at all. They also compared Windows to Linux performance in Geekbench, and for whatever reason, saw a night-and-day improvement when switching to the open source OS.

Does all of this mean that Linux is the best OS for a chip like the 2990WX? It's really hard to believe otherwise. To base that off of GeekBench alone would be nonsense, but we have other testing experience to back up those opinions. Blender almost always performs better in Linux than in Windows, so the fact that a many-core chip works better in the penguin OS isn't a huge surprise... Fortunately, using either DLM or Coreprio won't hurt your performance in other areas too much, but it's important to note that it can in fact negatively impact them. On the flipside, if you bought a 2990WX (or 2970WX) and are running against a regression, you shouldn't hesitate in giving the tool a test. Don't like the result, or don't need it active all of the time? All you need to do is simply stop the service from within the applet, and you'll be back to normal.

Posted by alphaatlas February 01, 2019 10:49 AM (CST)

Second Apple Hardware Engineer Charged with Stealing Trade Secrets for China

An Apple hardware engineer working in the top secret "Project Titan" autonomous vehicle division has been arrested for stealing trade secrets. A fellow Apple employee witnessed Jizhong Chen taking unauthorized pictures of the vehicle. He also neglected to tell Apple that he had been hired by an autonomous vehicle company in China. He was caught when he tried to board a flight to China. When confronted, he admitted to backing up 2,000 files containing manuals and schematics from the project to his personal hard drive. This is the second Apple engineer caught trying to board a plane to China with Apple autonomous vehicle trade secrets. Thanks @TheCommander!

Apple said disclosure of the data taken by Chen would be "enormously damaging," according to prosecutors. Among the photos seized by the government: an image stamped Dec. 19 diagramming Apple's autonomous driving architecture. Another from June 2018 depicts an assembly drawing of a wire harness for an autonomous vehicle. The engineer later told Apple he intended to travel to China to visit his ill father, but was arrested last week before he could board his direct flight. He was released from federal custody after posting $500,000 in cash and property on Jan. 25.

Posted by cageymaru January 30, 2019 5:30 PM (CST)