
Goodbye HardOCP - Hello Intel

We have some big changes happening here at HardOCP. Kyle Bennett will soon be taking on new challenges with Intel, working as its Director of Enthusiast Engagement.
Posted by Kyle March 19, 2019 6:30 AM (CDT)

The PC Perspective Podcast Is Live

The PC Perspective Podcast is now live! Drop in to talk to the guys as they discuss the latest tech news topics.

The live recording of our weekly podcast. Quality not guaranteed. This stream contains content that may be disturbing to some audiences. Viewer discretion is advised.

Posted by cageymaru March 07, 2019 9:13 PM (CST)

New Speculative Execution Bug Allegedly Affects Intel CPUs

Back in 2018, when the Spectre and Meltdown vulnerabilities were first publicized, many security experts feared that they opened a figurative Pandora's box. Those two exploits are part of a wider class of potential speculative execution flaws, and this week, those fears were realized, as researchers from Worcester Polytechnic Institute have revealed (PDF Warning) a new speculative execution exploit dubbed "Spoiler." Intel CPUs reportedly use "dependency resolution logic" to resolve false dependencies when speculatively executing load operations, and the researchers say "the dependency resolution logic suffers from an unknown false dependency independent of the 4K aliasing. The discovered false dependency happens during the 1 MB aliasing of speculative memory accesses which is exploited to leak information about physical page mappings." In that vein, the researchers claim this particular exploit only requires "a limited set of instructions," and that all Intel "Core" CPUs running on any operating system are vulnerable to the attack. The attack can be loaded with Javascript code from a website, without any need for privilege escalation beforehand, and the researchers successfully demonstrated the exploit on Nehalem, Sandy Bridge, and Ivy Bridge-based Xeon servers. Intel was reportedly informed of the exploit on December 1st, 2018, and they recently published this response:

Intel received notice of this research, and we expect that software can be protected against such issues by employing side channel safe software development practices. This includes avoiding control flows that are dependent on the data of interest. We likewise expect that DRAM modules mitigated against Rowhammer style attacks remain protected. Protecting our customers and their data continues to be a critical priority for us and we appreciate the efforts of the security community for their ongoing research.

While speculative execution enables both SPOILER and Spectre and Meltdown, our newly found leakage stems from a completely different hardware unit, the Memory Order Buffer. We exploited the leakage to reveal information on the 8 least significant bits of the physical page number, which are critical for many microarchitectural attacks such as Rowhammer and cache attacks. We analyzed the causes of the discovered leakage in detail and showed how to exploit it to extract physical address information. Further, we showed the impact of SPOILER by performing a highly targeted Rowhammer attack in a native user-level environment. We further demonstrated the applicability of SPOILER in sandboxed environments by constructing efficient eviction sets from JavaScript, an extremely restrictive environment that usually does not grant any access to physical addresses. Gaining even partial knowledge of the physical address will make new attack targets feasible in browsers even though JavaScript-enabled attacks are known to be difficult to realize in practice due to the limited nature of the JavaScript environment. Broadly put, the leakage described in this paper will enable attackers to perform existing attacks more efficiently, or to devise new attacks using the novel knowledge.
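To make the "8 least significant bits of the physical page number" claim concrete: with standard 4 KB pages, those bits occupy physical address bits 12-19, which is exactly why the leakage manifests as 1 MB aliasing. A small illustrative sketch (the function name and addresses are hypothetical, not from the paper):

```python
# With 4 KB pages (a 12-bit page offset), the 8 least-significant bits of
# the physical page number occupy physical address bits 12-19. Any two
# addresses that differ only in bits 20 and above -- i.e. addresses that
# alias at 1 MB (2^20) granularity -- share those 8 bits.
PAGE_SHIFT = 12  # 4 KB pages

def low_ppn_bits(phys_addr, bits=8):
    """Extract the low bits of the physical page number (illustrative)."""
    return (phys_addr >> PAGE_SHIFT) & ((1 << bits) - 1)

addr = 0x12345000          # hypothetical physical address
alias = addr + (1 << 20)   # the same address plus 1 MB
print(low_ppn_bits(addr) == low_ppn_bits(alias))  # True
```

This is why leaking those bits helps attacks like Rowhammer: it narrows down which pages can collide in physical memory without needing full physical addresses.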

Posted by alphaatlas March 06, 2019 8:47 AM (CST)

The USB Promoter Group Announces USB4 Specification

The USB Promoter Group has announced the pending release of the USB4 specification, which is based on the Thunderbolt protocol contributed by Intel Corporation. USB4 complements and builds upon the existing USB 3.2 and USB 2.0 specifications, doubles the bandwidth of USB, and enables multiple simultaneous data and display protocols. Key features of the USB4 solution include:

- Two-lane operation using existing USB Type-C cables and up to 40 Gbps operation over 40 Gbps certified cables.
- Multiple data and display protocols that efficiently share the total available bandwidth over the bus.
- Backward compatibility with USB 3.2, USB 2.0 and Thunderbolt 3.

The specification is expected to be completed in the middle of 2019, but The Verge is reporting that USB4 devices won't be available until 18 months later. When the USB4 specification is completed, an updated USB Type-C Specification will be made to comprehend USB4 bus discovery, configuration and performance requirements.

"Releasing the Thunderbolt protocol specification is a significant milestone for making today's simplest and most versatile port available to everyone," said Jason Ziller, General Manager, Client Connectivity Division at Intel. "By collaborating with the USB Promoter Group, we're opening the doors for innovation across a wide range of devices and increasing compatibility to deliver better experiences to consumers." "The primary goal of USB is to deliver the best user experience combining data, display and power delivery over a user-friendly and robust cable and connector solution," said Brad Saunders, USB Promoter Group Chairman. "The USB4 solution specifically tailors bus operation to further enhance this experience by optimizing the blend of data and display over a single connection and enabling the further doubling of performance."

Posted by cageymaru March 04, 2019 11:30 AM (CST)

Windows 10 Update Mitigates Spectre Related Performance Slowdown

A big Linux release over the weekend added the "Retpoline" Spectre mitigation to the Linux kernel, and BleepingComputer reports that Windows got the same treatment. Google shared the Retpoline software mitigation technique last year, shortly after they publicly revealed Spectre and Meltdown, which Microsoft says "works by replacing all indirect call or jumps in kernel-mode binaries with an indirect branch sequence that has safe speculation behavior." Microsoft claims the update that brings the mitigations is enabled by default in Windows Insider Fast Builds, but if you want to verify the Spectre protection status yourself, they posted a fairly straightforward PowerShell tutorial. In a nutshell, just download the SpeculationControl module from a link in the guide, unzip it, open a PowerShell console via the Start menu, and copy the commands into the console. Note that I couldn't get the script working without changing "Import-Module .\SpeculationControl.psd1" to "Import-Module SpeculationControl.psd1".

Retpoline has significantly improved the performance of the Spectre variant 2 mitigations on Windows. When all relevant kernel-mode binaries are compiled with retpoline, we've measured ~25% speedup in Office app launch times and up to 1.5-2x improved throughput in the Diskspd (storage) and NTttcp (networking) benchmarks on Broadwell CPUs in our lab. It is enabled by default in the latest Windows Client Insider Fast builds (for builds 18272 and higher on machines exposing compatible speculation control capabilities) and is targeted to ship with 19H1.

Posted by alphaatlas March 04, 2019 10:15 AM (CST)

Linux 5.0 Release Includes FreeSync Support and Spectre Fixes

The Linux 5.0 kernel has been released, and among other things, it officially adds support for FreeSync displays on AMD GPUs. Phoronix notes that AMD previously supported FreeSync on Linux "via their hybrid driver package with its DKMS module in Radeon Software," and posted a tutorial for enabling and testing FreeSync support in Ubuntu. They note that Vulkan games (which presumably includes titles running through Valve's Proton), compositors, web browsers, media players and a few other apps aren't currently supported by FreeSync. Meanwhile, the publication also put the recent Spectre performance mitigation measures to the test, and found that performance on the Core i9 7980XE dropped by about 13% with the Spectre protections enabled. Core i7 8086K performance dropped by about 17%, while Ryzen 7 2700X performance only dropped by 3%. Unfortunately, running the same benchmarks on previous Linux kernels would be like comparing apples to oranges, so it's hard to say exactly how much Linux 5.0 mitigates Spectre's performance hit, but it looks like certain workloads are still relatively sensitive to the security countermeasures.

To utilize FreeSync you need to be using the xf86-video-amdgpu DDX driver. You can verify so by looking for "AMDGPU" in the Xorg.0.log. You also need the above-shown Xorg.conf snippet to enable the "VariableRefresh" AMDGPU DDX driver option. Using the xf86-video-modesetting DDX is unsupported at this time. Your xf86-video-amdgpu driver also has to be relatively new, but such a supported X.Org driver can be found in the likes of the Padoka PPA... After enabling the VariableRefresh option and restarting the X.Org server, you can verify that the DDX is new enough and the option is working by ensuring that VariableRefresh is successfully mentioned in your Xorg log.
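The Xorg.conf snippet Phoronix refers to is not reproduced in this excerpt; a typical configuration enabling the option looks roughly like this (the Identifier string is arbitrary):

```
Section "Device"
    Identifier "AMD"
    Driver "amdgpu"
    Option "VariableRefresh" "true"
EndSection
```

This goes in a file such as /etc/X11/xorg.conf.d/20-amdgpu.conf, after which the X.Org server must be restarted for the option to take effect.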

Posted by alphaatlas March 04, 2019 9:02 AM (CST)

NVIDIA GeForce GTX 1650 Turing Specs Allegedly Leak: 1.4GHz Base Clock, 4GB GDDR5

Bangkok-based leaker APISAK (@TUM_APISAK) has posted a 3DMark screenshot revealing the alleged specs of NVIDIA’s upcoming GeForce GTX 1650 GPU, which will reportedly launch alongside the GTX 1660 in spring. The Turing-based card is listed with a 1,395MHz base clock and 1,560MHz boost clock, and 4GB of presumed GDDR5 memory: "Past leaks peg the memory bus at 128-bit, and with a 2,000MHz effective clock speed, that would give the card 128GB/s of memory bandwidth."
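The quoted bandwidth figure checks out if the "2,000MHz effective clock speed" is read as the GDDR5 memory clock, which is quad-pumped to an 8 GT/s per-pin data rate. A back-of-the-envelope sketch based on the rumored figures, not official specs:

```python
# Back-of-the-envelope GDDR5 bandwidth estimate for the rumored GTX 1650.
# GDDR5 transfers data four times per memory clock cycle, so a 2,000 MHz
# memory clock yields an 8,000 MT/s effective data rate per pin.
bus_width_bits = 128       # rumored 128-bit memory bus
memory_clock_hz = 2_000e6  # 2,000 MHz memory clock
transfers_per_clock = 4    # GDDR5 quad data rate

bytes_per_second = (bus_width_bits / 8) * memory_clock_hz * transfers_per_clock
print(bytes_per_second / 1e9)  # 128.0 (GB/s)
```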

...we can reasonably surmise that this will be yet another Turing card that lacks RT and Tensor cores, which are what give GeForce RTX series cards their real-time ray tracing and Deep Learning Super Sampling (DLSS) mojo. NVIDIA rightly recognized that gamers at large are waiting for both features to be more widely supported before investing in the necessary hardware. Hence why the GeForce GTX 1660 Ti exists -- it lacks those features and is the least expensive Turing card on the market.

Posted by Megalith March 02, 2019 9:40 AM (CST)

"I Don't Want Your Respect": NetherRealm Responds to Mortal Kombat "Going Woke"

There is a growing subset of Mortal Kombat fans who aren’t pleased with the artistic direction in which NetherRealm Studios is taking the series. Specifically, they allege the female roster has had its "sex appeal" taken away, a controversy that was reignited this week after NetherRealm revealed a ghastly looking, covered-up Jade. NetherRealm’s community manager punched back at one fan who called the redesigns "disgusting" and said the developer should make the girls "sexy" in order to earn his "money and respect," calling such comments "truly awful." Even female gamers are beginning to agree the characters have been "censored," however.

@Draka_: For a "community manager" you don't seem to get how this thing called "community" works and how ridiculously godawful your communications skills are. If you start acting butthurt bc of one negative comment, you ain't exactly helpful to the "community" or the company you represent.

@laope_: Gore = Core Values, CG tiddies = TOO OFFENSIVE!

@irxson: You don't want your customers respect? I'm less inclined to buy the game now. Of course I will wait and see but I have to agree, Character designs are bad and I loved the older designs, my favorites being from MK9. It is sad to see the team is not going to consider our feedback.

Posted by Megalith February 16, 2019 1:05 PM (CST)

Suspected Drone Activity Disrupts Dubai Airport

The New York Times reports that "suspected drone activity" shut down Dubai International Airport, one of the busiest airports in the world, between 10:13 a.m. and 10:45 a.m. Reuters mentioned that there are "severe penalties in the United Arab Emirates for unauthorized drone activity," while the Times claims that their regulations aren't really open to interpretation either. Alleged drone sightings previously shut down Gatwick Airport and Newark Airport. Media reports suggest that incoming flights were still allowed to land at Dubai during the disruption, and while Gatwick in particular was shut down for a longer period, Dubai serves even more passengers than Gatwick does. Drone manufacturers and regulatory bodies are supposedly working on measures that could curb these disruptions, like identification systems or drone classification standards, but those will take some time to roll out.

Government regulations in Dubai are fairly unambiguous about flying drones in areas where there might be significant air traffic: It is forbidden "near, around and over airports," and users must obtain a certificate from the General Civil Aviation Authority in the United Arab Emirates... "Dubai Airports has worked closely with the appropriate authorities to ensure that the safety of airport operations is maintained at all times and to minimize any inconvenience to our customers," the airport said.

Posted by alphaatlas February 15, 2019 11:48 AM (CST)

4A Games Releases Metro Exodus Special Weapons Class Trailer and Pre-Load Info

Metro Exodus developer 4A Games has released the latest trailer for the game, showcasing the Special Weapons Class. The video shows the Helsing and Tikhar weapons in all of their glory. Metro Exodus launches on February 15, 2019 on PC and consoles. Steam pre-order customers will be able to pre-load the game! Unfortunately, those who purchase Metro Exodus from the Epic Games Store will not be able to pre-load it. The pre-load for Xbox and PS4 customers will begin 48 hours before launch.

Take a closer look at the Special Weapons Class in Metro Exodus. Featuring the Helsing and Tikhar.

Posted by cageymaru February 12, 2019 9:24 PM (CST)

Specialized Processors are Seemingly Overtaking General Purpose Chips

The "CPU" has been the heart of the modern computer for decades. As chip design costs, and speeds, have increased, the tech industry has typically defaulted to dumping more resources into flexible, integrated general purpose processors rather than spending millions on specialized chips for specific tasks. But, citing a paper from Thompson and Spanuth, The Next Platform believes that the era of general purpose computing may be coming to an end. The meteoric rise of graphics processors is perhaps one of the earliest and most visible examples of the trend. At first, these "semi-specialized" processors only took over very specific graphics workloads from the CPU. Eventually, specialized logic was added to handle video decoding and encoding, general purpose compute, and, more recently, machine learning workloads. But processors specifically tailored for machine learning are already starting to overtake GPUs, while demands from the IoT market are also making custom tailored, efficient processor designs more economically viable. Thanks to cageymaru for the tip.

Thompson and Spanuth offer a mathematical model for determining the cost/benefit of specialization, taking into account the fixed cost of developing custom chips, the chip volume, the speedup delivered by the custom implementation, and the rate of processor improvement. Since the latter is tied to Moore's Law, its slowing pace means that it's getting easier to rationalize specialized chips, even if the expected speedups are relatively modest. "Thus, for many (but not all) applications it will now be economically viable to get specialized processors - at least in terms of hardware," claim the authors. "Another way of seeing this is to consider that during the 2000-2004 period, an application with a market size of ~83,000 processors would have required that specialization provide a 100x speed-up to be worthwhile. In 2008-2013 such a processor would only need a 2x speedup."
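The break-even reasoning described above can be sketched as follows. This is a deliberate simplification with hypothetical dollar figures, not Thompson and Spanuth's exact model: the speedup a specialized chip must deliver falls as the volume over which the fixed design cost is amortized grows.

```python
def breakeven_speedup(fixed_design_cost, volume, value_per_speedup_unit):
    """Minimum speedup (vs. a general-purpose chip) at which a custom chip
    pays for itself: the per-unit value of each unit of speedup, summed
    over the production volume, must cover the fixed design cost.
    Hypothetical illustrative model, not the paper's formulation."""
    return 1 + fixed_design_cost / (volume * value_per_speedup_unit)

# The larger the market, the smaller the speedup needed to justify
# specialization (all dollar figures are made up):
small_market = breakeven_speedup(50e6, volume=10_000, value_per_speedup_unit=100)
large_market = breakeven_speedup(50e6, volume=1_000_000, value_per_speedup_unit=100)
print(small_market, large_market)  # 51.0 1.5
```

This mirrors the paper's qualitative point: a niche market demands an enormous speedup to justify a custom chip, while a mass market needs only a modest one, and a slowing Moore's Law shifts the balance further toward specialization.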

Posted by alphaatlas February 07, 2019 11:18 AM (CST)

HardOCP Interviews Scott Herkelman

AMD launched their first 7nm GPU, the Radeon VII, at CES 2019. The company published some internal performance figures to back up their claims, which we analyzed, and the launch itself caused quite a stir, but there are still some details about the GPU that AMD left out of their CES event. So we sat down with Scott Herkelman, the Vice President and General Manager of the Radeon Gaming Business Unit, and got some more info on Radeon VII, the Vega architecture, and the future of AMD's gaming division. Check out the full interview here.

HardOCP: Is the Radeon VII a true "Vega20" GPU? Besides the smaller process, what improvements are in the new Vega vs the old one? Can we get full specs? Scott Herkelman: Yes, Vega20 is the underlying architecture for this product. We made some surgical enhancements to the Vega architecture to scale to frequencies on 7nm. We also increased the memory interface from 2048 to 4096 bits, all while reducing the footprint from 495mm2 to 331mm2 and we are super happy with the results. This chart shows the full Radeon VII specs...
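Doubling the interface to 4096 bits is what pushes Radeon VII's memory bandwidth to roughly 1 TB/s. Assuming the card's commonly reported 2 Gbps per-pin HBM2 data rate (a figure from coverage elsewhere, not from this interview), the arithmetic works out as follows:

```python
# Radeon VII memory bandwidth from the interview's 4096-bit figure,
# assuming the commonly reported 2 Gbps per-pin HBM2 data rate.
interface_bits = 4096
data_rate_per_pin_bps = 2e9  # 2 Gbps per pin (assumed HBM2 speed)

bandwidth_bytes = interface_bits * data_rate_per_pin_bps / 8
print(bandwidth_bytes / 1e9)  # 1024.0 (GB/s), i.e. ~1 TB/s
```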

Posted by alphaatlas January 14, 2019 8:13 AM (CST)