
There’s been a lot of focus on how both Intel and AMD are planning for the future in packaging their dies to increase overall performance and mitigate higher manufacturing costs. For AMD, that next step has been 3D V-Cache, an additional L3 cache (SRAM) chiplet designed to be 3D die-stacked on top of an existing Zen 3 chiplet, tripling the total amount of L3 cache available. Today, AMD’s V-Cache technology is finally available to the wider market, as AMD is announcing that its EPYC 7003X “Milan-X” server CPUs have now reached general availability.

As first announced late last year, AMD is bringing its 3D V-Cache technology to the enterprise market through Milan-X, an advanced variant of its current-generation, Milan-based 3rd Gen EPYC 7003 processors. AMD is launching four new processors ranging from 16 to 64 cores, all of them with Zen 3 cores and 768 MB of L3 cache via 3D-stacked V-Cache.

AMD’s Milan-X processors are an upgraded version of its current 3rd Gen Milan-based EPYC 7003 processors. Building on the preexisting Milan-based EPYC 7003 lineup, which we reviewed back in June last year, Milan-X’s most significant advancement is its large 768 MB of L3 cache, enabled by AMD’s 3D V-Cache stacking technology. The 3D V-Cache die uses TSMC’s N7 process node – the same node Milan’s Zen 3 chiplets are built upon – and measures 36 mm², stacking a 64 MiB cache die on top of the existing 32 MiB found on each Zen 3 chiplet.
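Those per-chiplet figures also account for the headline number, assuming the eight-chiplet layout of the EPYC 7003 package: 32 MiB of on-die L3 plus 64 MiB of stacked V-Cache gives 96 MiB per chiplet, and 96 MiB × 8 chiplets = 768 MiB of L3 per socket.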

We’ve been hearing for years how nanotechnology is going to change the world. In movies and in headlines, nanotechnology is almost like “future magic” that will make the impossible possible. But how realistic are those predictions? And how close are we to seeing some of them come true? Let’s take a look at the state of nanotechnology.

Research On Humans Adapting, Living & Working In Space — Colonel (ret) Dr. Samantha Weeks, Ph.D., Polaris Dawn, Science & Research Director


Colonel (ret) Dr. Samantha “Combo” Weeks, Ph.D., is the Science & Research Director of the Polaris Dawn Program (https://polarisprogram.com/dawn/), a planned private human spaceflight mission operated by SpaceX on behalf of Shift4 Payments CEO Jared Isaacman and slated to launch aboard the Crew Dragon capsule.

Polaris Dawn is the first of three planned missions in the Polaris Program (https://polarisprogram.com/), which endeavors to rapidly advance human spaceflight capabilities by demonstrating new technologies and conducting extensive scientific research to expand our knowledge of humans adapting, living and working in space. Much of this research also has purpose and applicability to improve life here on Earth.

For about nine months, Elon Musk has been suggesting that Booster 4, with Starship 20 stacked on top, would fly the first orbital test of Starship.

The big question was how safe it would be to launch with 29 Raptor engines at once. A lot of people pointed to the Soviet-era N1 rocket, which failed on all four launch attempts with its 30 first-stage engines, one of those failures causing one of the world’s largest non-nuclear explosions. The most Raptor engines that have ever been static-fired at once is six. It would be very difficult to rebuild the Starship launch tower if it were destroyed, easily ten times as hard as building another Starship and booster.

Note that using so many engines is not in itself impossible. For example, the Falcon Heavy launches with 27 engines and all of its launches have been successful so far. The problem is that Raptor is the first full-flow staged-combustion-cycle engine ever flown, and SpaceX has not perfected it yet. For example, the only Starship that successfully landed from a medium-altitude test almost missed the landing pad and was on fire when it touched down. (All other medium-altitude test Starships exploded, one before it even hit the ground.)

Anyway, today Musk acknowledged that the orbital test will never fly on the original Raptor engines; instead, SpaceX plans to attempt it with Raptor 2 engines in about two months. That test will use 33 Raptor 2 engines at once, but those engines are considered much more reliable and also more powerful, which matters when you want to reach orbit. The current situation with Raptor 2 is that the engines often fail when pushed to 250 tons of thrust but work quite well at 230 tons; the cranky original Raptor engines topped out at about 185 tons.

This orbital test will use a Starship with six engines, although Musk has said that Starship will eventually have nine engines while the booster keeps its 33.

As scientists prepared in 2010 to collide the first particles in the Large Hadron Collider (LHC), some media commentators imagined that the Europe-wide experiment could create a black hole that would swallow and destroy our planet. How on earth, columnists raged, could scientists justify such a dangerous indulgence in the pursuit of abstract, theoretical knowledge?

For self-driving cars and other applications developed using AI, you need what’s known as “deep learning”, the core concepts of which emerged in the ’50s. Deep learning involves training models whose structure is loosely inspired by patterns seen in the human brain. This, in turn, requires a large amount of compute power, as afforded by TPUs (tensor processing units) or GPUs (graphics processing units) running for lengthy periods. However, the cost of this compute power is out of reach of most AI developers, who largely rent it from cloud computing platforms such as AWS or Azure. What is to be done?
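As a rough, hypothetical illustration of what that training actually involves, the sketch below runs a gradient-descent loop for a tiny two-layer network in NumPy; the dataset, layer sizes and learning rate are made up for the example. Real deep-learning workloads repeat this same forward/backward/update cycle over millions or billions of parameters, which is what all that GPU/TPU time pays for.

```python
import numpy as np

# Toy data: 256 samples with 20 features and a simple binary label (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# A tiny two-layer network; real deep-learning models have millions or billions of weights.
W1 = rng.normal(scale=0.1, size=(20, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))
lr = 0.1

for step in range(500):
    # Forward pass: compute predictions and the cross-entropy loss.
    h = np.tanh(X @ W1)                      # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid output
    p = np.clip(p, 1e-7, 1 - 1e-7)
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: gradients of the loss with respect to each weight matrix.
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits
    dh = (dlogits @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ dh

    # Gradient-descent update.
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final training loss: {loss:.4f}")
```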

Well, one approach is the one taken by U.K. startup Gensyn. It has taken the distributed-computing idea of older projects such as SETI@home and the COVID-19-focused Folding@home and applied it to AI developers’ demand for deep-learning compute. The result is a way to get high-performance compute power from a distributed network of computers.

Gensyn has now raised a $6.5 million seed round led by Eden Block, a web3 VC. Also participating in the round are Galaxy Digital, Maven 11, Coinfund, Hypersphere, Zee Prime and founders from several blockchain protocols. This adds to a previously unannounced pre-seed investment of $1.1 million in 2021 — led by 7percent Ventures and Counterview Capital, with participation from Entrepreneur First and id4 Ventures.

Four-legged robots are nothing novel — Boston Dynamics’ Spot has been making the rounds for some time, as have countless alternative open source designs. But researchers at MIT claim that theirs has broken the record for the fastest run by a legged robot. Working out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the team says it developed a system that allows the MIT-designed Mini Cheetah to learn to run by trial and error in simulation.

While the speedy Mini Cheetah has limited direct applications in the enterprise, the researchers believe that their technique could be used to improve the capabilities of other robotics systems — including those used in factories to assemble products before they’re shipped to customers. It’s timely work as the pandemic accelerates the adoption of autonomous robots in industry. According to an Automation World survey, 44.9% of the assembly and manufacturing facilities that currently use robots consider the robots to be an integral part of their operations.

Today’s cutting-edge robots are “taught” to perform tasks through reinforcement learning, a type of machine learning technique that enables robots to learn by trial and error using feedback from their own actions and experiences. When a robot performs a “right” action — i.e., an action that’ll lead it toward a desired goal, like stowing an object on a shelf — it receives a “reward.” When it makes a mistake, the robot either doesn’t receive a reward or is “punished” by losing a previous reward. Over time, the robot discovers ways to maximize its reward and perform actions that achieve the sought-after goal.
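As a minimal sketch of that reward loop (not MIT’s actual training setup, which relies on large-scale simulation and far more sophisticated learning algorithms), here is a toy tabular Q-learning agent on a made-up one-dimensional “reach the shelf” task: it only receives a reward for reaching the goal state, and over many trials it learns which action maximizes that reward from every position.

```python
import random

N_STATES = 6          # positions 0..5; the "shelf" (goal) is at state 5 (toy example)
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's current estimate of future reward for each (state, action).
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def best_action(state):
    # Pick the highest-valued action, breaking ties at random.
    top = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == top])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: explore occasionally, otherwise exploit what is known.
        action = random.choice(ACTIONS) if random.random() < EPSILON else best_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)

        # Reward only when the goal is reached; nothing otherwise.
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best follow-up action.
        target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# After training, the greedy policy should step right (+1) from every position.
print([best_action(s) for s in range(N_STATES - 1)])
```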

Two years after its inception, Volastra Therapeutics is partnering with Bristol Myers Squibb on up to three oncology targets focused on chromosomal instability, a deal that could exceed $1.1 billion should the assets hit their milestones. Ever since last year’s annual American Association for Cancer Research (AACR) meeting, Volastra’s phone has been “ringing off the hook,” according to CEO Charles Hugh-Jones, M.D.

The researchers simulated the molecules H4 and molecular nitrogen, as well as solid diamond. These systems involved as many as 120 orbitals, the patterns of electron density formed in atoms or molecules by one or more electrons. These are the largest chemistry simulations performed to date with the help of quantum computers.

A classical computer actually handles most of this fermionic quantum Monte Carlo simulation. The quantum computer steps in during the last, most computationally complex step—calculating the differences between the estimates of the ground state made by the quantum computer and the classical computer.

The prior record for chemical simulations with quantum computing employed 12 qubits and a kind of hybrid algorithm known as a variational quantum eigensolver (VQE). However, VQEs possess a number of limitations compared with this new hybrid approach. For example, when one wants a very precise answer from a VQE, even a small amount of noise in the quantum circuitry “can cause enough of an error in our estimate of the energy or other properties that’s too large,” says study coauthor William Huggins, a quantum physicist at Google Quantum AI in Mountain View, Calif.
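For a rough sense of what a variational quantum eigensolver is doing at its core, here is a purely classical toy, not Google’s algorithm or hardware: it minimizes the expected energy of a one-parameter trial state for a made-up single-qubit Hamiltonian, and the optional “shots” term mimics the statistical measurement noise that makes precise VQE estimates so difficult, as Huggins describes.

```python
import numpy as np

# Toy single-qubit Hamiltonian H = 0.5*Z + 0.3*X (arbitrary coefficients).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """Trial state |psi(theta)> = Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta, shots=None):
    """Expectation value <psi|H|psi>; with `shots`, add sampling noise
    to mimic the statistical error of a real quantum measurement."""
    psi = ansatz(theta)
    e = psi @ H @ psi
    if shots is not None:
        e += np.random.normal(scale=1 / np.sqrt(shots))
    return e

# Crude grid search over the single variational parameter, using noisy estimates.
thetas = np.linspace(0, 2 * np.pi, 200)
best = min(thetas, key=lambda t: energy(t, shots=1000))

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy for comparison
print(f"VQE-style estimate: {energy(best):.4f}  exact: {exact:.4f}")
```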