Understanding the four big compute pillars

Microsoft’s big compute solutions have enabled manufacturers to digitally recreate the physical world

Toby Ingleton

This article first appeared in the Winter 2017 issue of The Record.

Technology is a driving force behind the fourth industrial revolution in manufacturing, with digital innovation now a vital element of business success. New digital technologies are reshaping how products and solutions are created and beginning to reinvent the relationship between manufacturer and customer.

These technologies are also enabling long-established manufacturers to reinvent their processes – be it by visualising real-time factory or product performance from anywhere, simulating or replicating physical products with digital twins to optimise performance, or incorporating feedback from internet of things (IoT) enabled products in the field.

Microsoft’s portfolio of ‘big compute’ solutions is allowing manufacturers to shift towards digital and enabling them to reach new levels of efficiency and effectiveness. According to John Reed, Microsoft’s director of manufacturing industry solutions and business development, the company has established four big compute pillars that address the broad scope of these solutions. Reed explains: “We’ve identified four areas of focus – cloud workstation; cloud rendering; high performance computing simulation and analysis; and deep learning and artificial intelligence (AI) training.”

Reed says the cloud workstation area is primarily focused on providing design and engineering teams with virtual workstations. These can be easily accessed by team members via standard PCs or devices.

“This allows engineers or designers to work anywhere and have a full set of design services available,” he explains. “It also enables scenarios around distributed teams, meaning workers can be remote where in the past they may have had to be in an office to carry out their work. That opens up an environment where collaboration across geographies, teams and manufacturers and suppliers is easier.”

Cloud workstation will help foster a culture of collaboration in manufacturing, says Reed.

“We see cloud workstation as making life easier for engineers, but also helping to improve collaboration both inside and outside the organisation,” he says.

A number of different types of computing environments fall within the deep learning and AI training pillar, Reed says.

“In certain environments, such as autonomous vehicles, data from sensors, assets and devices needs to be brought together,” he says. “There needs to be a training and learning exercise that happens in order for those devices and their computing solutions to work effectively. This translates to enhanced cloud computing capabilities that might be used by an autonomous vehicle team, for example, or a manufacturer that might be working on an autonomous vehicle platform.”
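
To give a flavour of the training and learning exercise Reed describes, the sketch below fits a small neural network to synthetic sensor readings using PyTorch. The data, model and hyperparameters are illustrative placeholders only, not part of any specific Microsoft offering.

# Minimal sketch: training a small model on synthetic "sensor" data.
# Shapes, model and hyperparameters are illustrative placeholders only.
import torch
import torch.nn as nn

# Synthetic stand-in for fused sensor readings and a target signal
sensor_data = torch.randn(1024, 16)   # 1024 samples, 16 sensor channels
targets = torch.randn(1024, 1)        # e.g. a perception or control target

model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# The training loop is the compute-hungry part that cloud infrastructure
# scales out; here it runs on a single machine for illustration.
for epoch in range(100):
    optimiser.zero_grad()
    predictions = model(sensor_data)
    loss = loss_fn(predictions, targets)
    loss.backward()
    optimiser.step()

Scaled to real sensor datasets and far larger models, this kind of loop is what gets run across GPU-equipped cloud nodes rather than a single workstation – the “enhanced cloud computing capabilities” Reed refers to.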

Reed says the territory of big compute covers a wide range of use cases, and these capabilities often form part of other solutions.

“Going back to the cloud workstation case, we might be working with not only a manufacturer or a tier one supplier, but also our partners who have computer aided design and engineering systems,” he says. “These would often be a primary potential user of cloud workstation as part of what they offer their partners.”

In the case of cloud rendering, Reed sees engineering teams at manufacturers and tier one suppliers under increasing time pressure. He believes there’s an opportunity to improve time to market and product quality by allowing for many iterations of prototypes or designs.

“This might be the design itself, or a simulation where that design is overlaid with other parts of a manufacturing plan to see how the finished product would work or how various subsystems might work together,” he says. “Cloud rendering allows for a manufacturer to get a number of different projects serviced out of the cloud environment. If they were working inside the constraints of their own computing environment, that can often create bottlenecks for resources.”
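
As a rough illustration of how such work can be serviced out of a cloud environment, the sketch below farms independent render frames out to an Azure Batch pool using the azure-batch Python SDK. The account details, VM size, pool size and render command are placeholders, and exact parameter names can vary between SDK versions.

# Minimal sketch: submitting render frames to an Azure Batch pool.
# Account details, sizes and the render command are placeholders;
# parameter names may differ slightly between azure-batch SDK versions.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

credentials = SharedKeyCredentials("mybatchaccount", "account-key")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.region.batch.azure.com")

# A pool of Linux nodes that exists only for the duration of the render work
client.pool.add(batchmodels.PoolAddParameter(
    id="render-pool",
    vm_size="STANDARD_D2_V2",
    target_dedicated_nodes=20,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="Canonical", offer="UbuntuServer",
            sku="16.04-LTS", version="latest"),
        node_agent_sku_id="batch.node.ubuntu 16.04")))

# One job, with one task per frame; Batch schedules tasks across the pool
client.job.add(batchmodels.JobAddParameter(
    id="render-job",
    pool_info=batchmodels.PoolInformation(pool_id="render-pool")))

tasks = [batchmodels.TaskAddParameter(
             id="frame-{0}".format(frame),
             command_line="/bin/bash -c 'render_frame --frame {0}'".format(frame))
         for frame in range(360)]
client.task.add_collection("render-job", tasks)

Because the pool can be resized or deleted once the frames are finished, the manufacturer pays only while the rendering is actually running, rather than maintaining that capacity in-house.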

The final pillar of big compute is simulation and analysis. Reed explains that the design and manufacture of both digital and physical components typically involves multiple design iterations. As these designs increase in complexity, the ability to iterate quickly becomes crucial to time to market and quality.

“As a complement to cloud rendering and cloud-based design workstations, we uniquely offer a variety of simulation and analysis workloads backed by Remote Direct Memory Access and InfiniBand networking for tightly-coupled simulations,” Reed says. “This primarily creates the opportunity for engineers to be unshackled by constraints typical of on-premise computing resources, and means that not only does queue time drop to zero, engineers can run design of experiment jobs at a massive scale and solve for specific engineering requirements. Also, Microsoft Azure has a more aggressive hardware refresh cycle versus on-premise implementations. Running jobs on the latest hardware in the cloud not only reduces run time, it helps reduce software costs. So, unlike a fixed manufacturing on-premise environment, engineers have the opportunity to run and scale jobs as business requirements dictate, but tap Microsoft Azure compute resource on-demand and pay only for what they use.”
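
The tightly-coupled workloads Reed refers to are typically MPI jobs in which neighbouring nodes exchange boundary data at every step, which is why low-latency RDMA and InfiniBand networking matters. The sketch below shows that pattern with mpi4py and a toy one-dimensional smoothing update; the physics and problem sizes are illustrative only, and such a script would be launched across nodes with something along the lines of mpirun -n 16 python simulate.py.

# Minimal sketch of a tightly-coupled simulation: each MPI rank updates its
# slice of a 1D field and swaps halo cells with its neighbours every step.
# This neighbour exchange is what low-latency RDMA/InfiniBand speeds up.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.random.rand(1000000)   # this rank's slice of the domain
left = (rank - 1) % size          # periodic neighbours, for simplicity
right = (rank + 1) % size

for step in range(100):
    # Exchange boundary values with both neighbours
    halo_right = comm.sendrecv(local[0], dest=left, source=right)
    halo_left = comm.sendrecv(local[-1], dest=right, source=left)

    # Toy diffusion-style update using the received halo values
    padded = np.concatenate(([halo_left], local, [halo_right]))
    local = 0.5 * local + 0.25 * (padded[:-2] + padded[2:])

# A global reduction: another latency-sensitive collective operation
total = comm.allreduce(local.sum(), op=MPI.SUM)
if rank == 0:
    print("global sum:", total)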

Ultimately, Microsoft’s range of big compute solutions is giving modern manufacturers the opportunity to do more, and to do it more effectively.
