from HPC to cloud computing

HPC

Ansys HPC, OpenFOAM on Amazon EC2, Rescale: these are some examples of CAE applications currently running on HPC clouds.

most CAE simulations, in a nutshell, boil down to solving large sparse matrix problems, either optimization or differential equations. they all lean on well-known linear-equation solver libraries, e.g. PETSc, libMesh, Intel Math Kernel Library, etc., which internally are implemented with the message passing interface (MPI), shared memory, or similar techniques.
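to make this concrete, here is a minimal sketch of the kind of sparse system Ax = b such a library is asked to handle. scipy stands in for PETSc/MKL, and the matrix, size, and right-hand side are made up for illustration:

```python
# toy stand-in for a CAE solve step: assemble a sparse matrix A, solve A x = b.
# real codes use MPI-distributed matrices and iterative solvers (CG, GMRES).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000                                    # number of unknowns (toy size)
# 1-D Laplacian stencil as a stand-in for a stiffness matrix
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)                                 # load vector
x = spla.spsolve(A, b)                         # direct sparse solve
print(x[:3])
```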

why is MPI a must in these applications? because the problem of interest is itself huge, far more computationally heavy than desktop software like Office or WeChat.

as engineers want more precise results, the dimension of the problem of interest keeps growing. e.g. a typical static stress analysis of a vehicle engine has about 6 million elements, each with 6 ~ 12 vertexes, and each vertex has 6 DoF, which gives a stiffness matrix of roughly 200 million x 200 million. no single PC can store that much data, let alone do the matrix computation.
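a quick back-of-envelope check of that claim (taking 6 vertexes per element, ignoring node sharing, and assuming ~100 non-zeros per row, all of which are illustrative guesses):

```python
elements = 6_000_000           # finite elements in the engine model
vertexes = 6                   # 6 ~ 12 vertexes per element, take the low end
dof      = 6                   # degrees of freedom per vertex
unknowns = elements * vertexes * dof
print(f"{unknowns:,} unknowns")          # ~216,000,000 -> a ~200M x 200M matrix

# even stored sparsely, say ~100 non-zeros per row as 8-byte doubles:
sparse_bytes = unknowns * 100 * 8
print(f"{sparse_bytes / 1e12:.2f} TB")   # ~0.17 TB, far beyond a single PC's RAM
```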

so all these problems have to run on supercomputers, or high-performance computers (HPC), which have far more memory and CPU cores. beyond large memory and many CPU cores, they also need a fast network: each CPU does calculations really fast, at 2 ~ 4 GHz, and if the system can't feed it data at that pace the CPUs sit hungry. that is the third feature of HPC: a Gb-level high-speed internal network.
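a minimal sketch of that split-the-work-and-talk-over-the-network pattern, using the mpi4py binding (the file name, process count, and array size below are made up, and the example assumes the work divides evenly among ranks):

```python
# save as e.g. split_sum.py and run with: mpiexec -n 4 python split_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # this process's id
size = comm.Get_size()            # total number of processes

n = 12_000_000                    # total amount of work, split evenly here
chunk = n // size
local = np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)

local_sum = local.sum()                          # compute on the local piece only
total = comm.allreduce(local_sum, op=MPI.SUM)    # the network traffic happens here

if rank == 0:
    print(f"{size} ranks computed total = {total:.4e}")
```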

as NUMA/multi-core CPU architectures are now the norm, getting the best performance out of these chips is itself a hot topic.

cloud

if HPC is taken as centralized memory and CPU cores (the Cathedral), then cloud is about distributed memory and CPU cores (the Bazaar). cloud is more like the Internet: it connects low-power nodes anywhere, yet in total has great power. applications in the cloud must be easy to decompose into small pieces, so each low-power node can handle a little bit. when it comes to cloud computing, I think of Hadoop, OpenStack, and Docker.

hadoop is about analyzing massive data, and it is heavily used in e-commerce, social networks, and games. docker is a way to package a small application so that each small image can run smoothly on one node; this is what we currently call a micro-service, which is exactly as the name says, MICRO. OpenStack takes care of the virtualization layer over physical memory and CPU resources, and is mostly used in public clouds or virtual office environments.
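a toy sketch of the map/reduce idea behind Hadoop, in plain Python (the shards and data are made up; a real cluster would spread the shards over many machines):

```python
from collections import Counter
from functools import reduce

# pretend each string is a data shard sitting on a different node
shards = [
    "user clicked ad",
    "user bought item",
    "user clicked item",
]

def map_shard(text):              # the "map" step: runs independently per shard
    return Counter(text.split())

def merge(a, b):                  # the "reduce" step: merges partial results
    return a + b

total = reduce(merge, (map_shard(s) for s in shards), Counter())
print(total.most_common(3))       # e.g. [('user', 3), ('clicked', 2), ('item', 2)]
```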

A or B

cloud is more about DevOps: each piece of work is not that different from what runs on a single PC, but how to deploy and manage that piece of work across a large cloud is the key, so development and operations come together. compared to HPC, both development and operations require dedicated professionals.

cloud computing has extended to edge computing, which is closer to normal daily life and carries more business value; HPC people, meanwhile, are more like scientists, doing weather forecasting, rocket science, molecular dynamics, etc.

finally, the tech itself, cloud or HPC, has nothing to do with the business world.
