This morning AMD announced its new hUMA, the heterogeneous Uniform Memory Access. In short, it is an intelligent computing architecture that lets the CPU, the GPU and any other processor access the same memory, which should bring improvements to almost every aspect of the platform.
According to AMD, hUMA enables any processor, CPU and GPU of course included, to work in harmony from a single piece of silicon on a single pool of memory, and to seamlessly move tasks to the best-suited processing unit. Unlike traditional CPU/GPU memory sharing, this means that a pointer to an application's data or task can simply be passed from the CPU to the GPU, depending on which processor would handle it faster, and both processors can then read the results without any copying between them.
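To make the contrast concrete, here is a minimal C sketch of the two models. The gpu_scale_kernel() and the two model functions are hypothetical stand-ins simulated on the CPU purely for illustration, not a real driver API; the point is only how many copies each model needs.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 1024

/* Hypothetical stand-in for a GPU kernel, simulated on the CPU. */
static void gpu_scale_kernel(float *data, size_t n) {
    for (size_t i = 0; i < n; i++)
        data[i] *= 2.0f;
}

/* Traditional model: the GPU owns separate memory, so the data must
   be staged into a device buffer and the result copied back. */
static void traditional_model(float *data) {
    float *device_buf = malloc(N * sizeof(float)); /* stand-in for a device allocation */
    memcpy(device_buf, data, N * sizeof(float));   /* host -> device copy */
    gpu_scale_kernel(device_buf, N);               /* kernel runs on the copy */
    memcpy(data, device_buf, N * sizeof(float));   /* device -> host copy */
    free(device_buf);
}

/* hUMA model: CPU and GPU share one address space, so the same
   pointer is handed to the GPU and the result is immediately
   visible to the CPU, with no copies in either direction. */
static void huma_model(float *data) {
    gpu_scale_kernel(data, N);
}

int main(void) {
    float *data = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) data[i] = (float)i;
    traditional_model(data); /* two copies across the bus */
    huma_model(data);        /* zero copies: just a pointer */
    printf("data[1] = %f\n", data[1]); /* 1 * 2 * 2 = 4 */
    free(data);
    return 0;
}
```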
By moving the GPU and the CPU onto a single die, AMD gives the GPU direct access to CPU memory in the same address space, and since that memory is bi-directionally coherent, any update made by one processing element is seen by all the others. The next feature of hUMA is pageable memory: the GPU can take page faults and is no longer restricted to page-locked memory. The key feature, though, is access to the entire memory space, as CPU and GPU processes can dynamically allocate as much memory as they need from all of the memory available.
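For readers who want a concrete handle on what coherent, pageable, whole-address-space memory means in code, the snippet below is a sketch against OpenCL 2.0's shared virtual memory, one real API that later exposed this kind of memory model. It is an illustration of the model, not AMD's own hUMA interface, and it assumes a device that reports fine-grain system SVM support.

```c
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    /* Query the device's shared-virtual-memory capabilities. */
    cl_device_svm_capabilities svm;
    clGetDeviceInfo(dev, CL_DEVICE_SVM_CAPABILITIES, sizeof(svm), &svm, NULL);

    if (svm & CL_DEVICE_SVM_FINE_GRAIN_SYSTEM) {
        /* "Pageable memory" and "entire memory space" in practice:
           a plain malloc'd pointer can be handed straight to a GPU
           kernel via clSetKernelArgSVMPointer(); the GPU resolves
           page faults itself, so the pages need not be pinned, and
           coherence makes writes by either side visible to the other
           (kernel setup omitted for brevity). */
        float *data = malloc(1024 * sizeof(float));
        printf("GPU can dereference any host pointer, e.g. %p\n", (void *)data);
        free(data);
    } else {
        printf("This device does not expose fine-grain system SVM.\n");
    }
    return 0;
}
```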
AMD's hUMA also promises other improvements: it will be much easier for programmers, does not need any special APIs, can move CPU multi-core algorithms to the GPU without recoding, and generally lowers development cost, all while offering more performance at less power.
AMD's hUMA is expected to debut with Kaveri APUs, scheduled to appear in the next few months if all goes well for AMD.
Source: AMD.com.