Solving the power consumption problem while delivering the performance modern workloads demand

Parallel Arm Node Designed Architecture (PANDA)

A Revolutionary Approach to Server Design

Bamboo Systems’ patented Parallel Arm Node Designed Architecture (PANDA) was designed to scale out and deliver the high-throughput computing platform required by cloud-targeted applications and modern, highly parallel workloads while significantly reducing server energy consumption and cost. Driven by high-performance, multi-core, low-power processors, the Bamboo system architecture maximizes system throughput while consuming less power and therefore generating less heat. This in turn enables industry-leading compute and thermal density, delivering more performance in less rack space than traditional servers. Bamboo Arm servers are ideal for increasingly popular scale-out, microservices-based applications and are certified for deployment anywhere from the edge to a hyperscale data center.

PANDA

Bamboo Systems’ patented Arm architecture has been created to efficiently deploy compute, storage, and networking in a more balanced, throughput-optimized manner to address the needs of today’s scale-out applications and microservices-based workloads.

  • Delivering the required performance while reducing energy consumption by up to 75% in a fifth of the rack space (a back-of-envelope sketch follows this list) through:
    • Using inherently more power-efficient embedded system design methodologies
    • Aggregating the performance of multiple NXP Layerscape LX2160A processors, which are designed for power efficiency rather than peak single-node performance
    • Deploying clusters of servers in a single chassis to share peripherals and remove the duplicated costs of clustering multiple independent servers
    • Providing every processor with direct access to the resources it needs to keep applications running power-efficiently
    • Permitting individual processors to be powered down when utilization drops while maintaining an active idle state
  • Offloading I/O processing and management from application processing within the same chassis, which increases both security and application performance.
  • Scaling the number of memory channels and the storage bandwidth as total core count increases, reducing bottlenecks and contention while enabling faster data processing.
  • Attaching NVMe storage to every application processor, reducing the need for the large amounts of DRAM otherwise required to cache slow network-based storage and increasing overall performance while reducing power consumption and cost.
  • Sharing high-bandwidth network ports across multiple servers without contention, reducing uplink switch-port and cabling costs while providing network isolation.
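
As a rough illustration of how the energy and density claims above combine, the sketch below compares a conventional scale-out cluster with a PANDA-style deployment of comparable aggregate throughput. Every figure in it (server counts, wattages, rack units) is an illustrative assumption, not a measured or published specification.

```python
# Back-of-envelope comparison of a conventional scale-out cluster with a
# PANDA-style deployment delivering comparable aggregate throughput.
# Every figure here is an illustrative assumption, not a measured or
# published specification.

def annual_energy_kwh(total_watts: float, hours: float = 8760) -> float:
    """Energy drawn over one year at a constant power draw, in kWh."""
    return total_watts * hours / 1000


# Hypothetical baseline: 20 conventional 1U servers at ~350 W each.
baseline_servers, baseline_watts_each, baseline_rack_u = 20, 350.0, 20

# Hypothetical PANDA-style equivalent: 4 x 1U chassis, each hosting eight
# server nodes, at ~440 W per populated chassis (nodes plus shared peripherals).
panda_chassis, panda_watts_each, panda_rack_u = 4, 440.0, 4

baseline_kwh = annual_energy_kwh(baseline_servers * baseline_watts_each)
panda_kwh = annual_energy_kwh(panda_chassis * panda_watts_each)

print(f"Annual energy:  {baseline_kwh:,.0f} kWh vs {panda_kwh:,.0f} kWh")
print(f"Energy saving:  {1 - panda_kwh / baseline_kwh:.0%}")
print(f"Rack space:     {panda_rack_u}U vs {baseline_rack_u}U "
      f"({panda_rack_u / baseline_rack_u:.0%} of the baseline)")
```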

How the History of the Microprocessor Dictates the Future of the Data Center

Listen to John Goodacre, Founder and Chief Scientific Officer, discuss the evolution and future of computer system design in this interview with Sebastian Moss of Datacenter Dynamics.

[Diagram note: Ethernet switch capable of network virtualization and redundant path configurations]

A Bamboo system is deployed as a cluster of four servers with associated switching in a single blade. A B1000N can hold up to two blades, for a total of eight servers in 1U of rack space.

Resources are shared across those eight servers, including the power supplies and the system components that boot and manage the servers. Full network bandwidth to every server delivers high-throughput computing.
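
To make that density concrete, here is a minimal Python model of the B1000N packaging described above: four server nodes per blade, up to two blades per 1U chassis. The per-node core count and the 42U rack height are assumptions used only to show how density scales.

```python
from dataclasses import dataclass

# Minimal model of the B1000N packaging described above. The per-node core
# count and the rack height are illustrative assumptions.

CORES_PER_NODE = 16      # assumed core count per server node
NODES_PER_BLADE = 4      # four servers plus switching per blade
BLADES_PER_CHASSIS = 2   # a B1000N holds up to two blades in 1U
USABLE_RACK_UNITS = 42   # assumed standard full-height rack


@dataclass
class Chassis:
    blades: int = BLADES_PER_CHASSIS

    @property
    def nodes(self) -> int:
        return self.blades * NODES_PER_BLADE

    @property
    def cores(self) -> int:
        return self.nodes * CORES_PER_NODE


rack = [Chassis() for _ in range(USABLE_RACK_UNITS)]  # one 1U chassis per U
print(f"Server nodes per 1U chassis: {rack[0].nodes}")
print(f"Server nodes per rack:       {sum(c.nodes for c in rack)}")
print(f"Cores per rack (assumed):    {sum(c.cores for c in rack)}")
```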

Why It Matters in Modern Software Design

PANDA-based servers are designed for microservices-based software architectures rather than monolithic software design and are able to run Kubernetes right out of the box.

The PANDA architecture gives each application its own dedicated resources for better throughput, and improves security because network I/O runs on a separate processor.

Parallel throughput is further improved with locally attached storage.
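
Because the platform runs Kubernetes out of the box, the main practical step for deploying existing microservices is ensuring container images are built for arm64 and that workloads are scheduled onto Arm nodes. The sketch below assembles a standard Deployment manifest that pins pods to Arm nodes via the well-known kubernetes.io/arch node label; the service name, image, and replica count are placeholders.

```python
import json

# Sketch of a Kubernetes Deployment pinned to Arm nodes via the standard
# kubernetes.io/arch node label. The service name, image, and replica count
# are placeholders; any multi-arch (arm64) image would work.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-service"},
    "spec": {
        "replicas": 8,  # scale-out: many small replicas spread across nodes
        "selector": {"matchLabels": {"app": "example-service"}},
        "template": {
            "metadata": {"labels": {"app": "example-service"}},
            "spec": {
                # Schedule only onto arm64 nodes in a mixed-architecture cluster.
                "nodeSelector": {"kubernetes.io/arch": "arm64"},
                "containers": [
                    {
                        "name": "example-service",
                        "image": "example.registry/example-service:latest",
                        "ports": [{"containerPort": 8080}],
                    }
                ],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))  # pipe into `kubectl apply -f -`
```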

Why It Matters to the Planet

Hyperscalers have been driving towards net-zero data centers, but they have concentrated on treating the symptoms of the problem: how to use renewable energy, how to reduce the electrical power consumed by data center infrastructure, and how to re-use the heat generated. They do not, however, address the underlying issue of the heat generated in the first place.

PANDA-based servers are designed specifically to solve this problem: the generation of heat. We use between 80% and 95% less electricity for a given workload, thereby generating less heat.

Imagine how much further those data center infrastructure strategies will go when used in combination with a PANDA-based server.