HandWiki. (2022, November 01). Asperitas Microdatacenter. In Encyclopedia.
Asperitas Microdatacenter

A microdatacenter is a small, self-contained data center consisting of computing, storage, networking, power and cooling. Micro data centers typically employ water cooling to achieve compactness, low component count, low cost and high energy efficiency. Their small size allows decentralised deployment in places where traditional data centers cannot go, for instance edge computing for the Internet of things.

Keywords: water cooling; edge computing; microdatacenter

1. Distributed Micro Edge Data Centers

In July 2017, the Dutch company Asperitas presented a distributed micro edge data center model[1] at the Datacenter Transformation[2] educational event in Manchester.

1.1. Heat Reuse

The model is focused on the transformation of energy[3] into usable heat and on flexible deployment where heat is required at a larger scale, with constant heat demand. The micro data centers ideally require no overhead installations for cooling or no-break systems. The cooling of the servers is facilitated by sourcing cold water from the heat user, thus creating a synergy between different industries. Especially with the adoption of temperature chaining, a cascade of thermal energy, high reusable temperatures can be achieved. Due to the minimised overhead, these nodes can be deployed in large quantities near or within network hubs for urban or office areas, or even as part of a non-data center facility which can directly benefit from the reusable heat. This allows for fast network access and simple energy reuse.

The micro edge nodes (10–100 kW) function as forward locations of the core data centers. The edge nodes provide services like data processing for IoT systems, data caching for digital content (YouTube, Netflix, etc.) and fast access to cloud services. The edge nodes are continuously replicated with the core datacenters and several other strategic edge nodes. This provides constant availability through geo-redundancy.[4]
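The replication scheme described above can be sketched in a few lines. This is a hypothetical illustration, not part of any Asperitas product: node names and the fan-out policy are invented for the example.

```python
# Toy model of geo-redundant replication: an edge node writes locally,
# then fans the data out to its core data center and several strategic
# peer edge nodes, so the data stays available if the edge node fails.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, value):
        self.store[key] = value

def replicate(edge, core, strategic_peers, key, value):
    """Write locally first, then replicate to the core and peers."""
    edge.write(key, value)
    for target in [core, *strategic_peers]:
        target.write(key, value)

edge = Node("edge-amsterdam")           # illustrative names
core = Node("core-nl")
peers = [Node("edge-rotterdam"), Node("edge-utrecht")]

replicate(edge, core, peers, "sensor/42", b"reading")
# Every replica now holds the value, so any site can serve reads.
```

In practice an edge node would replicate asynchronously and track acknowledgements, but the availability argument is the same: as long as one replica survives, the data remains reachable.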

By making information available in multiple locations at the same time, interactions with that information can easily be shifted between different physical facilities. The capacity of overhead installations can be minimised to cover only normal operation and a shutdown phase in case of emergency, while active data processes are moved to a different facility.

The micro edge nodes are small locations with minimised overhead installations. They have simplified configurations consisting of a small data floor, a switchboard and energy delivery, often without redundancy in power or cooling infrastructure (Immersed Computing® provides a significant thermal buffer), but with sufficient sustainable Li-ion battery power (e.g. Tesla Powerpack) to allow for replication and shutdown. The facilities are based on Immersed Computing® and additional liquid technologies where required. This allows these facilities to become enclosed air environments, which prevents environmental impact such as noise or exterior installations. The liquid infrastructure is cooled with whatever external cooling strategy is available on site.

1.2. Edge Management

The management of the distributed datacenter model is possible through the emergence of software platforms providing ubiquitous management of data, network and computation capacities. These kinds of platforms already exist for traditional centralised infrastructure, but new challenges emerge from this hybrid and distributed architecture. Closer to the end users, edge nodes in urban areas have new constraints in terms of energy consumption and heat production. Containerisation, through technologies like Docker[5] or Singularity,[6] opens great opportunities to make applications more scalable, flexible and less dependent on the infrastructure. Many frameworks have appeared recently (Swarm, Kubernetes[7]) to manage decentralised clusters. Some of them also integrate energy and heat management by design, like Q.ware,[8] developed by Qarnot computing.[9] This positive dynamic in the software industry is an essential pillar for enabling core datacenters and edge nodes within an integrated architecture.
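The idea of heat-aware workload placement can be illustrated with a toy scheduler. This is only a sketch of the general principle, not the actual Q.ware platform; the node attributes and placement rule are assumptions made for the example.

```python
# Toy heat-aware scheduler: place a containerised job on the edge node
# that currently has the highest heat demand and enough free capacity,
# so the job's waste heat goes where it is most useful.

def place_job(nodes, job_kw):
    """nodes: list of dicts with 'name', 'free_kw' and 'heat_demand_kw'.
    Returns the chosen node's name, or None if no node has capacity."""
    candidates = [n for n in nodes if n["free_kw"] >= job_kw]
    if not candidates:
        return None
    best = max(candidates, key=lambda n: n["heat_demand_kw"])
    best["free_kw"] -= job_kw          # reserve capacity on the winner
    return best["name"]

nodes = [
    {"name": "edge-pool",   "free_kw": 8,  "heat_demand_kw": 12},
    {"name": "edge-office", "free_kw": 20, "heat_demand_kw": 5},
]
print(place_job(nodes, 6))  # "edge-pool": highest heat demand with room
```

A real platform would also weigh latency, data locality and grid load, but even this single criterion shows how energy reuse can be made a first-class scheduling input.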

1.3. Network Optimization

The use of core datacenters and edge nodes allows for network optimisation by preventing long-distance transport of raw (large) data and allowing the processing of data close to the source. By bringing data that is in high demand closer to the end user (caching), high-volume data transmission across long-distance backbones is greatly reduced, as is latency, which is a critical factor for delivering a good end user experience.
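The caching behaviour described here can be sketched with a minimal LRU cache at the edge that only fetches from the distant core on a miss. All names and the eviction policy are illustrative assumptions, not a description of any specific product.

```python
# Minimal edge cache: repeat requests are answered locally; only cache
# misses trigger a long-haul fetch from the core, cutting backbone
# traffic and latency for popular content.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, fetch_from_core, capacity=2):
        self.fetch = fetch_from_core
        self.capacity = capacity
        self.cache = OrderedDict()
        self.core_fetches = 0            # counts long-distance transfers

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh LRU position
            return self.cache[key]
        self.core_fetches += 1           # miss: fetch from the core
        value = self.fetch(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

cache = EdgeCache(lambda k: f"content:{k}")
for key in ["clip-a", "clip-b", "clip-a", "clip-a"]:
    cache.get(key)
print(cache.core_fetches)  # 2: only the first request per clip goes upstream
```

Four requests cause only two backbone transfers; as the hit rate of popular content rises, the saving grows accordingly.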

1.4. Energy Grid Balancing

One of the limitations for datacenter growth today is the capacity of the existing power grid. In most areas of the world, the power grid was designed and implemented long before data centers even existed. There are numerous areas where the power grid will reach its maximum capacity within the next 3–5 years. The traditional datacenter approach causes high loads on very specific parts of the grid. By applying the distributed data center model, the load on the power grid is more balanced and the impact of expansion is greatly reduced.

1.5. Energy Production

By focusing on the reuse of energy, each edge node rejects its thermal energy directly into a reusable heat infrastructure (district heating/heat storage), building heating (hospitals/industry), water heating (hospitals/zoos) or other heat users. The core data centers become large suppliers of district heating networks or will be connected to 24/7 industries which require constant heating within a large scale industrial process.

1.6. Cooling Strategies in the Edge

There are numerous edge cooling strategies which are optimal for the scale of micro edge nodes. All of these strategies accept thermal rejection 24/7, thus completely eliminating the need for dedicated cooling installations.

Here are a few commonly available cooling strategies in urban areas:

  • Spas and swimming facilities with multiple pools have a constant demand for heating due to constant convection (near 100% reuse).
  • Hospitals and hotels equipped with warm water loops which require constant 24/7 thermal input (near 100% reuse).
  • Urban fish and vegetable farms using aquaponics (near 100% reuse).
  • Aquifers for energy storage; these can normally be supplied with thermal energy 24/7 (75% reuse).
  • Water mains can provide distributed energy savings (29% reuse).
  • Canals, lakes and sewage water can be used for heat rejection when reuse is not possible (0% reuse).
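Applying the reuse fractions listed above to a single 22 kW enclosure (the AIC24 figure cited in this entry) gives a quick sense of the recoverable heat per sink. The arithmetic below is a back-of-the-envelope illustration, not a vendor specification.

```python
# Reusable heat per cooling strategy for one 22 kW micro edge enclosure,
# using the reuse fractions from the list above.

reuse_fraction = {
    "pools / hot water loops / aquaponics": 1.00,
    "aquifer energy storage": 0.75,
    "water mains": 0.29,
    "canals, lakes, sewage (rejection only)": 0.00,
}
node_kw = 22  # maximum heat output of one enclosure

for sink, frac in reuse_fraction.items():
    print(f"{sink}: {node_kw * frac:.1f} kW reusable")
```

For example, an aquifer-coupled node makes 16.5 kW of its 22 kW thermal output reusable, while a node rejecting into surface water recovers nothing.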

2. Edge Technologies

2.1. Asperitas Immersed Computing and AIC24

In March 2017, the Dutch company Asperitas presented Immersed Computing®,[10] a concept and portfolio dedicated to usability and easy deployment in core and micro data centers. This micro edge data center solution is compatible with generic and branded servers and allows for large-scale energy reuse. The AIC24 solution is based on a larger enclosure[11] which can deliver a maximum of 22 kW of heat.

2.2. Iceotope

A different technology which also uses complete immersion of servers is Iceotope.[12]


References

  1. MarketingTeamLiquid (2017-07-13). "Datacentre of the Future".
  2. Angel Business Communications. "Datacentre Transformation Manchester".
  3. "The datacentre of the future by Asperitas". Asperitas.
  4. mmacy. "Data replication in Azure Storage".
  5. "Docker".
  6. "Singularity".
  7. "Kubernetes".
  8. "Qarnot - The first computing heater for smart buildings".
  9. Qarnot Computing. "Qarnot Computing".
  10. "Immersed Computing® by Asperitas". Asperitas.
  11. "AIC24". Asperitas.
  12. "EdgeServer".