Systems

These are some of the systems I have built, most recent first. See the software and doc pages for details.


Although it is not listed here, I did significant development work for Plan 9 from Bell Labs. All the software we wrote is available either through the standard Plan 9 distribution or from a contrib site exported publicly to Plan 9 systems.


Kv

Kv is a proprietary distributed key-value data store built for leanXcale, designed for high scalability and performance.
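
Since Kv is proprietary, none of its code or its actual API can be shown here. Purely as an illustration of what a key-value data store offers, a minimal interface could look like the hypothetical Go sketch below; the names are mine, not Kv's.

    // Hypothetical sketch of a key-value store interface; the real Kv API
    // is proprietary and is not shown here.
    package kv

    // Store is the minimal contract of a key-value data store: opaque keys
    // mapped to opaque values, with no further structure imposed on either.
    type Store interface {
            // Put stores (or overwrites) the value associated with key.
            Put(key, value []byte) error
            // Get returns the value stored for key, or an error if absent.
            Get(key []byte) ([]byte, error)
            // Delete removes key and its value, if present.
            Delete(key []byte) error
    }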


Clive 

Clive is a project that is just starting. Its aim is to bring back our beloved Plan 9 style of computing, but for highly efficient cloud computing software stacks written in the Go language. In fact, the plan is to remove the software stack and let the software run on the bare hardware.

The system is written in Go and C.


Nix 

Nix is joint work of Laboratorio de Sistemas with Bell Laboratories, Sandia National Labs, and Vitanuova. 

Nix is a system kernel that can partition cores into:

  • TCs: Time-sharing cores. They behave like the cores handled by a standard SMP kernel.
  • ACs: Application cores. User code runs undisturbed on them.
  • KCs: Kernel cores. They take on system load such as system call and interrupt handling.

Cores can change their role at run-time, depending on the actual system load.

This is most useful for HPC applications, but it also benefits cloud computing, for reasons of both performance and convenience.
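
As a rough illustration of the idea (and nothing more: Nix itself is a C kernel derived from the Plan 9 one), the Go sketch below models cores whose roles change with the observed load. All names and thresholds are made up for the example.

    // Illustrative sketch only: it models the Nix idea of core roles that
    // change with the load; it is not Nix kernel code.
    package main

    import "fmt"

    // Role is the job currently assigned to a core.
    type Role int

    const (
            TC Role = iota // time-sharing core, scheduled as in an SMP kernel
            AC             // application core, runs one user program undisturbed
            KC             // kernel core, absorbs system calls and interrupts
    )

    // Core is a CPU core together with its current role.
    type Core struct {
            ID   int
            Role Role
    }

    // rebalance reassigns roles depending on the observed load: with many
    // runnable processes, more cores time-share; otherwise cores are handed
    // to applications. The thresholds are arbitrary, for illustration only.
    func rebalance(cores []Core, runnable int) {
            for i := range cores {
                    switch {
                    case i == 0:
                            cores[i].Role = KC // keep one core for kernel work
                    case runnable > len(cores):
                            cores[i].Role = TC
                    default:
                            cores[i].Role = AC
                    }
            }
    }

    func main() {
            cores := make([]Core, 4)
            for i := range cores {
                    cores[i].ID = i
            }
            rebalance(cores, 2) // light load: most cores become ACs
            fmt.Println(cores)
            rebalance(cores, 16) // heavy load: cores fall back to time-sharing
            fmt.Println(cores)
    }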


Octopus 

Octopus is a system designed to provide ubiquitous access to computing resources. Its approach is unique in that the central idea for distributing the system is to centralize everything on a personal computer. Devices and other services are then connected to this central system to provide distributed computing. Terminal devices rely on UpperWare to access system resources.


Philo II 

Philo is a distributed database core built for Ericsson Research. Under the terms of the project, no further information is disclosed here.


O/live and O/mero 

O/live and O/mero are a novel UIMS (window system and toolkit included) that relies on files to distribute user interfaces in a transparent way, while giving the user the freedom to rearrange and reconfigure anything in the user interface without requiring support from the application. They include a command language to control the interfaces.

They were built as part of the Octopus, although they are a project of their own. See the Octopus page for information, software, demos, and selected papers.
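
To give a feel for the file-based approach (with hypothetical paths and commands, not the actual O/mero file layout or command language), a program could adjust an interface simply by writing to a file in the UI tree, as in this Go sketch:

    // Sketch of the file-based idea behind O/mero: a user interface is a
    // file tree, so changing the interface is just writing to files in it.
    // The path and the control text below are hypothetical, not the actual
    // O/mero file layout or command language.
    package main

    import (
            "log"
            "os"
    )

    func main() {
            // Assume the UI file tree is mounted at /mnt/ui and that each
            // widget has a ctl file accepting textual commands.
            f, err := os.OpenFile("/mnt/ui/col/text:1/ctl", os.O_WRONLY, 0)
            if err != nil {
                    log.Fatal(err)
            }
            defer f.Close()
            // Any program, or the user from the shell, can rearrange the
            // interface without help from the application that created it.
            if _, err := f.WriteString("show\n"); err != nil {
                    log.Fatal(err)
            }
    }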


Plan B 

Plan B is an operating system designed to work in distributed environments where the set of available resources differs from one point in time to another. Its 4th edition is implemented as a set of user programs that run on top of Plan 9 from Bell Labs. Previous editions used their own system kernel. Its main design guidelines, illustrated by the sketch after the list, are:

  • All resources are perceived as volumes. A volume is a file tree exported to the network together with a name and constraints.
  • The system operates on both local and remote boxes through the same protocol. Any implementation of that protocol can be used as part of a Plan B system.
  • Each application has its own name space and can customize it. Customization is done by defining names for volumes and specifying the desired order and constraints to tailor automatic import of network volumes.
  • Applications try to avoid connections to resources, by using calls that accept file names instead of file descriptors.
  • Volumes can be advertised as they become available, to be automatically bound to pre-specified names in the name spaces of applications that care about such resources.
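
The Go sketch below restates the guidelines above in code: a volume as a named, constrained file tree, and a per-application name space binding names to whichever advertised volume matches. The types and names are hypothetical, not Plan B's actual interfaces.

    // Sketch restating the Plan B guidelines above in Go. The types and
    // names are hypothetical, not Plan B's actual interfaces.
    package main

    import "fmt"

    // Volume is a file tree exported to the network under a name and
    // qualified by constraints (for example a location or a device type).
    type Volume struct {
            Name        string
            Addr        string   // where the tree is served from
            Constraints []string // e.g. "loc=home", "type=audio"
    }

    // NameSpace maps names chosen by the application to whichever advertised
    // volume currently satisfies the constraints given for that name.
    type NameSpace map[string]Volume

    // Bind records that a volume advertised with a matching name and
    // constraints should appear at path in this name space.
    func (ns NameSpace) Bind(path string, v Volume) {
            ns[path] = v
    }

    func main() {
            ns := NameSpace{}
            // When a matching volume is advertised, it is bound automatically
            // to the pre-specified name; the application then reaches the
            // resource through file names rather than connections.
            ns.Bind("/devs/audio", Volume{
                    Name:        "audio",
                    Addr:        "tcp!somebox!564",
                    Constraints: []string{"loc=home"},
            })
            fmt.Println(ns["/devs/audio"].Addr)
    }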


Off++ 

Off++ was a distributed adaptable microkernel built as part of the UIUC 2K OS project.

The most peculiar things about Off++ are that (a small sketch of the Box abstraction follows the list):

  • The whole network, not a single node, is considered to be the hardware to be managed.
  • The microkernel is made of three simple servers:
    • The portal server: Portals can be used as ports to deliver messages. They are global and can migrate.
    • The shuttle server: provides customizable processes termed Shuttles. A shuttle is a processor context which can be extended later on. Shuttles can execute on any available CPU.
    • The memory manager: provides a distributed address space. Each address space can hold address translations for remote physical memory. 
  • A single abstraction, the Box, is used to export all resources (like Plan 9 does with files).
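
As promised above, here is a Go sketch of the Box idea: every resource is reached through one small, uniform interface, much as Plan 9 reaches everything through files. The interface and the toy portal box are made up for the example; they are not the actual Off++ interfaces.

    // Sketch of the Off++ Box idea: every resource is exported through the
    // same small interface, much as Plan 9 exports everything as files.
    // The interface and the toy portal box are made up for the example;
    // they are not the actual Off++ interfaces.
    package main

    import "fmt"

    // Box is a named container for a system resource; portals, shuttles,
    // and memory would all be exported as boxes.
    type Box interface {
            Name() string
            // Put and Get move items (resources or messages) in and out.
            Put(item interface{}) error
            Get() (interface{}, error)
    }

    // portalBox is a toy Box backed by a channel, standing in for the
    // portal server's message delivery.
    type portalBox struct {
            name string
            msgs chan interface{}
    }

    func (b *portalBox) Name() string { return b.name }

    func (b *portalBox) Put(item interface{}) error {
            b.msgs <- item
            return nil
    }

    func (b *portalBox) Get() (interface{}, error) {
            return <-b.msgs, nil
    }

    func main() {
            var b Box = &portalBox{name: "portal/0", msgs: make(chan interface{}, 1)}
            b.Put("hello")
            m, _ := b.Get()
            fmt.Println(b.Name(), m)
    }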