Namespaces (part 2): Solaris

Author

Emmanuel Jeandel

Published

March 20, 2024

This is the second post in a series about namespaces, and how operating systems other than Linux implement some form of them.

In this post we look at Solaris Zones. Some of the open-source derivatives of Solaris have different models of zones. The zones I will examine in this post are the zones as they were initially conceived in Solaris 10.

The commands here were tested in Solaris 11.4.

Zones

It’s good to think of zones as an entirely new operating system, completely isolated from the main system: the only thing they have in common with the original system is that they run on the same computer; they use different filesystems, different PIDs, etc. This is readily apparent from the fact that the first thing one does when installing a zone is configure it as one would essentially configure a new system (what’s its name, what’s its timezone, etc.). In this sense, I would say the closest analogue in the Linux namespace world is LXC/LXD or Incus, and certainly not docker/podman.

It is still possible to share some things with the zones: some filesystems, or some network cards (or rather, a virtual version of your network card).

It is difficult to discuss Solaris without discussing ZFS. This revolutionary filesystem makes it possible to create filesystems (datasets) on the fly. Each zone in what follows will use a different dataset, created inside the /zones directory:

zfs create -o mountpoint=/zones rpool/zones
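To check that the dataset exists and is mounted where we expect, we can list it (a quick sanity check, not shown in the original capture):

```shell
# List the newly created dataset and its mountpoint:
zfs list -o name,mountpoint rpool/zones
```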

Installation

Using a zone takes four steps:

  • Configure the zone
  • Install the zone
  • Boot the zone
  • Access the zone

Here is a typical configuration of a zone, using the zonecfg command. This is an interactive command that can also be used noninteractively.

solaris% zonecfg -z mypc
Use 'create' to begin configuring a new zone.
zonecfg:mypc> create
create: Using system default template 'SYSdefault'
zonecfg:mypc> set zonepath=/zones/mypc
zonecfg:mypc> info
zonename: mypc
zonepath: /zones/mypc
brand: solaris
anet 0:
        linkname: net0
        configure-allowed-address: true
zonecfg:mypc> exit
solaris%        

The zone is named mypc. The root of the filesystem is located in /zones/mypc. The info command tells us that this new zone will have a network card called net0 that will be configured automatically.
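The same configuration can be done noninteractively, by passing the subcommands as a single command string (a sketch equivalent to the interactive session above):

```shell
# Noninteractive equivalent of the zonecfg session:
zonecfg -z mypc "create; set zonepath=/zones/mypc"
```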

We can now install the zone:

solaris% zoneadm -z mypc install
The following ZFS file system(s) have been created:
    rpool/zones/mypc
Progress being logged to /var/log/zones/...
       Image: Preparing at /zones/mypc/root.

 Install Log: /system/volatile/install.5551/install_log
 AI Manifest: /tmp/manifest.xml.50GKxb
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: mypc
Installation: Starting ...

        Creating IPS image
Retrieving catalog 1/1 solaris ...
...
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
library/expat                         45/247    6184/39850   55.1/301.2  4.4M/s
...
Installing new actions                   12232/62305
...
Installation: Succeeded
 done.

        Done: Installation completed in 241.426 seconds.


        Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.
solaris%

This is an excerpt of the install procedure. As is apparent from the output, it looks very much like a full OS installation: a list of packages to install is computed, then downloaded and installed.

We can now boot the system, then log into the system:

solaris% zoneadm -z mypc boot
solaris% zlogin -C mypc

As one can expect when starting a new OS, the first screen that we see is a configuration screen.

After a few steps, we reach a login prompt, where we can test our new system. (Hint: type ~. to close the console connection to your brand-new OS and return to the original one.)

This new system sees nothing of the original one. From the original system, we can access the root filesystem of this particular zone at /zones/mypc/root, and we can see the virtual network card, created on top of the existing one:

solaris% dladm
LINK                    CLASS      MTU    STATE    OVER
net0                    phys       1500   up       --
mypc/net0               vnic       1500   up       net0
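dladm also has a dedicated subcommand to list only the virtual interfaces, together with the physical link each one sits on:

```shell
# Show only the VNICs and their underlying links:
dladm show-vnic
```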

Mounting filesystems

It is possible to mount a filesystem from the host (the technical term is “from the global zone”) so that it is accessible inside another zone:

solaris% zonecfg -z mypc
zonecfg:mypc> add fs
zonecfg:mypc:fs> set type=lofs
zonecfg:mypc:fs> set special=/shared
zonecfg:mypc:fs> set dir=/usr/shared
zonecfg:mypc:fs> set options=ro
zonecfg:mypc:fs> end
zonecfg:mypc> exit

In this case, the directory /shared from the global zone will be shared read-only as the directory /usr/shared in the zone mypc. This doesn’t take effect while the zone is running: you have to reboot it (zoneadm shutdown, zoneadm boot) for the mount to appear.
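Before rebooting, the new fs resource can be verified, and the restart is the shutdown/boot cycle mentioned above (a sketch, assuming the zone name mypc):

```shell
# Check that the fs resource was recorded in the zone configuration:
zonecfg -z mypc info fs
# Restart the zone so the new mount takes effect:
zoneadm -z mypc shutdown
zoneadm -z mypc boot
```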

Isolated networks

Using the concept of virtual network interfaces, we can create two zones, with two different IPs, that can communicate with each other, but not with the rest of the world.

First, we create (in the global zone) an etherstub, which acts as a virtual switch (a bridge), and two virtual network interfaces connected to it:

solaris% dladm create-etherstub bridge1
solaris% dladm create-vnic -l bridge1 eth1
solaris% dladm create-vnic -l bridge1 eth2

Now we need to create two zones: the first one will use eth1, the second one eth2. While configuring the zones, we also need to remove the default network interface (anet):

solaris% zonecfg -z pc1
zonecfg:pc1> set zonepath=/zones/pc1
zonecfg:pc1> remove anet
zonecfg:pc1> add net
zonecfg:pc1:net> set physical=eth1
zonecfg:pc1:net> end
zonecfg:pc1> info
zonename: pc1
zonepath: /zones/pc1
brand: solaris
net 0:
        physical: eth1
zonecfg:pc1> exit

and the same for pc2.
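The configuration of pc2 is identical up to the interface name; done noninteractively, it could look like this (a sketch):

```shell
# Hypothetical noninteractive configuration of the second zone:
zonecfg -z pc2 "create; set zonepath=/zones/pc2; remove anet; add net; set physical=eth2; end"
```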

Now we only need to set the IP addresses, either in the configuration screen when booting each zone for the first time, or afterwards from inside the zone:

pc1% ipadm create-ip eth1
pc1% ipadm create-addr -T static -a local=10.0.0.1/24 eth1/v4
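On pc2 we do the same with the other interface and the second address on the same subnet; afterwards the two zones should reach each other, but nothing else (a sketch):

```shell
# Inside pc2: bring up eth2 with the second address of the /24:
ipadm create-ip eth2
ipadm create-addr -T static -a local=10.0.0.2/24 eth2/v4
# Back in pc1, the peer should now answer, while the outside world stays unreachable:
ping 10.0.0.2
```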

From the global zone, here is how the network interfaces appear:

solaris% dladm
LINK                    CLASS      MTU    STATE    OVER
net0                    phys       1500   up       --
bridge1                 etherstub  9000   unknown  --
eth1                    vnic       9000   up       bridge1
eth2                    vnic       9000   up       bridge1
pc1/eth1                vnic       9000   up       bridge1
pc2/eth2                vnic       9000   up       bridge1

Next Time

Next time, we will examine FreeBSD jails.