2012-05-15

okL4 & Minix3 vs Xen #microkernel #security

4.23: web: okl4 vs xen:
what is okL4? a microkernel impl'ing a hypervisor:
A hypervisor is a virtual-machine monitor,
designed for running de-privileged “guest” OS's .
. it contains a kernel (ie, a part that is running in
the most privileged mode of the hardware).
A microkernel contains the minimal amount of code
needed to run in the most privileged mode of the hardware
in order to build arbitrary (yet secure) systems.
So the primary difference between the two is purpose,
and that has implications on structure and APIs.
By the microkernel's generality requirement,
it can be used to implement a hypervisor.
This is what Open Kernel Labs is doing .
In fact, in the 1997 SOSP paper by Härtig et al,
the AIM benchmarks for L4-Linux
report a maximum throughput
only 5% lower than that of native Linux;
therefore, (well-designed) microkernels are
quite usable as hypervisors.

How about the other way round?
Can a hypervisor be used to implement a microkernel?
While a hypervisor is less powerful in the sense that
it doesn't have the generality of a microkernel,
it typically has a much larger TCB
(trusted computing base) .
It contains all the virtualization logic,
and all the physical device drivers .
For example, the Xen hypervisor itself is about
5–10 times the size of the OKL4 microkernel
(in kLOC [1000's of Lines Of Code]).
In addition, it has the privileged
special virtual machine “Dom0”,
which contains a complete Linux system,
all part of the TCB (which is therefore
of the order of a MLOC [1000 kLOC]).
Compare this 1000 kLOC hypervisor TCB
to OKL4's 15 kLOC TCB
-- a factor of roughly 65 .
A small TCB is important for
safety, security and reliability
-- it makes enforcing the principle of least authority feasible --
and as such, it's especially important for
mission-critical embedded systems.
4.23: web: minix and okL4 are complementary:

4.29: web.cyb/sec/minix vs okL4/minix on L4/
searching the minix3 group:

(vs the group for the old minix)
[minix3] Re: minix as hypervisors
jayesh     11/5/09
. we are thinking of using the
microkernel feature of minix
to implement hypervisors;
can anybody suggest where to start ...
Tomas Hruby     11/5/09
Is your goal to make the Minix kernel a hypervisor,
or do you want to use it without modification?
You would need to change the kernel a lot
to be able to use it as a hypervisor,
as it is fairly tightly coupled with the userspace.
Changing the kernel (and userspace to be compliant)
so that the kernel could host another personality
would be very valuable work.
We already work on certain features
which should get us closer to a true hypervisor
although it is not our high priority.

Unlike Linux, where the kernel is what makes the difference,
in Minix the userspace system is what makes it unique.
In theory, there is not much difference between a
hypervisor and a micro-kernel,
though the Minix kernel would need substantial changes.
As I mentioned, we sort of follow that direction.
It's going to take some time and effort.
The biggest obstacle I see is how the new VM
(Minix's virtual-memory server) works.
There is a strong coupling between the kernel and VM.
Right now you cannot have
multiple VM servers on top of Minix kernel,
therefore you cannot have multiple
independent virtual machines.
I can imagine a stage between the current state
and Minix-true-hypervisor
when the machines share the one VM server.
They would not be cleanly isolated though.
On the other hand, it would be a great improvement.

Possibly an interesting project would be to
port the Minix system (servers, drivers, etc,
everything except the kernel) to some variant of L4.
L4 is a high-performance micro-kernel,
used primarily for virtualization,
that lacks a server-based system like Minix's.
This would be an interesting, valuable
and much simpler task.
In contrast to Minix,
some L4 clones run on many architectures
which would make Minix OS
immediately available for them too.
Ben Leslie (ok-labs.com) Mon Jul 9 09:24:46 EST 2007
On Wed Jun 27, 2007 at 12:45:39 +0200, Martin Christian wrote:
>> As for Minix3, it has grown fast since last year due to
>> its good organization and open strategy,
>> which attract open-source programmers.
>> And I think maybe one day it will become
>> more influential than L4, if things progress as they are now.
>That's a good point! I was also wondering
> what OKL4's understanding of Open Source is?
> More precisely, these are my questions:
>1.) Why is OKL4 developed in a closed source repository?
> It would add much more confidence in
> OK Labs' commitment to Open Source
> if they used an open repository
> like the Linux kernel does.

Some of our work is subject to NDAs [non-disclosure agreements]
which have been signed with the relevant clients.
As such this work cannot be made public
and we cannot even 'talk around' the work being done.
We therefore made the conservative decision to
not make our repositories open to the public
but instead to release the publicly releasable code
in the form of a tarball that we can verify
contains nothing that would put us in breach of any NDA.
We take our customers' privacy concerns very seriously.
At the same time, we also want to keep our code open
for use by the community.
The solution we have arrived at allows us to
keep the released source very much 'open'.
...
We are quite happy with how Minix is going
and think there is enough room for both of us
out there in the development community.
In fact there have been thesis topics at UNSW
about reusing Minix components on top of OKL4[1].
It seems that there has also been other interest
in this in the past such as the L4/Minix project[2],
although this was based on Minix 2, not Minix 3.
Ben
[1] www.cse.unsw.edu.au ... KJE13.html
[2] http://research.nii.ac.jp/~kazuya/L4.Minix/
Gernot Heiser 6/27/07:
    >> Could anybody explain why Fiasco
    >> is so big in comparison with Minix 3?
    NHW> Hendrik, I recollect that
    NHW>  Minix3 does not support paged memory:
    NHW> it uses segments for enforcing protection,
    NHW> which explains a large part of the
    NHW> difference you observe.
... which, among other things, means that it cannot
set up shared memory regions for
high-bandwidth cross-address-space communication.
The lack of portability has been mentioned too.
Finally, L4 versions have a lot of
fastpath code for low-latency IPC.
My understanding is that Minix IPC is
significantly slower than L4's.
There are other optimisations in L4 implementations
that add to the code size.
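. to make the two mechanisms concrete,
here is a hypothetical sketch
-- not the real L4 or Minix API;
ipc_call, map_shared, and send_block are invented names,
and the kernel calls are stubbed so the example compiles:

    /* hypothetical sketch -- invented names, not the real L4 API */
    #include <stdlib.h>
    #include <string.h>

    struct msg { unsigned long w[4]; };

    /* low-latency path: a short message that an L4-style IPC fastpath
       would pass in registers, switching directly to the receiver */
    static long ipc_call(int dest, struct msg *in, struct msg *out)
    {
        (void)dest; out->w[0] = in->w[0]; return 0;   /* stub */
    }

    /* high-bandwidth path: a region mapped into both address spaces;
       setting this up needs paged virtual memory -- the facility
       that segment-based Minix 3 lacked */
    static void *map_shared(int peer, unsigned long bytes)
    {
        (void)peer; return malloc(bytes);             /* stub */
    }

    void send_block(int driver, const char *buf, unsigned long n)
    {
        static char *window;
        struct msg req = { { 1 /* WRITE */, n, 0, 0 } }, rep;
        if (!window)
            window = map_shared(driver, 1UL << 20);   /* map once */
        memcpy(window, buf, n);        /* bulk data: no per-byte kernel work */
        ipc_call(driver, &req, &rep);  /* control: one short fastpath IPC */
    }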
In general,
you can always expect the need for a few extra things
once a system moves from academia
into the real world (as L4 has).
Having said that,
Open Kernel Labs has stripped a fair bit of code
from their OKL4 kernel (derived from Pistachio).
    Gernot
Espen Skoglund
Typically people use a tool that accounts for
blank lines and comments
when counting the lines of code.
In Karlsruhe we've always used SLOC counts
[source lines of code] for our measurements
(including for the code bases being compared against).
And yes, headers and macros are counted as well.
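. as a rough illustration of what such a tool does
-- a minimal sketch, not sloccount itself;
the comment handling is deliberately naive
(it ignores comment markers inside strings
and mixed code/comment lines),
and note that brace-only lines count as code,
which is exactly the point made further below:

    /* minimal SLOC counter sketch: counts lines that are neither
       blank nor pure comment; real tools are far more careful */
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    int main(int argc, char **argv)
    {
        FILE *f;
        char line[4096];
        int sloc = 0, in_comment = 0;
        if (argc < 2 || !(f = fopen(argv[1], "r")))
        {
            fprintf(stderr, "usage: sloc <file.c>\n");
            return 1;
        }
        while (fgets(line, sizeof line, f))
        {
            char *p = line;
            while (isspace((unsigned char)*p)) p++;   /* skip indentation */
            if (in_comment)
            {
                char *end = strstr(p, "*/");
                if (!end) continue;            /* whole line is comment */
                p = end + 2;
                in_comment = 0;
                while (isspace((unsigned char)*p)) p++;
            }
            if (*p == '\0') continue;                 /* blank line */
            if (p[0] == '/' && p[1] == '/') continue; /* line comment */
            if (p[0] == '/' && p[1] == '*')           /* block comment opens */
            {
                if (!strstr(p + 2, "*/")) in_comment = 1;
                continue;                      /* comment-only line (naive) */
            }
            sloc++;                            /* anything else is code */
        }
        printf("%d sloc\n", sloc);
        fclose(f);
        return 0;
    }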

That said, there are a number of other factors
that drive up the line count.
For example, in Pistachio there are currently
two implementations of the mapping database
(keeping track of mapping relationships).
The old obsolete code has not been backed out yet
because some of the architectures
have not yet been converted to using the new code.
This accounts for about 1000 sloc.
Then there are a number of assertions and
tracing/debugging features spread all around
that do not contribute to any functionality
(and are often not even compiled in)
but that account for a few more thousand sloc.
Of course, other projects also have debug features
that must be accounted for
--- Pistachio is not unique in that respect.
All I'm saying is that
even when using tools such as sloccount,
it still won't tell you the whole story.

Lastly, the sloccount of a project also heavily depends on
the coding style being used.
Personally I favour a style where I don't try to
cram as much as possible into a single line.
For example, in Pistachio we always write:
        if (...)
        {
        }
rather than:
        if (...) {
        }
Using the latter style,
i.e., not spending a separate line on each '{',
would save us more than 3000 sloc
over the whole Pistachio kernel tree
(not counting the kernel debugger).
Clearly, such a simple change in coding style
does not have any effect on the complexity of the code.
It does add quite a bit of overhead to the sloccount though.
        eSk
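. to make the brace arithmetic concrete,
here is a hypothetical function
with its physical non-blank, non-comment lines counted by hand:

    /* dummy declarations so the example is self-contained */
    static int dirty;
    static void do_flush(void) { }

    /* Allman style, as in Pistachio: 7 sloc */
    static void flush_allman(void)
    {
        if (dirty)
        {
            do_flush();
        }
    }

    /* K&R style: the same code in 5 sloc */
    static void flush_kr(void) {
        if (dirty) {
            do_flush();
        }
    }

. two lines saved on one trivial function;
over a whole kernel tree that plausibly
adds up to the thousands of sloc mentioned above.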
Alan Grimes-2:
There can't be any valid comparison between L4 and minix
because the Minix IPC mechanisms are obscenely [slow].
They managed to wrestle defeat
out of the jaws of a triumphant victory...  =((((

The problem with L4 on the other hand
is that it doesn't have an OS wrapped around it
in order to make it usable...
I have a number of earth-shaking projects
that I want to develop but can't, for lack of a
reasonably good IPC mechanism,
and with no way to develop for L4
because of its obfuscated system libraries
and missing development environment. =(((

Got a counterexample of an installable and usable
L4 development system?
Wei Shen 26 Jun 2007:
>    Got a counterexample of an installable and usable
>    L4 development system?
I think L4Env, or Kenge + Iguana, can do some of this work,
though they are not mature yet.
...
It seems that both the OS Group at TUD
and ERTOS at NICTA
focus more on their new L4 APIs.
IMHO, seL4 and L4.sec may be great academic work
[this is 2007, and NICTA's commercial arm, Open Kernel Labs,
had launched only in 2006]
but at present a strong development environment
for existing kernels and APIs
may be more important for expanding the influence of L4,
and may better meet the appetite of
most open-source programmers
(of course, these may not be what concern you).
As for Minix3, it has grown fast since last year due to
its good organization and open strategy,
which attract open-source programmers.
And I think maybe one day it will become
more influential than L4, if things progress as they are now.
news: minix's driver reliability infrastructure:
Leonid Ryzhyk 2009`On the Construction of
Reliable Device Drivers:

...  dissertation is dedicated to the problem of
device driver reliability. ... .
currently active microkernels
that support user-level drivers
include both commercial microkernels,
such as OKL4 [OKL], QNX [Don00], and INTEGRITY [Gre],
and academic systems
like seL4 [EDE07], MINIX3 [HBG+ 06],
and Nexus [SWSS05].

Recently, an attempt at building a complete
driver reliability infrastructure for MINIX 3 [HBG+ 06]
was undertaken by Herder et al. [HBG+ 09].
They extended fault isolation facilities
available in previous systems
with a limited form of temporal failure detection
based on heartbeat messages.
In addition, using an IOMMU, MINIX 3 guards against
the most severe form of device protocol violations .

MINIX 3 also implements an
original approach to fault recovery
by outsourcing the task of
restoring the state of the failed driver
to its clients [HBG+ 07].
The OS simply creates a new copy of the driver
and reconnects it to the client.
The client is notified about the failure
and can take compensatory actions
to conceal the failure from user applications.
The rationale behind this design is that
the client often contains data required for recovery
and hence can implement recovery without the overhead
of runtime monitoring.
For instance, the file system stores pending disk operations
and can reissue them in case of a disk driver failure.
As another example, a network protocol stack
handles network controller driver failures by
detecting and retransmitting lost packets
as part of the TCP protocol;
applications that use unreliable protocols, like UDP,
implement application-specific mechanisms
for tolerating lost packets.
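. a sketch of that client-driven recovery idea
-- hypothetical code, not the MINIX 3 sources;
disk_req, on_driver_restarted, and the IPC details are invented:

    /* hypothetical sketch of client-side driver recovery */
    #include <stdio.h>

    #define MAX_PENDING 64

    struct disk_req { unsigned long block; int is_write; };

    /* the client (e.g. the file system) keeps every request that the
       driver has not yet acknowledged -- this is the recovery state */
    static struct disk_req pending[MAX_PENDING];
    static int npending;
    static int driver_ep;   /* IPC endpoint of the current disk driver */

    static void issue(const struct disk_req *r)
    {
        /* IPC to driver_ep elided */
        printf("block %lu -> driver %d\n", r->block, driver_ep);
    }

    /* invoked when the OS reports that it has created a fresh copy of
       the failed driver and reconnected us to it */
    void on_driver_restarted(int new_ep)
    {
        driver_ep = new_ep;
        for (int i = 0; i < npending; i++)
            issue(&pending[i]);   /* replay: the client already holds the
                                     state, so no runtime monitor is needed */
    }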

A number of microkernels and similar systems
were not covered in the above survey,
because, despite their adherence to
microkernel design principles,
they implement drivers as part of the kernel.
These include V [Che84], Accent [RR81],
the Cache kernel [CD94], Amoeba [TKRB91],
early versions of Chorus [RAA+ 88],
Exokernels [EKO95] and others.
news: okL4 and CAmkES component technology:
Towards trustworthy computing systems:
Taking microkernels to the next level
Gernot Heiser, Kevin Elphinstone, Ihor Kuz, Gerwin Klein and Stefan M. Petters
    School of Computer Science and Engineering
    University of New South Wales
    Sydney 2052 Australia
    NICTA, Sydney, Australia
    Open Kernel Labs, Sydney, Australia
. microkernels are the best approach for
delivering truly trustworthy computer systems
in the foreseeable future.
. this presents NICTA's OS research vision,
centered around the L4 microkernel
and based on four core projects:
# seL4 project:
. designing an improved API for a secure microkernel .
# L4.verified:
. produces a full formal verification of the microkernel .
# Potoroo:
. combines execution-time measurements
with static analysis to determine the worst case
execution profiles of the kernel .
# CAmkES (component architectures for
microkernel-based embedded systems):
. provides a component architecture for
building systems that use the microkernel .

Through close collaboration with Open Kernel Labs
(a NICTA spinoff) the research output of these projects
will make its way into products over the next few years.

Component technology: CAmkES
While the above projects focus on the microkernel itself,
providing complete embedded-systems solutions
requires that we also focus on
building the software systems that use the kernel.
The goal is to build reliable and trustworthy systems,
and to do so we must take full advantage of
what the kernel has to offer.
Doing this requires that the system be designed and built
as a highly structured, componentised
software architecture, with individual system services
contained in separate components.
These components must have well-defined interfaces,
explicitly specifying any interaction points,
and clearly detailing interactions between components.
The aim of the CAmkES project is to
provide such an architecture .

With an appropriate granularity of components,
a componentised design gives developers control over
which functionality is included in the system,
helping to keep the TCB small and manageable.
Furthermore, by placing the components in
separate protection domains,
fault containment is provided,
supported by the underlying
hardware memory management unit
and managed by the microkernel.
Given proper runtime support,
a structured design based on well-defined components
also enables the system architecture
to be changed at runtime,
allowing the loading and unloading of components
as necessary.
This leads to possibilities such as
hot-swapping of service implementations,
dynamic update of OS and system services,
and even detection and restarting of
faulty services (including drivers).
Verification of a complete software system’s correctness
will also benefit from a well structured design.
This takes advantage of the fact that
components are isolated and interact using only
the mechanisms provided by the microkernel.
Since a verified kernel gives
guarantees about interaction properties,
we can reason about the system using
models of the individual components
and their explicit interaction patterns.
. however, there are tradeoffs involved
in componentising a system:
having many small components means that
there will be more communication involved.
While IPC in L4 has been made fast and efficient,
too much communication will
nevertheless have a performance impact.
On the other hand,
a design with many small components
is good for safety, reliability and verification.
Depending on system requirements,
decisions must be made regarding these tradeoffs.
CAmkES: Component Architecture for
microkernel-based Embedded Systems
Besides providing for improved software engineering
through component-based development (CBD) [56],
the architecture is flexible
and allows tradeoffs to be made .
. being a framework for developing a
wide range of embedded systems
(including those with limited resources),
it does not make a resulting system pay for
features that it does not need or use.

Two properties of CAmkES are
key to providing these features.
The first is that connectors, like components,
are first-class entities [54].
Rather than being implicit in the architecture,
as they are in most component systems,
connectors are explicitly defined and implemented
by system developers. This means that
new connectors may be added without modifying
the core architecture.
Thus, when designing a system,
the developer specifies the components used,
the connections between components,
and the specific connectors used for these connections.
In this way, the developer has full control over
the mechanisms used for interaction between components,
while the concerns of component
functionality and interaction
remain cleanly separated.
Not only does this provide flexibility of design,
it also provides flexibility in making tradeoffs between
performance and other desired properties.

For example, during the design and implementation of a
network access router based on CAmkES and L4
we took advantage of the flexibility of connectors
to experiment with safety vs. performance tradeoffs.
We implemented several connectors,
ranging from those providing
complete memory protection between components
to those providing no protection at all.
These connectors were used to produce
versions of the router representing
the different trade-offs between
protection and performance.
We also developed a version of the router that
combined the different connectors internally.
In this router, components of the network stack
were connected with performance optimised connectors,
while the stack itself was protected from
application-level components
using connectors that provided maximum protection.
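. a sketch of what first-class connectors buy you
-- hypothetical C, not the CAmkES implementation;
connector_ops and both variants are invented names.
component code is written once against the abstract connector,
and the designer binds each link to a protected
or a fast variant, which is essentially
what the router experiment above did:

    /* hypothetical sketch -- invented names, not the CAmkES sources */
    #include <stddef.h>

    /* a connector is a first-class entity: components see only this table */
    struct connector_ops {
        int (*send)(void *endpoint, const void *buf, size_t n);
    };

    /* variant 1: cross-protection-domain IPC; every transfer is mediated
       by the kernel -- maximum isolation, highest cost */
    static int ipc_send(void *ep, const void *buf, size_t n)
    { (void)ep; (void)buf; (void)n; return 0; /* kernel IPC elided */ }

    /* variant 2: shared-memory ring between co-located components --
       the performance-optimised choice inside the network stack */
    static int shm_send(void *ep, const void *buf, size_t n)
    { (void)ep; (void)buf; (void)n; return 0; /* ring copy elided */ }

    const struct connector_ops ipc_connector = { ipc_send };
    const struct connector_ops shm_connector = { shm_send };

    /* component code: written once, unaware of which connector is bound */
    int component_emit(const struct connector_ops *conn, void *ep,
                       const void *pkt, size_t n)
    {
        return conn->send(ep, pkt, n);
    }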

The second key property
is that the architecture is minimal
and does not include superfluous features that
negatively impact run-time resource consumption.
While the core architecture only provides support for
statically-defined and -deployed systems,
it is designed to be extensible.
Extra functionality is added in the form of
extension components,
which are similar to regular components.
Extension components can be added and combined
as required to achieve desired functionality.
Currently we provide extensions that allow
components and connectors to be dynamically created
and destroyed at runtime.
Besides the benefit to modularity,
this also benefits verification, since it can
greatly reduce the number of system components
that must be verified (i.e., only those that are
actually used need to be verified).

The CAmkES project is nearing completion
and currently provides the core architecture,
dynamic extensions, and several demonstrator systems.

A scheduled follow-on project will investigate
advanced aspects of trustworthy,
componentised embedded systems
including real-time and power management,
correctness of components and their connections,
and the application of further software engineering approaches,
such as model driven development (MDD)
...
The present set of research projects
is to conclude within about a year,
at which time the technology will be
handed over to our spinoff Open Kernel Labs
for commercialisation.
Various results will make their way into products
over the next two years.
We believe that in 2009 it will be possible to deploy
real embedded systems products that
provably satisfy the highest possible levels of
safety, security and trustworthiness.
12/02/2009: CAmkES for OKL4 3.0 available
A basic port of CAmkES to OKL4 3.0 is available.
Note that this port does not use the new OKL4 SDK
and performs an 'in-tree' build.
It also still depends on Iguana.

CAmkES is a joint project between the
Empirical Software Engineering (ESE)
and ERTOS (Embedded, Real-Time and Operating Systems)
groups in NICTA.
The official project page .
While the official CAmkES project has ended,
research based on CAmkES and
componentised systems within ERTOS
continues under the CAmkES banner.
The CAmkES technology has been licensed to
Open Kernel Labs
for integration into their products.
The CAmkES framework for OKL4
has been released as open source.
Please see our software pages .

Ihor Kuz and Yan Liu
Extending the capabilities of component models for embedded systems
Proceedings of the Third International Conference on
the Quality of Software-Architectures (QoSA),
Boston, MA, USA, July 2007.

Ihor Kuz, Yan Liu, Ian Gorton and Gernot Heiser
CAmkES: a component model for secure microkernel-based embedded systems
Journal of Systems and Software, Special Edition on Component-Based
Software Engineering of Trustworthy Embedded Systems,
80(5), 687–699, May 2007 (preprint)

