Showing posts with label microkernel. Show all posts

2017-11-24

Intel's lack of documentation stalled MINIX

11.21: news.cyb/adds/doc/Intel's lack of documentation stalled MINIX:
. in an IEEE Computer interview,
[ieeeComputerSociety 2014]
Tanenbaum revealed he designed MINIX when
AT&T closed the source for Unix Version 7
and he could no longer use it
when teaching the design of an OS.
. but it kept crashing until he heard a rumor
(nothing in the documentation mentioned it)
that the Intel chip, when overheating,
generated an interrupt 15.
If his student Robert hadn't revealed the interrupt,
"there would have been no MINIX" he said.

2014-12-20

securing the virtual machine #addm

addm/sec/securing the virtual machine:
4.20: 12.20: summary:
. a virtual machine can provide security
along with some user freedom to develop code
by using a microkernel architecture
and learning from the Chrome browser team:
you can't build a secure app unless you
secure the underlying OS and firmware too .

2012-08-31

parasail is big win for reliable concurrency

12.7.2: news.adda/co/ParaSail
(Parallel Spec and Impl Lang) is by Ada's Tucker Taft:

first sighting of parasail:
11/09/11 Language Lessons: Where New
Parallel Developments Fit Into Your Toolkit

By John Moore for Intelligence In Software
The rise of multicore processors and programmable GPUs
has sparked a wave of developments in
parallel programming languages.
Developers seeking to exploit multicore and manycore systems
-- potentially thousands of processors --
now have more options at their disposal.
Parallel languages making moves of late
include SEJITS from the University of California, Berkeley;
The Khronos Group’s OpenCL;
the recently open-sourced Cilk Plus;
and the newly created ParaSail language.
Developers may encounter these languages directly,
though the wider community will most likely find them
embedded within higher-level languages.

. ParaSail incorporates formal methods such as
preconditions and post-conditions,
which are enforced by the compiler.
In another nod to secure, safety-critical systems,
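. the pre- and post-conditions mentioned above can be sketched as
runtime checks in Python (ParaSail enforces them at compile time;
the decorator and the int_sqrt example here are illustrative,
not ParaSail syntax):

```python
# a rough runtime sketch of pre/post-conditions; ParaSail's compiler
# would prove these statically instead of checking at run time
def contract(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition failed"
            result = f(*args)
            assert post(result, *args), "postcondition failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def int_sqrt(x):
    # integer square root by simple search
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```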

2012-06-14

architectures that prevent freezing #mac

5.9: sci.cyb/mac/architectures that prevent freezing:
to: cocoa-dev@lists.apple.com
. in a pre-emptive OS there should be no freezing;
given the new concurrency model
that includes the use of the graphics processor GPU
to do the system's non-graphics processing,
my current guess is that the freezes happen when
something goes wrong in the GPU,
and the CPU is just waiting forever .
. the CPU needs to have some way of getting control back,
and sending an exception message to
any of the processes that were affected by the hung-up GPU .
. could any of Apple's developers
correct this theory or comment on it ?
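. the "CPU needs a way of getting control back" idea can be sketched
as a watchdog timeout: the CPU side waits on the GPU's work only for
a bounded time (gpu_job and the timeout value are hypothetical
stand-ins, not Apple API):

```python
# a sketch of the watchdog idea: wait on GPU work with a timeout
# instead of blocking forever
import threading, time

def gpu_job(result):     # stand-in for work handed to the GPU
    time.sleep(5)        # simulate a hung GPU
    result.append("done")

def run_with_watchdog(job, timeout):
    result = []
    t = threading.Thread(target=job, args=(result,), daemon=True)
    t.start()
    t.join(timeout)      # the CPU side waits, but only this long
    if t.is_alive():
        # regain control; signal the affected processes instead of freezing
        return "gpu-timeout"
    return result[0]
```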

2012-06-12

helenOS built from scratch #microkernel

5.13: web.cyb/sec/microkernels/
helenOS/built from scratch:

5.15: summary:
. why is the helenOS crew constructing
yet another microkernel, when okL4 is here?
it's an academic project .
. anyway, we need microkernels,
and good competition is a good thing!
. that may be why Google Summer of Code is
funding some of it -- along with genode .
. see helen's wiki
and design papers .

2012-05-15

okL4 & Minix3 vs Xen #microkernel #security

4.23: web: okl4 vs xen:
what is okL4? a microkernel impl'ing a hypervisor:
A hypervisor is a virtual-machine monitor,
designed for running de-privileged “guest” OS's .
. it contains a kernel (ie, a part that is running in
the most privileged mode of the hardware).
A microkernel contains the minimal amount of code
needed to run in the most privileged mode of the hardware
in order to build arbitrary (yet secure) systems.
So the primary difference between the two is purpose,
and that has implications on structure and APIs.
By the microkernel's generality requirement,
it can be used to implement a hypervisor.
This is what OKL4 labs is doing .
In fact, the 1997 SOSP paper by Härtig et al
reports AIM benchmarks for L4-Linux
showing a maximum throughput
only 5% lower than that of native Linux;
therefore, (well-designed) microkernels are
quite usable as hypervisors.

How about the other way round?
Can a hypervisor be used to implement a microkernel?
While a hypervisor is less powerful in the sense that
it doesn't have the generality of a microkernel,
it typically has a much larger TCB
(trusted computing base) .
It contains all the virtualization logic,
and all the physical device drivers .
For example, the Xen hypervisor itself is about
5–10 times the size of the OKL4 microkernel
(in kLOC [1000's of Lines Of Code]).
In addition, it has the privileged
special virtual machine “Dom0”,
which contains a complete Linux system,
all part of the TCB (which is therefore
of the order of a MLOC [1000 kLOC]).
Compare this 1000 kLOC hypervisor
to the OKL4's 15 kLOC TCB .
A small TCB is important for
safety, security and reliability
-- by enforcing the principle of least authority --
and as such, it's especially important to
mission-critical embedded systems.
4.23: web: minix and okL4 are complementary:

4.29: web.cyb/sec/minix vs okL4/minix on L4/
searching the minix3 group:

(vs the group for the old minix)
[minix3] Re: minix as hypervisors
jayesh     11/5/09
. we are thinking of using the
microkernel feature of minix
to implement hypervisors,
can anybody suggest on where to start ...
Tomas Hruby     11/5/09
Is your goal to make the Minix kernel a hypervisor,
or do you want to use it without touching it?
You would need to change the kernel a lot
to be able to use it as a hypervisor,
as it is fairly tightly coupled with the userspace.
Changing the kernel (and userspace to be compliant)
so that the kernel could host another personality
would be very valuable work.
We already work on certain features
which should get us closer to a true hypervisor
although it is not our high priority.

Unlike in Linux, where the kernel is what makes the difference,
in Minix the userspace system is what makes it unique.
In theory, there is not much difference between a
hypervisor and a micro-kernel,
Minix kernel would need substantial changes though.
As I mentioned, we sort of follow that direction.
It's going to take some time and effort.
The biggest obstacle I see is how the new VM works.
There is a strong coupling between the kernel and VM.
Right now you cannot have
multiple VM servers on top of Minix kernel,
therefore you cannot have multiple
independent virtual machines.
I can imagine a stage between the current state
and Minix-true-hypervisor
when the machines share the one VM server.
They would not be cleanly isolated though.
On the other hand, it would be a great improvement.

Possibly an interesting project would be to
port the Minix system (server, driver, etc,
except the kernel) to some variant of L4.
L4 is a high performing micro-kernel
used primarily for virtualization
that lacks a server-based system like Minix.
This would be an interesting, valuable
and much simpler task.
In contrast to Minix,
some L4 clones run on many architectures
which would make Minix OS
immediately available for them too.
Ben Leslie (ok-labs.com) Mon Jul 9 09:24:46 EST 2007
On Wed Jun 27, 2007 at 12:45:39 +0200, Martin Christian wrote:
>> As for Minix3, it grows fast since last year due to
>> good organization and open strategy
>> that attracts open-source programmers.
>> And I think maybe one day, it will become
>> more influential than L4 if things progress as now.
>That's a good point! I was also wondering
> what OKL4's understanding of Open Source is?
> More precisely, these are my questions:
>1.) Why is OKL4 developed in a closed source repository?
> It would add much more confidence over
> OKL4's commitment to Open Source
> if they used an open repository
> like the Linux kernel does.

Some of our work is subject to NDAs [non-disclosure agreements]
which have been signed with the relevant clients.
As such this work cannot be made public
and we cannot even 'talk around' the work being done.
We therefore made the conservative decision to
not make our repositories open to the public
but instead to release the publicly releasable code
in the form of a tarball that we can verify
contains nothing that would put us in breach of any NDA.
We take our customers' privacy concerns very seriously.
At the same time, we also want to keep our code open
for use by the community.
The solution we have arrived at allows us to
keep the released source very much 'open'.
...
We are quite happy with how Minix is going
and think there is enough room for both of us
out there in the development community.
In fact there have been thesis topics at UNSW
about reusing Minix components on top of OKL4[1].
It seems that there has also been other interest
in this in the past such as the L4/Minix project[2],
although this was based on Minix 2, not Minix 3.
Ben
[1] www.cse.unsw.edu.au ... KJE13.html
[2] http://research.nii.ac.jp/~kazuya/L4.Minix/

2012-05-14

historical moment linux is announced #minix #microkernel #security

4.23: news.cyb/sec/minix/
historical moment linux is announced to minix list:

. Linus is asking comp.os.minix
what they would like to see featured
in his 386-specific rewrite of minix .
. the first thread ends with an ignored call to
not have to compile device drivers into the kernel .
. decades later I would find a youtube lecture
complaining that linux really needs to have
modular device drivers
so that you don't have to reinstall them
every time a kernel upgrade comes out .
Adam David     8/26/91
One of the things that really bugs me about minix
is the way device drivers have to be compiled into the kernel.
So, how about doing some sensible
installable device driver code
(same goes for minix 2.0 whenever).
(adamd@rhi.hi.is)
Samuel S. Paik     6/26/92
User Level Drivers!  User Level Drivers!
Why should device drivers per se
be part of the kernel? (Well, performance purposes...)
I've liked Domain OS where you could
map a device's registers into your process memory.
If you also include a way of bouncing interrupts
from a particular device to a process,
then we could have user level device drivers.
Then, for performance reasons,
after everything is debugged
there should be a way to move device drivers
into the kernel--but only if we want to...
Samuel Paik
d65y@vax5.cit.cornell.edu
Frans Meulenbroeks     6/29/92
Nice idea, but there are a lot of hardware configurations
where you cannot simply give [a process]
access to one device register .
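. the user-level-driver idea above, mapping a device's registers into
process memory, can be sketched with mmap; an ordinary file stands in
for the device here (real systems map /dev/mem or a UIO device node,
and open_registers is an illustrative name):

```python
# a sketch of user-level drivers: map a "register window" into
# process memory, then read and write it as plain bytes
import mmap, os, tempfile

def open_registers(path, size=4096):
    fd = os.open(path, os.O_RDWR)
    return mmap.mmap(fd, size)      # registers now appear as memory

# create a fake 4 KiB register window backed by a temp file
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\x00" * 4096)
tmp.close()

regs = open_registers(tmp.name)
regs[0] = 0x01                      # "write a control register"
status = regs[0]                    # "read it back"
```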
microkernel vs monolithic
may not be the real issue:

. much of what Linus objects to in minix
is not that it's a microkernel rather than a monolithic kernel;
rather, minix in 1992 was also
high-level coded to be portable,
whereas linux is tailor-fit to the 80386 cpu
using very low-level assembler code .
. making the best use of a particular processor
requires a lot of assembly language programming;
so, couldn't you have a tailor-fit microkernel ?

. the primary intent of the microkernel
is to take advantage of the fact that
a processor has 2 privilege levels;
and, your job as a microkernel designer
is to minimize the amount of code
that is running within supervisor mode;
such that even the drivers are in user mode .

. Linus stated "( porting things to linux
is generally /much/ easier
than porting them to minix );
now, I'm not sure of the particulars,
but it seems that this ease of porting
would come from the fact that most programs
are making full use of unix-usable C language,
and apparently the problem with minix was
requiring security-related modifications to source code
such that the minix-usable C
looked much different than unix-usable C .

. the key to popular security, it would seem,
is creating a special compiler that
transforms unix-C source code
into binaries that respect our boundaries .
. if the C code is also including assembler,
then only virtualization can save us .

the multi-threaded issue:
. when writing a unix file system
a monolithic os is naturally multithreaded
whereas the minix microkernel
required hacks of its message queues?

. perhaps this means that the filesystem
was considered to be one object,
such that if multiple processes want to access files,
they have to request jobs through one interface;
whereas, linux is not using one interface,
rather, each process is capable of locking a file
and so, any time a process is given some cpu time,
it has direct access to its file buffer
(in both cases the disk drive would be seen as one object;
because, it queues the disk access requests
and orders them to minimize disk arm movement;
then it buffers some files in ram,
and processes have instant access to their file's buffer).
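. the "filesystem as one object" model above can be sketched as a
single fs server thread consuming one request queue, with every
caller blocking on its own reply queue (fs_call and the op names are
illustrative, not minix's actual message protocol):

```python
# a sketch of the one-interface fs: all processes post requests to
# a single server queue; one fs thread serves them in order
import queue, threading

requests = queue.Queue()
files = {}                      # fs state, touched only by the server

def fs_server():
    while True:
        op, name, data, reply = requests.get()
        if op == "write":
            files[name] = data
            reply.put("ok")
        elif op == "read":
            reply.put(files.get(name))
        elif op == "stop":
            return

threading.Thread(target=fs_server, daemon=True).start()

def fs_call(op, name, data=None):
    reply = queue.Queue()
    requests.put((op, name, data, reply))
    return reply.get()          # the caller blocks on its own reply
```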

Frans Meulenbroeks  and Linus debate multi-threading:
Linus>
I'd also suggest adding threading support:
the fs and mm processes need to be multithreaded
(or page faults etc are very difficult indeed to handle,
as a page-fault can happen in the fs process
and often needs the fs process to be handled).
Frans>
My thoughts about multithreading are mixed.
On the one side I like the performance gain.
On the other hand this complicates things,
so it does not really fit into the minix scope.
Linus Jun 26 1992>
Multi-threading isn't a question of performance:
you generally get better performance too,
but the most important part is that,
without multithreading, some things are
impossible or much more complicated .
I already mentioned demand-paging and virtual memory
that effectively /need/ multithreading,
but some other quite mundane things are simply
not possible to do without it.

The filesystem simply /has/ to be multithreaded
or you need a lot of ugly hacks.
Look at the tty code in the minix fs:
it's not exactly logical or nice.
As a tty request can take a long time,
minix has to do small ugly hacks
to free the fs process as fast as possible
so that it can do some other request while the tty is hanging.
It does a messy kind of message redirection,
but the redirection isn't a kernel primitive,
but an ugly hack to get this particular problem solved.

Not having multithreading
also results in the fact that the system tasks
cannot send each other messages freely:
you have to be very careful that there aren't
dead-locks where different system calls try to
send each other messages.  Ugly.
Having multithreaded system tasks
would make a lot of things cleaner
(I don't think user tasks need to multi-thread,
but if the kernel supports it for system tasks,
it might as well work for user tasks also).
...
[. a hacked single-process message-passing fs]
removes a lot of the good points of messages.
What's the idea in using minix as a teaching tool
if it does some fundamentally buggy things?
Frans Jun 29 1992>
Sorry, but I do not understand why I cannot get
paging or virtual memory without multithreaded systems.
Of course there are essential parts of the system
that must remain in memory permanently.
But why can't the core kernel do
demand paging or virtual memory
(or dispatch the work to another tasks).
What other mundane things are not possible??

I don't think multithreadedness is needed. Not even for fs.
What is needed is a message buffering ipc mechanism
and a version of fs which does not do a sendrec, but only send,
and which has a global receive
which gets all the result messages.
Then a single threaded fs does work.
--
Frans Meulenbroeks        (meulenbr@prl.philips.nl)
        Philips Research Laboratories
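. Frans's proposal, send-only requests plus one global receive, can be
sketched like this: the single-threaded fs records each pending
request, fires a non-blocking send, and a global receive loop matches
replies back to callers even when they arrive out of order (all names
here are illustrative):

```python
# a sketch of a single-threaded fs with buffered message passing:
# non-blocking sends, one global receive matching replies to callers
import queue

fs_inbox = queue.Queue()
pending = {}                    # request id -> caller id

def fs_send(req_id, caller, driver_inbox, payload):
    pending[req_id] = caller    # don't block: record, then send
    driver_inbox.put((req_id, payload))

def fs_receive_loop():
    done = {}
    while pending:
        req_id, result = fs_inbox.get()   # the one global receive
        done[pending.pop(req_id)] = result
    return done

# a "tty driver" inbox that answers out of order
tty = queue.Queue()
fs_send(1, "proc-a", tty, "read line")
fs_send(2, "proc-b", tty, "read line")
fs_inbox.put((2, "second"))     # driver replies, slowest first
fs_inbox.put((1, "first"))
```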

[5.14: TODO: back to the main point:
. I'm still wondering what it is about Linus's
definition of "(microkernel)
that precludes it having a high degree of parallelism
regardless of whether you're being multi-threaded
or using message queues .]

Linus Benedict Torvalds     1/29/92
>1. MICROKERNEL VS MONOLITHIC SYSTEM
True, linux is monolithic,
and I agree that microkernels are nicer. ... .
From a theoretical (and aesthetical) standpoint
linux loses.
If the GNU kernel had been ready last spring,
I'd not have bothered to even start my project:
the fact is that it wasn't and still isn't.
Linux wins heavily on points of being available now.
Ken Thompson     2/4/92
...
I would generally agree that microkernels are
probably the wave of the future.
However, it is in my opinion easier to
implement a monolithic kernel.
It is also easier for it to turn into a mess in a hurry
as it is modified.
                                Regards,
                                        Ken
-- "Rowe's Rule: The odds are five to six
that the light at the end of the tunnel
is the headlight of an oncoming train."       -- Paul Dickson
5.14:
. the new minix3 is much like
what Linus wished minix 1 had been;
plus, it's a microkernel .

2011-11-30

evlan.org describes many #addx features

10.31: news.adda/lang"evlan/
evlan.org describes many addx features:

. Kenton Varda's Evlan has many addx features;
but, it "(currently has no type system);
and, it died in 2007.10.1, when the author "(got a real job)!

. what first caught my eye was cap-based security:
Most of today's computer platforms operate under
a fundamental assumption that has proven utterly false:
that if a user executes a program,
the user completely trusts the program.
This assumption has been made by just about
every operating system since Unix
and is made by all popular operating systems used today.
This one assumption is arguably responsible for
the majority of end-user security problems.
It is the reason malware -- adware, spyware, and viruses --
is even possible to write,
and it is the reason even big-name software
like certain web browsers
are so hard to keep secure.
Under the classical security model,
when a user runs a piece of software,
the user is granting that software the capability
to do anything that the user can do.
Under capability-based security,
when a user runs a piece of software,
the software starts out with no capabilities whatsoever.
The user may then grant specific capabilities to the program.
For example, the user might grant the program
permission to read its data files.
The user could also control whether or not
the program may access the network, play sounds, etc.
Most importantly,
all of these abilities can be controlled
independently on a per-program basis.
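. the capability model described above can be sketched in a few lines:
a program starts with no authority, and the user hands it only the
specific capability objects it needs (ReadCap, run_program, and
well_behaved are illustrative names, not Evlan API):

```python
# a sketch of capability-based security: authority is an object
# you hold, not an ambient right of the running program
import tempfile

class ReadCap:
    """capability to read exactly one file"""
    def __init__(self, path):
        self._path = path
    def read(self):
        with open(self._path) as f:
            return f.read()

def run_program(program, caps):
    # the program gets only the capabilities it was granted;
    # nothing else in the system is reachable through caps
    return program(caps)

def well_behaved(caps):
    return caps["data"].read() if "data" in caps else "no access"

tmp = tempfile.NamedTemporaryFile(mode="w", delete=False)
tmp.write("user data")
tmp.close()

granted = run_program(well_behaved, {"data": ReadCap(tmp.name)})
denied = run_program(well_behaved, {})   # no capabilities granted
```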
. the other main idea like addx
is its being a virtual operating system
which runs on top of any other operating system;
via a virtual machine that is the sole access
to the underlying platform's OS .
. by a complete platform,
he means it is not specialized to be
efficiently accessing all of bloated unix,
but to define what it needs,
and then have the vm access what it needs
for impl'ing the Evlan virtual platform .

adda's secure by language idea:
Even better than implementing security in an operating system,
is implementing it in a programming language:
developers are able to control the capabilities available
to each piece of code independently.
Good practice, then, would be to only give each component
the bare minimum capabilities that it needs to
perform its desired operation.
And, in fact, it is easier to give fewer capabilities,
so this is what programmers will usually do.
The result is that if a security hole exists in some part of your program,
the worst an attacker can do is
gain access to the capabilities that were given to that component.
--[. keep in mind that this won't work without
also implementing security at the hardware level;
this is the level where modularity can be enforced;
otherwise, malware could just jump to
what ever code there is with the permissions it needs .
(see qubes doc's) .]
It is often possible to restrict all "dangerous" capabilities
to one very small piece of code within your program.
Then, all you have to do is make sure that
that one piece of code is secure.
When you are only dealing with a small amount of code,
it is often possible to prove that this is the case.
It is, in fact, quite possible to prove
that a program written in a capability-based language
is secure.
[. great to know if your os and every other app
-- that means esp'ly your browser --
are all written in this provable language ? ]

see: Capability-based Security is used in the
EROS operating system and the E programming language.
Also, Marc Stiegler et al.

addx's idea that concurrency is the key
to secure speed:
Evlan has strong support for highly concurrent programming
being a purely functional language,
Where automatic parallelization is not possible,
there are very lightweight threads (without stacks).
Most of today's systems use imperative languages
meant for a single CPU; eg, [written in adda:]
processAll(results#t.type).proc:
for(i.int = results`domain):
      results#i`= processItem results#i
).

. in c, it would process each item in order,
from first to last.
It does not give the system any room to
process two items in parallel
or in some arbitrary (most-efficient) order .
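. the contrast can be sketched in Python: when no item depends on
another, a functional-style map leaves the system free to run items
in parallel or in any order (process_item is a stand-in workload):

```python
# a sketch of order-independent processing: map gives the runtime
# room to schedule items in parallel, unlike a sequential loop
from concurrent.futures import ThreadPoolExecutor

def process_item(x):
    return x * x

def process_all(results):
    # no item depends on another, so they may run in any order
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_item, results))
```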
adda aspires to be simple:
It is far easier to remember
multiple uses for a single construct
than to remember multiple constructs.
--[. however he has ada-itis and brags about
having only 19 keywords;
one of which is "(where)
which could have been said with a reused "(:) .]
interoperability done via compatibility layers:
If and when the legacy software is no longer in use,
the compatibility layers can be removed,
leaving only the ultra-efficient Evlan core behind.
functional:
Because Evlan is purely functional,
no function call is allowed to have "side effects".

. see www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=225
(dead link, is this right: queue.acm.org/detail.cfm?id=864034)?
-- ask google where else that link can be found:

Natural programming languages and environments

Communications of the ACM September 2004/Vol. 47, No. 9
by Brad A. Myers, John F. Pane, Andy Ko
This article presents findings on natural programming languages.
(i.e. natural as in intuitive in the UserCentric way)
It advocates features for programming languages,
that are justified by these findings.
Areas include:
    SyntacticSugar, that causes fewer errors
    (it's more intuitive/natural).
    Better integrated debugging-support,
    esp. for hypotheses about bug-causes.
which suggests that humans like to write rules of the form
"When condition X is true, do Y".
I am not yet sure to what extent this will
change the current system, however.
natural programming:
By "natural programming" we are aiming for
the language and environment to work the way that
nonprogrammers expect.
We argue that if the computer language were to
enable people to express algorithms and data
more like their natural expressions,
the transformation effort would be reduced.
Thus, the Natural Programming approach
is an application of the standard user-centered design process
to the specific domain of programming languages and environments.

[at that link is a] good short summary of the
Natural Programming project here at CMU.
We're affiliated with the EUSES consortium,
which is a group of universities working on
end-user software engineering.
google's 2007 Update on the Natural Programming Project
Functional Bytecode
Bytecode for the Evlan virtual machine
uses a functional representation of the code.
In contrast, the VMs of Java and .NET
use assembly-like imperative instructions.
. the GNU C Compiler's "Static Single Assignment" phase
is actually very close to being equivalent to
a purely functional language. [no overwrites]
If GCC does this internally just to make optimization easier,
then it doesn't take a huge stretch to imagine a system which
compiles imperative code to functional bytecode .
--[11.30:
. another contrast to assembly-like imperative instructions
is addm's idea for a high-level interpreter:
instead of translating from an abstract syntax tree (AST)
into a sequence of assembly instructions;
addm, the vm, would try to evaluate an AST . ]
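. the addm idea in the bracketed note can be sketched as a tiny
tree-walking evaluator: rather than emitting an instruction sequence,
the vm recurses over the AST directly (the tuple encoding is an
illustrative choice):

```python
# a sketch of evaluating an AST directly instead of compiling it
# to assembly-like instructions
def evaluate(node):
    # a node is either a number or an (operator, left, right) tuple
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    a, b = evaluate(left), evaluate(right)
    if op == "+": return a + b
    if op == "*": return a * b
    raise ValueError("unknown operator: " + op)

# (2 + 3) * 4 as a tree
tree = ("*", ("+", 2, 3), 4)
```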
The purpose of constraining variables to types
is to get the compiler to check your work for bugs
and optimize code as well .
. if you could specify that an index to an array
must be less than the size of the array
-- and if the compiler could then check for that --
then buffer overruns (one of the most common bugs
leading to security holes in modern software)
would be completely eliminated.
(run-time bounds checking can prevent buffer overruns,
but then users see your app shutting down unexpectedly).
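. the bounded-index idea can be sketched as a checked array wrapper;
here the check runs at run time, whereas the point in the text is that
a compiler could prove it statically and remove both the check and the
overrun (CheckedArray is an illustrative name):

```python
# a sketch of enforced index bounds: no silent overrun is possible
class CheckedArray:
    def __init__(self, items):
        self._items = list(items)
    def __getitem__(self, i):
        # an index must lie in 0 .. size-1
        if not 0 <= i < len(self._items):
            raise IndexError("index %d out of bounds [0, %d)"
                             % (i, len(self._items)))
        return self._items[i]
```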

2010-11-12

exception mgt

11.9: adda/exception mgt/intro:
. we expect some errors;
because, we'd like to
use some software before having to
formally verify its correctness,
so, we compensate
by finding ways to limit damage:

# isolation:
. modules should never crash other modules:
we can verify a microvisor or microkernel,
and it enforces the firewalls between processes .
. addx should be such a microkernel design;
[11.12: just as the app doesn't have
direct access to the user (it's the mvc`model
while adde is the view&controller )
it also has no direct access to the machine:
all c-level activity is handled by addx|addm .]

# regularly yielding to the gui coroutine:
. a critical priority of addx
is never letting the gui be unresponsive;
this is done by having adda embed calls to addx
at every place in a subprogram that could
mean more than a millisecond of cpu time:
# inside every function call,
# inside every loop .
. this is cooperative multi-tasking with
system-enforced coroutine yielding
for paravirtualizing true multi-tasking .
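. the compiler-inserted yields can be sketched with a Python
generator: every loop iteration hands control back, so the scheduler
can keep the gui responsive (long_task and on_tick are illustrative
names for the instrumented subprogram and the gui pump):

```python
# a sketch of cooperative multi-tasking via inserted yields:
# the task yields at every loop iteration
def long_task(n):
    total = 0
    for i in range(n):
        total += i
        yield            # where adda would insert a call to addx
    yield total          # final yield carries the result

def scheduler(task, on_tick):
    result = None
    for step in task:
        on_tick()        # e.g. pump the gui event loop here
        if step is not None:
            result = step
    return result
```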

# regular reporting to supervisor:
. getting a regular progress report
will limit waiting on the down employee .
. employees can be waiting on
down service providers;
loops can be waiting on a terminal condition
that never arrives;
as when there's unexpected input;
mutual recursions (eg, f->g->h->f)
are another form of looping;
deadlocks can cause indefinite waiting,
but those can be avoided by
proper use of concurrency,
and by not locking resources
(using queues instead).
. a subprogram can be waiting because of
a failure to establish dialog with the user
(eg, if the system allows a window to be
always on top,
then dialog windows may be hidden from view).
. all threads are periodically tested for
whether they're waiting for a job offer,
for a {service, resource},
or are performing a service;
in which case,
addx measures how long ago the thread
produced its latest progress report .
. the progress reports read like a narrative
that a human would appreciate;
to make these reports quick,
msg's are encoded, so that each msg
is represented by some value in a byte (0...255)
(or a 16-bit integer if this subprogram had
more than 256 msg's).
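. the byte-encoded progress reports can be sketched like this: the
worker posts a one-byte message code, and the supervisor decodes it
and checks how stale the latest report is (the message table and
function names are illustrative):

```python
# a sketch of encoded progress reports: one byte per message,
# with the supervisor checking the age of the latest report
import time

MSGS = {0: "waiting for job", 1: "opening file",
        2: "sorting", 3: "done"}
last_report = {"code": 0, "at": time.time()}

def report(code):
    last_report["code"] = code
    last_report["at"] = time.time()

def supervisor_check(max_age):
    age = time.time() - last_report["at"]
    if age > max_age:
        return "stalled after: " + MSGS[last_report["code"]]
    return "ok: " + MSGS[last_report["code"]]
```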

# blackbox recorder:
. it tells us what was going on
just before a crash .
. it saves the progress reports,
and the recent call chain .

# a debug mode:
. when users' tools are not working well,
some users may like to see what's wrong
by using developers' visualization tools;
eg,
a freezing problem can be illustrated
as a looping call chain:
. the tool collects all the recent calls
(saved by the blackbox recorder)
then tries to find a recursion,
and shows the list of calls that are repeated .
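. finding a recursion in the blackbox's recent calls amounts to
detecting a repeating cycle at the tail of the call list; a minimal
sketch (the call names are illustrative):

```python
# a sketch of the debug-mode tool: scan recent calls for the
# shortest tail cycle that repeats at least twice
def find_repeated_cycle(calls):
    n = len(calls)
    for length in range(1, n // 2 + 1):
        if calls[n - length:] == calls[n - 2 * length: n - length]:
            return calls[n - length:]
    return None

# a hung f -> g -> h -> f mutual recursion, as recorded
recent = ["main", "f", "g", "h", "f", "g", "h"]
```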

11.8: sci.adda/exception mgt/reporting#loop count approximations:
[11.11: obs:
. it would be better to use progress reporting .]
. if adda can't statically figure out
when a loop won't terminate,
it should remind coders to give
approximate loop counts;
then the run-time can use that how?
sci:
. before the loop entry,
adda sets the counter, and a limit,
then has the scheduler check the limit .
. scheduler doesn't have to check all the time;
but if 2sec's go by,
nearly all loops are done by then,
if there is no limit given,
it starts giving it more cpu time if it can,
so the real time is compressed,
if another sec goes by
and there's other jobs to do,
then it starts getting starved ?
it could be doing a huge job!

2009-12-29

microkernels

10.5: bk.addn/gnu`hurd:
. needs cap-based security:
who cares that it protects the kernel
if it doesn't minimize authority gen'ly
and protect your data and stability ?!