adda/co:
8.16: syntax:
. when an object is given a message,
it is the object's type mgr that applies a method;
and, these type mgr's are tasks (co.programs);
but how do we get the obj's themselves
to be tasks? (ie, running code on a separate thread).
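. a minimal sketch of that idea in Python (all names here are hypothetical, not adda syntax): the type mgr is itself a task serving a message queue, so giving an object a message means mailing the mgr rather than calling it directly, and every object of the type runs its methods on the mgr's thread:

```python
import queue
import threading

class TypeMgr:
    """a type mgr as a task: methods are applied on the mgr's own thread."""
    def __init__(self):
        self.inbox = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            obj, method, args, reply = self.inbox.get()
            reply.put(method(obj, *args))   # mgr applies the method

    def send(self, obj, method, *args):
        """give obj a message; the caller blocks only when reading the reply."""
        reply = queue.Queue(maxsize=1)
        self.inbox.put((obj, method, args, reply))
        return reply

# usage: a counter object whose methods are applied by its type mgr
int_mgr = TypeMgr()
counter = {"value": 0}
def inc(obj, n):
    obj["value"] += n
    return obj["value"]

r = int_mgr.send(counter, inc, 5)
print(r.get())  # -> 5
```

. the object itself is passive data here; it is the mgr that runs on a separate thread, which is one answer to the question above.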
Showing posts with label multitasking. Show all posts
2012-11-15
2012-11-11
read-only and write-only params
8.14: adda/cstr/function/syntax/read-only and write-only params:
if a parameter gets a pointer, (eg, /.int)
then it can be modifying both the pointer and the int,
so, shouldn't we have a syntax for expressing
what the algorithm intends to do with both?
. so, we still need a syntax for what is read-only .
. if a parameter expects a read-only,
then we can give it a modifiable
while expecting that it won't modify it .
. how about syntax for write-only?
maybe we should use (.typemark^) for that?
[11.11:
. the latest idea obviates a read-only syntax:
(inout param's)`f(x) -- in that example,
the x param is in-mode only (read-only)
that includes both the pointer and its target .
. notice that if the input is shared by any co.programs,
then we need to lock it as read-only or copy it,
unless we expect it to rely on the interactive value .]
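. a sketch of those parameter modes (Python, with a one-slot list standing in for the pointer; the names are hypothetical): in-mode could be enforced by the compiler passing a defensive copy, so that neither the caller's pointer nor its target can be changed by the callee:

```python
def f(p):          # an untrusted routine given a pointer (eg, /.int)
    p[0] += 1      # it modifies the pointer's target
    return p[0]

x = [10]           # a one-slot list standing in for /.int

# inout mode: pass the pointer itself; the target is modified
assert f(x) == 11 and x[0] == 11

# in-mode (read-only): pass a defensive copy instead,
# so the caller's pointer and target stay untouched
assert f(list(x)) == 12 and x[0] == 11
```

. copying is one of the two options noted above for shared inputs; the other (a read-only lock) needs co-operation from the tasking system.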
Labels:
adda,
cstr,
functional,
multitasking,
param's,
rom,
syntax,
wo
2012-11-10
signaling in gui systems
8.12: adda/cstr/signals/signaling in gui systems:
how is the gui related to the signal?
. as a summary of 2012/07/gui-notification-systems
I was thinking gui's should be impl'd with
gui-optimized signals,
rather than the general signal system,
but then I wondered if that idea was
keeping mvc in mind,
so that generally all IPC (interprocess communications)
would be seeing the same windows that a
human can see with the help of a gui agent .
2012-07-02
asynchronous communication and promises
6.15: adda/co/asynchronous communication and promises:
. the idea of protected types vs task types
is about whether the interaction is using an entry queue .
[6.17:
. this reminded me of what promises are:
task types are taking orders on paper,
ie, instead of being called directly,
the callers put their call details in a queue
just like a consumer-waiter-cook system .
. the ada task is designed to make the consumer wait
(ie, do nothing until their order is done);
but this doesn't have to be the case;
because, instead of blocking the caller's entire process
we could put a block on some of its variables instead;
eg, say y`= f(x) is a task call,
so the caller must wait because
the rest of its program depends on y's value;
and, y's value depends on that task call, f, finishing .
. however, we could block access to y,
such that the caller can do other things
up until trying to access y .
]
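. that is exactly what a future/promise library provides; a sketch using Python's concurrent.futures (the slow function f here is a stand-in):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def f(x):
    time.sleep(0.1)       # a slow task call
    return x * x

pool = ThreadPoolExecutor()
y = pool.submit(f, 7)     # y`= f(x) as a promise; the caller is not blocked

# ... the caller does other things here ...
other = sum(range(10))

result = y.result()       # blocking happens only on access to y
assert result == 49
```

. only the access to y waits; everything before it overlaps with the task call.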
. notice in ada's style
both protected and tasking types are blocking;
ie, synchronously communicating .
. how do we ask instead for
asynchronous communication ?
we don't need new syntax because
functions logically require blocking
whereas message-sends do not .
. by logically I mean
the function code is saying:
"( I am waiting for this return value), ...
[6.17: not true:
. asynchronous behaviour should be encoded in syntax
because it does make a difference at a high level:
. the only reason to feature asynchronous calling
is when the job or its shipping could take a long time;
if the job is quick
then there's no use bothering with multitasking,
and conversely, if the job is slow,
then multitasking is practically mandatory;
and, multitasking can change the caller's algorithm .
]
. there are 2 ways to ask for asynch:
# pass the address of a promise-kept flag
that the caller can check to unblock a promised var;
# caller passes an email address
for sending a promise-kept notification:
the caller selects an event code
to represent a call being finished,
and that event code gets emailed to caller
when job is done .
. the usual protocol is to
have the called task email back to the caller
a pointer to the var that has been updated,
so if there are multiple targets,
the async'ly called might send an email for each completion,
or they could agree that when all is done
to email the name of the async sub that has finished;
but if the caller wants to know which job was done,
then only the event-code idea indicates
a per-call completion event .
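. both protocols can be sketched with stock primitives (Python here; the event code and names are hypothetical): a promise-kept flag the caller waits on, and a mailbox that receives a per-call event code:

```python
import threading
import queue

# way 1: pass the address of a promise-kept flag
done = threading.Event()
slot = {}
def job_with_flag(x):
    slot["y"] = x + 1
    done.set()                      # promise kept
threading.Thread(target=job_with_flag, args=(41,)).start()
done.wait()                         # caller checks the flag to unblock
assert slot["y"] == 42

# way 2: caller passes a mailbox (the "email address") and
# selects an event code to represent this call being finished
mailbox = queue.Queue()
EVENT_F_DONE = 7                    # hypothetical per-call event code
def job_with_mail(x, mail, code):
    slot["z"] = x * 2
    mail.put(code)                  # event code emailed back when job is done
threading.Thread(target=job_with_mail, args=(21, mailbox, EVENT_F_DONE)).start()
assert mailbox.get() == EVENT_F_DONE   # caller learns which job finished
assert slot["z"] == 42
```

. way 2 is the one that scales to multiple outstanding calls, since each call can carry its own code.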
6.26: news.adda/co/perfbook:
Is Parallel Programming Hard, And, If So, What Can You Do About It?
Paul E. McKenney, December 16, 2011
Linux Technology Center, IBM Beaverton
paulmck@linux.vnet.ibm.com
2011 version:
src:
--
seen from here: cpu-and-gpu-trends-over-time
from: JeanBaptiste Poullet @jpoullet
Bioinformatician/statistician, next generation sequencing (NGS),
NMR, Linux, Python, Perl, C/C++
Labels:
adda,
asynchronous,
concurrency,
multitasking,
Promise
2012-06-14
architectures that prevent freezing #mac
5.9: sci.cyb/mac/architectures that prevent freezing:
to: cocoa-dev@lists.apple.com
. in a pre-emptive OS there should be no freezing;
given the new concurrency model
that includes the use of the graphics processor GPU
to do the system's non-graphics processing,
my current guess is that the freezes happen when
something goes wrong in the GPU,
and the CPU is just waiting forever .
. the CPU needs to have some way of getting control back,
and sending an exception message to
any of the processes that were affected by the hung-up GPU .
. could any of Apple's developers
correct this theory or comment on it ?
Labels:
architecture,
co.dev,
concurrency,
CUDA,
dev.cocoa,
dev.mac,
microkernel,
multitasking,
multithreading,
preemptive,
security
2012-05-14
historical moment linux is announced #minix #microkernel #security
4.23: news.cyb/sec/minix/
historical moment linux is announced to minix list:
. Linus is asking comp.os.minix
what they would like to see featured
in his 86-specific rewrite of minix .
. the first thread ends with an ignored call to
not have to compile device drivers into the kernel .
. decades later I would find a youtube lecture
complaining that linux really needs to have
modular device drivers
so that you don't have to reinstall them
every time a kernel upgrade comes out .
Adam David 8/26/91
One of the things that really bugs me about minix
is the way device drivers have to be compiled into the kernel.
So, how about doing some sensible
installable device driver code
(same goes for minix 2.0 whenever).
(adamd@rhi.hi.is)
Samuel S. Paik 6/26/92
User Level Drivers! User Level Drivers!
Why should device drivers per se
be part of the kernel? (Well, performance purposes...)
I've liked Domain OS where you could
map a device's registers into your process memory.
If you also include a way of bouncing interrupts
from a particular device to a process,
then we could have user level device drivers.
Then, for performance reasons,
after everything is debugged
there should be a way to move device drivers
into the kernel--but only if we want to...
Samuel Paik
d65y@vax5.cit.cornell.edu
Frans Meulenbroeks 6/29/92
Nice idea, but there are a lot of hardware configurations
where you cannot simply give [a process]
access to one device register .
microkernel vs a monolithic may not be the real issue:
. much of what Linus objects to in minix
is not that it's a microkernel vs a monolithic;
rather, minix in 1992 was also
high-level coded to be portable,
whereas linux is tailor-fit to the 80386 cpu
using very-low level assembler code .
. making the best use of a particular processor
requires a lot of assembly language programming;
so, couldn't you have a tailor-fit microkernel ?
. the primary intent of the microkernel
is to take advantage of the fact that
a processor has 2 privilege levels;
and, your job as a microkernel designer
is to minimize the amount of code
that is running within supervisor mode;
such that even the drivers are in user mode .
. Linus stated "( porting things to linux
is generally /much/ easier
than porting them to minix );
now, I'm not sure of the particulars,
but it seems that this ease of porting
would come from the fact that most programs
are making full use of unix-usable C language,
and apparently the problem with minix was
requiring security-related modifications to source code
such that the minix-usable C
looked much different than unix-usable C .
. the key to popular security, it would seem,
is creating a special compiler that
transforms unix-C source code
into binaries that respect our boundaries .
. if the C code is also including assembler,
then only virtualization can save us .
the multi-threaded issue:
. when writing a unix file system
a monolithic os is naturally multithreaded
whereas the minix microkernel
required hacks of its message queues?
. perhaps this means that the filesystem
was considered to be one object,
such that if multiple processes want to access files,
they have to request jobs through one interface;
whereas, linux is not using one interface,
rather, each process is capable of locking a file
and so, any time a process is given some cpu time,
it has direct access to its file buffer
(in both cases the disk drive would be seen as one object;
because, it queues the disk access requests
and orders them to minimize disk arm movement;
then it buffers some files in ram,
and processes have instant access to their file's buffer).
Frans Meulenbroeks and Linus debate multi-threading:
Linus>
I'd also suggest adding threading support:
the fs and mm processes need to be multithreaded
(or page faults etc are very difficult indeed to handle,
as a page-fault can happen in the fs process
and often needs the fs process to be handled).
Frans>
My thoughts about multithreading are mixed.
On the one side I like the performance gain.
On the other hand this complicates things,
so it does not really fit into the minix scope.
Linus Jun 26 1992>
Multi-threading isn't a question of performance:
you generally get better performance too,
but the most important part is that,
without multithreading, some things are
impossible or much more complicated .
I already mentioned demand-paging and virtual memory
that effectively /need/ multithreading,
but some other quite mundane things are simply
not possible to do without it.
The filesystem simply /has/ to be multithreaded
or you need a lot of ugly hacks.
Look at the tty code in the minix fs:
it's not exactly logical or nice.
As a tty request can take a long time,
minix has to do small ugly hacks
to free the fs process as fast as possible
so that it can do some other request while the tty is hanging.
It does a messy kind of message redirection,
but the redirection isn't a kernel primitive,
but an ugly hack to get this particular problem solved.
Not having multithreading
also results in the fact that the system tasks
cannot send each other messages freely:
you have to be very careful that there aren't
dead-locks where different system calls try to
send each other messages. Ugly.
Having multithreaded system tasks
would make a lot of things cleaner
(I don't think user tasks need to multi-thread,
but if the kernel supports it for system tasks,
it might as well work for user tasks also).
...
[. a hacked single-process message-passing fs]
removes a lot of the good points of messages.
What's the idea in using minix as a teaching tool
if it does some fundamentally buggy things?
Frans Jun 29 1992>
Sorry, but I do not understand why I cannot get
paging or virtual memory without multithreaded systems.
Of course there are essential parts of the system
that must remain in memory permanently.
But why can't the core kernel do
demand paging or virtual memory
(or dispatch the work to another tasks).
What other mundane things are not possible??
I don't think multithreadedness is needed. Not even for fs.
What is needed is a message buffering ipc mechanism
and a version of fs which does not do a sendrec, but only send,
and which has a global receives
which gets all the result messages.
Then a single threaded fs does work.
--
Frans Meulenbroeks (meulenbr@prl.philips.nl)
Philips Research Laboratories
[5.14: TODO: back to the main point:
. I'm still wondering what it is about Linus's
definition of "(microkernel)
that precludes it having a high degree of parallelism
regardless of whether you're being multi-threaded
or using message queues .]
Linus Benedict Torvalds 1/29/92
>1. MICROKERNEL VS MONOLITHIC SYSTEM
True, linux is monolithic,
and I agree that microkernels are nicer. ... .
From a theoretical (and aesthetical) standpoint
linux loses.
If the GNU kernel had been ready last spring,
I'd not have bothered to even start my project:
the fact is that it wasn't and still isn't.
Linux wins heavily on points of being available now.
...
Ken Thompson 2/4/92
I would generally agree that microkernels are
probably the wave of the future.
However, it is in my opinion easier to
implement a monolithic kernel.
It is also easier for it to turn into a mess in a hurry
as it is modified.
Regards,
Ken
-- "Rowe's Rule: The odds are five to six
that the light at the end of the tunnel
is the headlight of an oncoming train." -- Paul Dickson
5.14:
. the new minix3 is much like
Linus was wishing minix 1 would have been;
plus, it's a microkernel .
Labels:
adda,
architecture,
linux,
microkernel,
minix,
multitasking,
multithreading,
security
2010-07-27
adda's virtual preemptive multitasking
7.4: adda/translate/tasklets:
. tasklet here means
a chunk of app that is considered by
the task scheduler to be
a unit of time slicing,
allowed to be run uninterrupted .
. it comes from Ada's "(task),
a thread of execution,
and is a way of translating c's
single-threaded model
into a multi-threaded one .
--
. definitions of tasklet by others
include stackless`threads
and linux:
"(Tasklets are a deferred-execution method
as a way for interrupt handlers to schedule work
to be done in the very near future.
a tasklet is a function(of data pointer)
to be called in a software interrupt
as soon as the kernel finishes running
an interrupt handler .
)[7.7:
. app's are calling to the system
only for system resources:
for things they need permission to share
like ipc, and file sharing .]
. if app's have to call back
to system for so much,
why chop them into tasklets?
. why can't apps simply
call task mgt periodically
and in every loop ?
#:
. also need to watch for
mutual recursion loops
where stack over-use
is more dangerous than time hogging .
#:
. periodic calls to system might work for
multi-tasking system work
like when an app gives the system's
controller-view obj's a chance to
interact with user;
but, for giving many concurrent app's
a fair share of time slices,
that requires tasklets .
. c has no concept of threads,
each with their own stack,
so c's one system stack must be reserved for
only the tasklet-running system;
the calls to apps are not c calls at all .
tasklets are modeled from asm:
. each thread's program
is an array of tasklets;
pc: index into array of tasklets;
sp: ptr to thread's stack;
task mgt calls are
(thread#pc)(sp) .
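. a sketch of that model in Python (the real target is c; the step functions and names are hypothetical): each tasklet takes the thread's artificial stack, runs uninterrupted, and returns the pc to continue at, so task mgt can time-slice between whole tasklets:

```python
# each thread's program is an array of tasklets; pc indexes the array;
# a tasklet returns the index of the next tasklet, or None when finished
def step0(sp):
    sp.append(1)           # some uninterrupted chunk of work
    return 1               # continue at tasklet 1

def step1(sp):             # loop entry: one of the loop's yield points
    sp[-1] += 10
    return 2 if sp[-1] < 30 else None

def step2(sp):             # loop exit: the other yield point (a goto back)
    return 1

thread = {"program": [step0, step1, step2], "pc": 0, "sp": []}

def run_slice(thread):
    """task mgt call: (thread#pc)(sp), one time slice."""
    thread["pc"] = thread["program"][thread["pc"]](thread["sp"])
    return thread["pc"] is not None   # False when the thread is done

while run_slice(thread):   # a scheduler would interleave other threads here
    pass
assert thread["sp"] == [31]
```

. note that step1 and step2 bracket the loop, giving it the 2 yield points mentioned below; the scheduler regains control after every tasklet without any c`stack growth.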
7.6: adda/tasklets/goto's:
. the tasklet plan is complicated by
the looping made by goto's?
just treat each loop's {entry, exit} points
as instructions,
and the blocks between those instr's
will be tasklets .
. this will put at least 2 yield points
in every loop .
adda/tasklets/declare.blocks:
. recall nested declare.blocks
can create new locals,
so at level#n,
the thread has a local's array
with n ptrs to [decl'block` act'rec]'s
(one for each nest level) .
7.7:
. threads do need their own artificial stack:
each call to another sub
is considered a tasklet
unless it's calling a sub that is so brief
that it hasn't been broken into tasklets .
[7.27:
. there's basically 2 kinds of calls:
app calls that run an algorithm;
and type'mgt calls that do
brief operations on their members .
app calls are kept on the thread's
artificial stack,
whereas, calls to brief operations
are allowed to run on the c`stack .]
. one way is already written:
"(. all c routines
take an arg of where to start .
. the routine exits by returning
what step it should continue at .
)
-- if that is written somewhere,
it's now a defunct writing:
. the better way is to replace all calls
with the system's style of calls .
. I'm worried that any artificial stack idea
will be no more efficient than
just doing vm code?
. every artificial call
becomes a stack & exit
but not every call
needs to be a separate tasklet .
. also, the tasklet idea is meant only for
the app's that want multi-threading;
so, tasklets can be as unresponsive
as app's can tolerate;
adda's insured gui response,
on the other hand,
comes from adda embedding code into app's,
so that all code is frequently
calling the system to handle real-time .
Labels:
adda,
concurrency,
multitasking,
preemptive,
tasklet