11.7: adde/outliner/links whose default is
[target replaces current link instead of current page]:
. outlines are like a tree of links, except that
instead of the destination replacing the current view,
it gets added to it .
. notice that this is a generalizable link option:
a link would have a default action,
but by using the context menu, you could select any of
{ target replaces current link instead of current page
, target replaces current page
, target launches in new {window, tab, todo list}
}.
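. a rough sketch of that option set in python
(the Link class and LinkAction names are just illustrative,
not actual adde code):
from enum import Enum, auto

class LinkAction(Enum):
    REPLACE_CURRENT_LINK = auto()   # target expands in place, outliner-style
    REPLACE_CURRENT_PAGE = auto()   # classic hyperlink behavior
    OPEN_IN_NEW_WINDOW = auto()
    OPEN_IN_NEW_TAB = auto()
    OPEN_IN_TODO_LIST = auto()

class Link:
    def __init__(self, target, default=LinkAction.REPLACE_CURRENT_LINK):
        self.target = target
        self.default = default

    def activate(self, action=None):
        # the context menu would pass an explicit action;
        # a plain click falls back to the link's default
        return (action or self.default, self.target)

link = Link("notes/child-outline.txt")
print(link.activate())                                  # default: replace the link in place
print(link.activate(LinkAction.REPLACE_CURRENT_PAGE))   # context-menu override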
2011-11-30
inferred typing
11.12: adda/types/inferred typing:
(reviewing evlan's type system draft)
[@] 11.10.31/news.adda/lang"evlan/
evlan.org describes many addx features
. the reason you want to infer types
is so that you can have an efficient run-time;
if you don't care about that, as when prototyping,
then oop can check everything at run-time,
and both the coding and the compile times can be very fast .
. perhaps another reason to infer types
is so that you can warn users when oop might not help?
-- see the obj'c text's explanation of
why you don't want to type everything weakly
(as a pointer to anything);
basically it's because obj'c offers
so much freedom with deferred linking
that the compiler can make only the most general
inference:
if you don't mention a constraint
then the compiler has to assume you need
an unconstrained type for the purpose of
linking to future objects of unknown type .
. prototype developers and even production coders
like oop especially because it lends itself well
to such typeless programming,
ie, specifically avoiding type constraints by
letting the objects themselves decide whether
they understand a certain command .
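. a quick python illustration of that typeless style
(the class names are made up):
class Circle:
    def draw(self):
        return "drawing a circle"

class LogEntry:
    pass   # has no idea how to draw itself

def render(obj):
    # no type constraint on obj: just ask the object at run-time
    # whether it understands the draw command, smalltalk/obj'c style
    if hasattr(obj, "draw"):
        return obj.draw()
    return type(obj).__name__ + " does not respond to draw"

print(render(Circle()))     # drawing a circle
print(render(LogEntry()))   # LogEntry does not respond to draw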
. when you get into binary operations though,
(ie, pairs of objects -- and pairing is where
the dancing gets interesting )
then typeless programming is not so useful,
so typing is the only way to do what I call oop
(the classic example of oop
is the polymorphic type"number,
a class of subtypes including integers and floats;
ie, integers and floats can play well together
only because they are declared as subtypes of
the same supertype, Numbers ).
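. the number example, sketched with python's numbers.Number
abstract supertype; a statically typed language would reject
the bad call at compile time instead of at run time:
from numbers import Number   # the shared supertype of int, float, etc.

def add(a, b):
    # both operands only need to be some subtype of Number,
    # so int and float play well together under one declared supertype
    if not (isinstance(a, Number) and isinstance(b, Number)):
        raise TypeError("add is only defined on subtypes of Number")
    return a + b

print(add(2, 3.5))   # 5.5 -- int and float mix freely
# add("x", 1)        # rejected: str is not a subtype of Number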
Labels:
adda,
deferred linking,
late binding,
oop,
type,
type cluster,
type inferencing
evlan.org describes many #addx features
10.31: news.adda/lang"evlan/
evlan.org describes many addx features:
. Kenton Varda's Evlan has many addx features;
but, it "(currently has no type system);
and, it died in 2007.10.1, when the author "(got a real job)!
. what first caught my eye was cap-based security:
Most of today's computer platforms operate under
a fundamental assumption that has proven utterly false:
that if a user executes a program,
the user completely trusts the program.
This assumption has been made by just about
every operating system since Unix
and is made by all popular operating systems used today.
This one assumption is arguably responsible for
the majority of end-user security problems.
It is the reason malware -- adware, spyware, and viruses --
are even possible to write,
and it is the reason even big-name software
like certain web browsers
are so hard to keep secure.
Under the classical security model,
when a user runs a piece of software,
the user is granting that software the capability
to do anything that the user can do.
Under capability-based security,
when a user runs a piece of software,
the software starts out with no capabilities whatsoever.
The user may then grant specific capabilities to the program.
For example, the user might grant the program
permission to read its data files.
The user could also control whether or not
the program may access the network, play sounds, etc.
Most importantly,
all of these abilities can be controlled
independently on a per-program basis.
. the other main addx-like idea
is its being a virtual operating system
which runs on top of any other operating system;
via a virtual machine that is the sole access
to the underlying platform's OS .
. by a complete platform,
he means it is not specialized for
efficiently accessing all of bloated unix,
but to define what it needs,
and then have the vm access what it needs
for impl'ing the Evlan virtual platform .
adda's secure-by-language idea:
Even better than implementing security in an operating system,
is implementing it in a programming language:
developers are able to control the capabilities available
to each piece of code independently.
Good practice, then, would be to only give each component
the bare minimum capabilities that it needs to
perform its desired operation.
And, in fact, it is easier to give fewer capabilities,
so this is what programmers will usually do.
The result is that if a security hole exists in some part of your program,
the worst an attacker can do is
gain access to the capabilities that were given to that component.
--[. keep in mind that this won't work without
also implementing security at the hardware level;
this is the level where modularity can be enforced;
otherwise, malware could just jump to
whatever code there is with the permissions it needs .
(see qubes doc's) .]
It is often possible to restrict all "dangerous" capabilities
to one very small piece of code within your program.
Then, all you have to do is make sure that
that one piece of code is secure.
When you are only dealing with a small amount of code,
it is often possible to prove that this is the case.
It is, in fact, quite possible to prove
that a program written in a capability-based language
is secure.
[. great to know if your os and every other app
-- that means esp'ly your browser --
are all written in this provable language ? ]
see: Capability-based Security is used in the
EROS operating system and the E programming language.
Also, Marc Stiegler et al.
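. a toy python sketch of that object-capability discipline
(the FileReadCap and parse_config names are invented for illustration;
a real capability language like E enforces this at the language level
rather than by convention):
class FileReadCap:
    # capability to read one specific file, and nothing else
    def __init__(self, path):
        self._path = path
    def read(self):
        with open(self._path) as f:
            return f.read()

def parse_config(read_cap):
    # this component was handed only a read capability for one file;
    # it has no ambient authority to open other files or the network,
    # so a hole here cannot leak more than that one file
    text = read_cap.read()
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

with open("app.conf", "w") as f:   # set up a sample file for the demo
    f.write("theme=dark\nlang=en\n")
print(parse_config(FileReadCap("app.conf")))   # {'theme': 'dark', 'lang': 'en'}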
addx's idea that concurrency is the key
to secure speed:
Evlan has strong support for highly concurrent programming,
being a purely functional language.
Where automatic parallelization is not possible,
there are very lightweight threads (without stacks).
Most of today's systems use imperative languages
meant for a single CPU; eg, [written in adda:]
processAll(results#t.type).proc:
for(i.int = results`domain):
results#i`= processItem results#i
).
. in c, it would process each item in order,
from first to last.
It does not give the system any room to
process two items in parallel
or in some arbitrary (most-efficient) order .
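. the same contrast in runnable python
(process_item is a stand-in for the processItem above):
the explicit loop pins down an order,
while a pure map leaves the schedule to the system .
from multiprocessing import Pool

def process_item(x):
    return x * x   # stand-in for whatever processItem computes

if __name__ == "__main__":
    results = [1, 2, 3, 4, 5]

    # imperative, c-style: the order is fixed, item by item
    for i in range(len(results)):
        results[i] = process_item(results[i])

    # functional style: a pure map over the data lets the system
    # run the items in parallel or in any order it likes
    with Pool() as pool:
        parallel_results = pool.map(process_item, [1, 2, 3, 4, 5])

    print(results, parallel_results)   # same answer either way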
adda aspires to be simple:
It is far easier to remember
multiple uses for a single construct
than to remember multiple constructs.
--[. however, he has ada-itis and brags about
having only 19 keywords;
one of which is "(where)
which could have been said with a reused "(:) .]
interoperability done via compatibility layers:
If and when the legacy software is no longer in use,
the compatibility layers can be removed,
leaving only the ultra-efficient Evlan core behind.
functional:
Because Evlan is purely functional,
no function call is allowed to have "side effects".
. see www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=225
(dead link, is this right: queue.acm.org/detail.cfm?id=864034)?
-- ask google where else that link can be found:
Natural programming languages and environments
Communications of the ACM September 2004/Vol. 47, No. 9 by Brad A. Myers, John F. Pane, Andy Ko
natural programming:
This article presents findings on natural programming languages.
(i.e. natural as in intuitive in the UserCentric way)
It advocates features for programming languages
that are justified by these findings.
Areas include:
SyntacticSugar, which causes fewer errors
(it is more intuitive/natural).
Better integrated debugging-support,
esp. for hypotheses about bug-causes.
which suggests that humans like to write rules of the form
"When condition X is true, do Y".
I am not yet sure to what extent this will
change the current system, however.
By "natural programming" we are aiming forFunctional Bytecode
the language and environment to work the way that
nonprogrammers expect.
We argue that if the computer language were to
enable people to express algorithms and data
more like their natural expressions,
the transformation effort would be reduced.
Thus, the Natural Programming approach
is an application of the standard user-centered design process
to the specific domain of programming languages and environments.
[at that link is a] good short summary of the
Natural Programming project here at CMU.
We're affiliated with the EUSES consortium,
which is a group of universities working on
end-user software engineering.
google's 2007 Update on the Natural Programming Project
Functional Bytecode:
Bytecode for the Evlan virtual machine
uses a functional representation of the code.
In contrast, the VMs of Java and .NET
use assembly-like imperative instructions.
. the GNU C Compiler's "Static Single Assignment" phase
is actually very close to being equivalent to
a purely functional language. [no overwrites]
If GCC does this internally just to make optimization easier,
then it doesn't take a huge stretch to imagine a system which
compiles imperative code to functional bytecode .
--[11.30:
. another contrast to assembly-like imperative instructions
is addm's idea for a high-level interpreter:
instead of translating from an abstract syntax tree (AST)
into a sequence of assembly instructions;
addm, the vm, would try to evaluate an AST . ]
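. a tiny python sketch of that high-level interpreter idea
(the node classes are invented; addm itself is only a design note):
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def evaluate(node):
    # instead of lowering the tree to assembly-like instructions,
    # walk the tree and reduce each node directly to a value
    if isinstance(node, Num):
        return node.value
    if isinstance(node, Add):
        return evaluate(node.left) + evaluate(node.right)
    if isinstance(node, Mul):
        return evaluate(node.left) * evaluate(node.right)
    raise TypeError("unknown node: " + repr(node))

tree = Mul(Add(Num(2), Num(3)), Num(4))   # (2 + 3) * 4
print(evaluate(tree))                     # 20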
The purpose of constraining variables to types
is to get the compiler to check your work for bugs
and optimize code as well .
. if you could specify that an index to an array
must be less than the size of the array
-- and if the compiler could then check for that --
then buffer overruns (one of the most common bugs
leading to security holes in modern software)
would be completely eliminated.
(run-time bounds checking can prevent buffer overruns,
but then users see your app shutting down unexpectedly).
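. a run-time sketch of that idea in python;
a compiler with range-checked index types (ada-style)
would reject the bad index statically instead:
class Index:
    # an index that can only be constructed within [0, size)
    def __init__(self, value, size):
        if not 0 <= value < size:
            raise ValueError("index %d out of range for size %d" % (value, size))
        self.value = value

def get(buffer, idx):
    # any Index that exists has already been proven in range,
    # so this access can never overrun the buffer
    return buffer[idx.value]

data = [10, 20, 30]
print(get(data, Index(2, len(data))))   # 30
# Index(3, len(data))                   # rejected before any access happens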
Labels:
adda,
addm,
addx,
cap'based,
capabilities,
concurrency,
e-lang,
functional,
microkernel,
security
2011-11-01
bundles obviate folders-first policy option
10.21: adde/fs/mac's bundle idea obviates folders-first policy option:
. instead of having a folders-first policy option,
the file system should be more like
mac's bundle system
where each folder has the option of
declaring itself to be either
a folder or a bundle .
. so then, in a file system window,
folders can be listed before files,
but the bundles are treated like files .
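. a small python sketch of that listing policy
(the Entry fields are invented):
plain folders sort first, while bundles,
although they are folders on disk,
sort in with the ordinary files .
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    is_folder: bool = False
    is_bundle: bool = False

def listing_order(entries):
    # key: plain (non-bundle) folders first, then everything else by name
    return sorted(
        entries,
        key=lambda e: (not (e.is_folder and not e.is_bundle), e.name.lower()),
    )

entries = [
    Entry("report.txt"),
    Entry("Photos", is_folder=True),
    Entry("MyApp.app", is_folder=True, is_bundle=True),
    Entry("Archive", is_folder=True),
]
for e in listing_order(entries):
    print(e.name)   # Archive, Photos, MyApp.app, report.txt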
user changes can help with localization
10.23: adde/gui/user changes help with localization:
. there should be a wrench icon
by every command
that leads to this message:
"(if you figured out what this does,
and you can think of a better word in your language,
please do tell by emailing here ...)
even better,
the gui should be editable so that when you figure something out,
you can give it your own words or icons .
. the program keeps a config file of this,
and every time they change it,
you can remind them that when they
feel satisfied with their gui
they can share it with others
(at this email ...).
. make sure they confirm that their understood language
is in fact what they think they're targeting;
and allow comments that include dialects or locations
that they think it's special to ?
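. a minimal python sketch of that override mechanism
(the file name and string keys are invented for illustration):
import json, os

DEFAULTS = {"file.save": "Save",
            "wrench.hint": "suggest a better word for this command"}

def load_labels(path="gui_overrides.json"):
    labels = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            labels.update(json.load(f))   # the user's own words win
    return labels

def save_override(key, text, path="gui_overrides.json"):
    overrides = {}
    if os.path.exists(path):
        with open(path) as f:
            overrides = json.load(f)
    overrides[key] = text
    with open(path, "w") as f:
        json.dump(overrides, f, indent=2)
    # this is also the moment to remind the user that a gui
    # they are happy with can be shared back for localization

save_override("file.save", "Guardar")
print(load_labels()["file.save"])   # Guardar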
-- source of that idea's inspiration:
news: Software Localization Paradox:
Like any other translation,
software localization is best done by people who know well
both the original language in which
the software interface was written – usually English,
and the target language.
People who don’t know English strongly
prefer to use software in a language they know.
If the software is not available in their language,
they will either not use it at all
or will have to memorize lots of
otherwise meaningless English strings
and locations of buttons.
People who do know English often prefer to
use software in English
even if it is available in their native language.
The two most frequent explanations for that is
that the translation is bad
and that people who want to use computers
should learn English anyway.
The problem is that for various reasons
lots of people will never learn English
even if it would be mandatory in schools
and useful for business.
They will have to suffer the bad translations
and will have no way to fix it.
So this is the paradox
– to fix localization bugs,
someone must notice them,
and to notice them, more people who know English
must use localized software,
but people who know English
rarely use localized software.
That’s why lately i’ve been evangelizing about it.
Even people who know English well
should use software in their language
– not to boost their national pride,
but to help the people who
speak that language and don’t know English.
They should use the software especially if it’s translated badly,
because they are the only ones who can
report bugs in the translation
or fix the bugs themselves.
(A side note: Needless to say,
Free Software
is much more convenient for localization,
because proprietary software companies are usually
too hard to even approach about this matter;
they only pay translators if they have a reason to believe
that it will increase sales.
This is another often overlooked advantage
of Free Software.)
I am glad to say that i convinced most people to whom i spoke
about it at Wikimania
to at least try to use Firefox in their native language
and taught them where to report bugs about it.
I also challenged them to write at least
one article in the Wikipedia
in their own language,
such as Hindi, Telugu or Kannada
– as useful as the English Wikipedia is
to the world,
Telugu Wikipedia is much more useful
for people who speak Telugu, but no English.
I already saw some results.
I am now looking for ideas and verifiable data
to develop this concept further.
What are the best strategies to convince people
that they should use localized software?
For example:
How economically viable is software localization?
What is cheaper for an education department of a country
– to translate software for schools
or to teach all the students English?
Or: How does the absence of localized software
affect different geographical areas
in Africa, India, the Middle East?
Translatewiki.net
. a localisation platform for translation communities,
language communities, and free and open source projects.
-- found by a gui translator who referred to
wikimedia's localization paradox .
from an Interview with Wikimedia’s Amir Aharoni .
Labels:
adde,
gui,
localisation