Go lang at eweek.com and stanford.edu

10.7.30: news.adda/lang"go

. Go is a systems programming language:
expressive, concurrent, garbage-collected .
. go @ g'code .
. go @ golang.org .

7.30: news.adda/lang"go/eweek's interview of Pike:

paraphrase of eweek.com` Darryl K. Taft 2009-11-13
Pike, Thompson, and Griesemer
are the designers; and,
eweek is interviewing Pike .

. it can take a very long time
to build a program.
Even incremental builds can be slow.
. many of the reasons for that are
just C and C++, and the tools
that everybody used were also slow.
So we wanted to start from scratch .

[. the Gc compiler has an unusual design
based on the Plan 9 compiler suite;
Gccgo, written in C++,
generates good code more slowly]
The Plan 9 team produced some
programming languages of its own,
such as Alef and Limbo,
-- in the same family tree
and inspired by the same approach."
... it uses the Plan 9 compiler suite,
linker and assembler,
but the Go compiler is all new.

. One of the big problems is
maintaining the dependencies;
C++ and C in particular
make it very hard to guarantee
that you're not building
anything that you don't need to.
it's the almost dominant reason
why build times are so slow
for large C++ applications.
. Go is very rigid about
dependency specification;
go raises a compile-time error if you try to
pull in an unused dependency .
This guarantees a minimal tree
and that really helps the build.
. we pull the dependency information
up the tree as we go
so that everything
is looked at only once.
and never compiled more than once .
. the typically used lang's
nowadays {C++, Java},
were designed a generation ago
in terms of programming abilities
and understanding.
And they're not aging very well
in the advent of multicore processing
and the dominance of cluster computing
-- things like that really needed a
rethink at the language level.

an unusual type system:
. some have described Go as
oop without objects.
Instead of having a notion of a class
defining all the methods for that class
and then subclassing and so on,
Go is much more orthogonal .
. to write a program that needs
something like a 'write' or a 'sort'
you just say I need something that
knows how to write or sort.
You don't have to explicitly
subclass something from a sorting interface
and build it down.

It's much more compiler-derived
instead of programmer-specified.
And a smaller point,
but it matters to some of us:
you can put an oop`method
on any type in the system.
it is fully statically typed;
eg, At compile time you know
that everything coming in
can implement the sorting interface
or it would not statically compile .

"But there is dynamic type information
under the covers ...
So it's a statically typed language
with a little bit of
polymorphism mixed in.

Go supports concurrency:
Although most of today's larger machines
have multiple cores,
the common languages out there
don't really have enough support
for using those cores efficiently .

. there are libraries to help with this,
"but they're very difficult to use
and the primitives are fairly clumsy,"

"One of our goals in Go
was to make a language that
could use those processors well,
particularly for the kind of
server-side programming
that is done at Google:
where you have many client requests
coming into a machine
that's multiplexing them .
"It's a parallel Web server with
multiple internal threads of control
for each client request that comes in"

We don't yet have the build
at the kind of scale we need
to say Go is definitely the way to go.
And there's definitely
some more library support needed
for things like scheduling and so on.
But from a starting point
it's a really good place to be."

. programmers who are used to developing
systems software with {C, C++, Java}
will find Go interesting because
it's not much slower than C
and, has a lot of the
nice, light dynamic feel
of those [rapid-dev lang's];
eg, {Python, Ruby (JavaScript?)}

"It's not a Google official language
the way that, say,
Java was an official Sun language,"

a skunkworks project:
"(. one typically developed by a
small and loosely structured group
who research and develop
primarily for the sake of radical innovation.
A skunkworks project often operates with
a high degree of autonomy, in secret,
and unhampered by bureaucracy,
with the understanding that
if the development is successful
then the product will be designed later
according to the usual process.)
. we want people to help us expand it
esp'ly the Windows support; and tools:
esp'ly a debugger, and an IDE;
esp'ly Eclipse support:
It's a fair bit of effort
to make a proper plug-in for Eclipse.

. a few welcomed language features,
like union types, are still needed .

The run-time needs a fair bit of development.
The garbage collector works just fine
but it's not robust or efficient enough
to be a practical solution
for a large-scale server.
. we're very aware of the need to
prevent gc from causing pauses
in real-time systems programming .
But there's some very clever work
published at IBM that may be of use
for a much better garbage collector.
. that would be a milestone:
a native compiled language with gc
that works well in a systems environment.

news.adda/lang"go/Another Go at Language Design:

Google's Rob Pike speaking at stanford.edu 2010.4.28
Stanford EE Computer Systems Colloquium

About the talk:
. A while back, it seemed that
type-driven object-oriented languages
such as C++ and Java had taken over.
They still dominate education.
Yet the last few years
have seen a number of
different languages reach prominence,
often of very different styles:
Python, Ruby, Scala, Erlang, Lua,
and many more.
Surely there are enough languages.
Yet new ones keep appearing.
Why? And why now?
In this talk I will explain
some possible reasons
and why they led us to define
yet another language, Go.
Slides: (pdf) [the source of this entry]

Another Go at Language Design:
Russ Cox
Robert Griesemer
Rob Pike
Ian Taylor
Ken Thompson
David Symonds, Nigel Tao, Andrew
Gerrand, Stephen Ma, and others,
with many contributions from the
open source community.
I'm always delighted by the
light touch and stillness
of early programming languages.
Not much text; a lot gets done.
Old programs read like quiet conversations
between a well-spoken research worker
and a well-studied mechanical colleague,
not as a debate with a compiler.
Who'd have guessed
sophistication bought such noise?
-Dick Gabriel

If more than one function is selected,
any function template specializations
in the set
are eliminated if the set also contains
a non-template function,
and any given
function template specialization F1
is eliminated if the set contains
a second function template specialization
whose function template
is more specialized than the
function template of F1
according to the
partial ordering rules of 14.5.6.2.
After such eliminations, if any,
there shall remain exactly one selected function.
(C++0x, §13.4 [4])

Which Boost templated pointer type should I use?

public static <I, O> ListenableFuture<O> chain(
  ListenableFuture<I> input
  , Function <? super I, ? extends ListenableFuture
      <? extends O>> function )
dear god make it stop
- a recently observed chat status

foo::Foo *myFoo =
new foo::Foo(foo::FOO_INIT)
- but in the original Foo was a longer word

How did we get here?
A personal analysis:
1) C and Unix became dominant in research.
2) The desire for a higher-level language
led to C++,
which grafted the Simula style of oop onto C.
It was a poor fit
but since it compiled to C
it brought high-level programming to Unix.
3) C++ became the language of choice
in {parts of industry, many research universities}.
4) Java arose
as a clearer, stripped-down C++.
5) By the late 1990s,
a teaching language was needed that seemed relevant,
and Java was chosen.
Programming became too hard:
These languages are hard to use.
They are subtle, intricate, and verbose.
Their standard model is oversold,
and we respond with add-on models
such as "patterns".
( Norvig:
patterns are a demonstration of
weakness in a language.)
Yet these languages are successful and vital.
A reaction
The inherent clumsiness of the main languages
has caused a reaction.
A number of successful simpler languages
(Python, Ruby, Lua, JavaScript, Erlang, ...)
have become popular, in part,
as a rejection of the standard languages.
Some beautiful and rigorous languages
(Scala, Haskell, ...)
were designed by domain experts
although they are not as widely adopted.
So despite the standard model,
other approaches are popular
and there are signs of a growth in
"outsider" languages,
a renaissance of language invention.
A confusion
The standard languages (Java, C++)
are statically typed.
Most outsider languages (Ruby, Python, JavaScript)
are interpreted and dynamically typed.
Perhaps as a result, non-expert programmers
have confused "ease of use" with
interpretation and dynamic typing.
This confusion arose because of
how we got here:
grafting an orthodoxy onto a language
that couldn't support it cleanly.
standard languages#The good:
very strong:
type-safe, effective, efficient.
In the hands of experts? great.
Huge systems and huge companies
are built on them.
. practically work well for
large scale programming:
big programs, many programmers.
standard languages#The bad:
hard to use.
Compilers are slow and fussy.
Binaries are huge.
Effective work needs language-aware tools,
distributed compilation farms, ...
Many programmers prefer to avoid them.
The languages are at least 10 years old
and poorly adapted to the current tech:
clouds of networked multicore CPUs.
Flight to the suburbs:
This is partly why Python et al.
have become so popular:
They don't have much of the "bad".
- dynamically typed (fewer noisy keystrokes)
- interpreted (no compiler to wait for)
- good tools (interpreters make things easier)
But they also don't have the "good":
- slow
- not type-safe (static errors occur at runtime)
- very poor at scale
And they're also not very modern.
A niche
There is a niche to be filled:
a language that has the good,
avoids the bad,
and is suitable to modern computing infrastructure:
statically typed
light on the page
fast to work in
scales well
doesn't require tools, but supports them well
good at networking and multiprocessing
The target:
Go aims to combine the safety and performance
of a statically typed compiled language
with the expressiveness and convenience
of a dynamically typed interpreted language.
. be suitable for modern systems programming.
Hello, world 2.0
Serving http://localhost:8080/world:
package main
import (
  "fmt"
  "http"
)
func handler(c *http.Conn, r *http.Request) {
  fmt.Fprintf(c, "Hello, %s.", r.URL.Path[1:])
}
func main() {
  http.ListenAndServe(":8080", http.HandlerFunc(handler))
}
How does Go fill the niche?
Fast compilation
Expressive type system
Garbage collection
Systems programming capabilities
Clarity and orthogonality
Compilation demo:
Why so fast?
New clean compiler worth ~5X compared to gcc.
We want a millionX for large programs,
so we need to fix the dependency problem.
In Go, programs compile into packages
and each compiled package file
imports transitive dependency info.
If A.go depends on B.go, which depends on C.go:
- compile C.go, B.go, then A.go.
- to compile A.go,
compiler reads B.o but not C.o.
At scale, this can be a huge speedup.
Trim the tree:
Large C++ programs (Firefox, OpenOffice, Chromium)
have huge build times.
On a Mac (OS X 10.5.7, gcc 4.0.1):
. C`stdio.h: 360 lines from 9 files
C++`iostream: 25,326 lines from 131 files
Obj'C`Cocoa.h 112,047 lines from 689 files
-- But we haven't done any real work yet!
Go`fmt: 195 lines from 1 file:
summarizing 6 dependent packages.
As we scale,
the improvement becomes exponential.
Expressive type system
Go is an oop language, but unusually so.
There is no such thing as a class.
There is no subclassing.
. even basic types, eg,{integers, strings},
can have methods.
Objects implicitly satisfy interfaces,
which are just sets of methods.
Any named type can have methods:
type Day int
var dayName = []string{"Sunday", "Monday", [...]}
func (d Day) String() string {
  if 0 <= d && int(d) < len(dayName) {
    return dayName[d]
  }
  return "NoSuchDay"
}
type Fahrenheit float
func (t Fahrenheit) String() string {
  return fmt.Sprintf("%.1f°F", t)
}
Note that these methods
do not take a pointer (although they could).
This is not the same notion as
Java's Integer type:
it's really an int (float).
There is no box.
type Stringer interface { String() string }
func print(args ...Stringer) {
  for i, s := range args {
    if i > 0 { os.Stdout.WriteString(" ") }
    os.Stdout.WriteString(s.String())
  }
}
print(Day(1), Fahrenheit(72.29))
=> Monday 72.3°F
Again, these methods do not take a pointer,
although another type might define
a String() method that does,
and it too would satisfy Stringer.
Empty Interface:
The empty interface (interface {})
has no methods.
Every type satisfies the empty interface.
func print(args ...interface{}) {
  for i, arg := range args {
    if i > 0 { os.Stdout.WriteString(" ") }
    switch a := arg.(type) { // "type switch"
    case Stringer: os.Stdout.WriteString(a.String())
    case int: os.Stdout.WriteString(itoa(a))
    case string: os.Stdout.WriteString(a)
    // more types can be used
    default: os.Stdout.WriteString("????")
    }
  }
}
print(Day(1), "was", Fahrenheit(72.29))
=> Monday was 72.3°F

Small and implicit
Fahrenheit and Day satisfied Stringer implicitly;
other types might too.
A type satisfies an interface simply by
implementing its methods.
There is no "implements" declaration;
interfaces are satisfied implicitly.
It's a form of duck typing,
but (usually) checkable at compile time.
An object can (and usually does)
satisfy many interfaces simultaneously.
For instance, Fahrenheit and Day satisfy Stringer
and also the empty interface.
In Go, interfaces are usually small:
one or two or even zero methods.
type Reader interface { Read(p []byte) (n int, err os.Error) }
// And similarly for Writer
Anything with a Read method implements Reader.
- Sources: files, buffers, network connections, pipes
- Filters: buffers, checksums, decompressors, decrypters
JPEG decoder takes a Reader, so it can decode from disk,
network, gzipped HTTP, ....
Buffering just wraps a Reader:
var bufferedInput Reader = bufio.NewReader(os.Stdin)
Fprintf uses a Writer:
func Fprintf(w Writer, fmt string, a ...interface{})

Interfaces can be retrofitted:
Had an existing RPC implementation
that used custom wire format.
Changed to an interface:
type Encoding interface {
  ReadRequestHeader(*Request) os.Error
  ReadRequestBody(interface{}) os.Error
  WriteResponse(*Response, interface{}) os.Error
  Close() os.Error
}
Two functions (send, recv) changed signature.
Before:
func sendResponse
  (sending *sync.Mutex, req *Request
  , reply interface{}
  , enc *gob.Encoder
  , errmsg string)
After (and similarly for receiving):
func sendResponse
  (sending *sync.Mutex
  , req *Request
  , reply interface{}
  , enc Encoding
  , errmsg string)
That is almost the whole change
to the RPC implementation.

Post facto abstraction:
We saw an opportunity:
RPC needed only Encode and Decode methods.
Put those in an interface
and you've abstracted the codec.
Total time: 20 minutes,
including writing and testing the
JSON implementation of the interface.
(We also wrote a trivial wrapper
to adapt the existing codec
for the new rpc.Encoding interface.)
In Java,
RPC would be refactored
into a half-abstract class,
subclassed to create
JsonRPC and StandardRPC.
In Go,
there is no need to manage a type hierarchy:
just pass in an encoding interface stub
(and nothing else).
Systems software must often manage
connections and clients.
Go provides independently executing goroutines
that communicate and synchronize
using channels.
Analogy with Unix:
processes connected by pipes.
But in Go things are fully typed
and lighter weight.
Start a new flow of control with the go keyword.
Parallel computation is easy:
func main() {
  go expensiveComputation(x, y, z)
  anotherExpensiveComputation(a, b, c)
}
Roughly speaking, a goroutine is like
a thread, but lighter weight:
- stacks are small, segmented, sized on demand
- goroutines are muxed [multiplexed] by demand
onto true threads
- requires support from
language, compiler, runtime
- can't just be a C++ library
Thread per connection:
Doesn't scale in practice,
so in most languages
we use event-driven callbacks
and continuations.
But in Go,
a goroutine per connection model scales well.
for {
  rw := socket.Accept()
  conn := newConn(rw, handler)
  go conn.serve()
}
Our trivial parallel program again:
func main() {
  go expensiveComputation(x, y, z)
  anotherExpensiveComputation(a, b, c)
}
Need to know when the computations are done.
Need to know the result.
A Go channel provides the capability:
a typed synchronous communications mechanism.
Goroutines communicate using channels.
func computeAndSend(x, y, z int) chan int {
  ch := make(chan int)
  go func() { ch <- expensiveComputation(x, y, z) }()
  return ch
}
func main() {
  ch := computeAndSend(x, y, z)
  v2 := anotherExpensiveComputation(a, b, c)
  v1 := <-ch
  fmt.Println(v1, v2)
}

A worker pool:
Traditional approach (C++, etc.) is to
communicate by sharing memory:
- shared data structures protected by mutexes
Server would use shared memory
to apportion work:
type Work struct {
  x, y, z int
  assigned, done bool
}
type WorkSet struct {
  mu   sync.Mutex
  work []*Work
}
But not in Go.
Share memory by communicating
In Go, you reverse the equation.
- channels use the <- operator to
synchronize and communicate
Typically don't need or want mutexes.
type Work struct { x, y, z int }
func worker(in <-chan *Work, out chan<- *Work) {
  for w := range in {
    w.z = w.x * w.y
    out <- w
  }
}
func main() {
  in, out := make(chan *Work), make(chan *Work)
  for i := 0; i < 10; i++ {
    go worker(in, out)
  }
  go sendLotsOfWork(in)
  receiveLotsOfResults(out)
}

Garbage collection:
Automatic memory management
simplifies life.
GC is critical for concurrent programming;
otherwise it's too fussy
and error-prone to track ownership
as data moves around.
GC also clarifies design.
A large part of the design
of C and C++ libraries
is about deciding who owns memory,
who destroys resources.
But garbage collection isn't enough.
Memory safety:
Memory in Go is intrinsically safer:
pointers but no pointer arithmetic
no dangling pointers
(locals move to heap as needed)
no pointer-to-integer conversions
   ( Package unsafe allows this
   but labels the code as dangerous;
   used mainly in some low-level libraries.)
all variables are zero-initialized
all indexing is bounds-checked
Should have far fewer buffer overflow exploits.
Systems language:
By systems language, we mean
suitable for writing systems software.
- web servers
- web browsers
- web crawlers
- search indexers
- databases
- compilers
- programming tools (debuggers, analyzers, ...)
- IDEs
- operating systems (maybe)
Systems programming:
loadcode.blogspot.com 2009:
"[Git] is known to be very fast.
It is written in C.
A Java version JGit was made.
It was considerably slower.
Handling of memory and lack of unsigned types
[were] some of the important reasons."
Shawn O. Pearce (git mailing list):
"JGit struggles with not having
an efficient way to represent a SHA-1.
C can just say "unsigned char[20]"
and have it inline into the
container's memory allocation.
A byte[20] in Java will cost
an *additional* 16 bytes of memory,
and be slower to access
because the bytes themselves
are in a different area of memory
from the container object."
Control of bits and memory:
Like C, Go has
- full set of unsigned types
- bit-level operations
- programmer control of memory layout
type T struct {
  x   int
  buf [20]byte
  [...]
}
- pointers to inner values
p := &t.buf

Simplicity and clarity:
Go's design aims for being easy to use,
which means it must be easy to understand,
even if that sometimes contradicts
superficial ease of use.
Some examples:
No implicit numeric conversions,
although the way constants work
ameliorates the inconvenience.

No method overloading.
For a given type,
there is only one method
with that name.
There is no "public" or "private" label.
Instead, items with UpperCaseNames
are visible to clients;
lowerCaseNames are not.
Numeric constants are "ideal numbers":
no size or signed/unsigned distinction,
hence no L or U or UL endings.
077 // octal
1 << 100
Syntax of literal determines default type:
1.234e5 // float
1e2 // float
100 // int
But they are just numbers
that can be used at will
and assigned to variables
with no conversions necessary.
seconds := time.Nanoseconds()/1e9
// result has integer type

High precision constants:
Arithmetic with constants is high precision.
Only when assigned to a variable
are they rounded or truncated to fit.
const MaxUint = 1<<32 - 1
const Ln2
= 0.6931471805599453094172321214581\
const Log2E = 1/Ln2 // accurate reciprocal
var x float64 = Log2E // rounded to nearest float64 value
The value assigned to x
will be as precise as possible
in a 64-bit float.

And more:
There are other aspects of Go
that make it easy and expressive
yet scalable and efficient:
- clear package structure
- initialization
- clear rules about how a program
begins execution
- top-level initializing
functions and values
- composite values
var freq =
map[string]float{"C4":261.626, "A4":440}
// etc.
- tagged values
var s = Point{x:27, y:-13.2}
- function literals and closures
go func() { for { c1 <- <-c2 } }()
- reflection
- and more....
Plus automatic document generation and formatting.
Go is different:

Go is object-oriented not type-oriented:
– inheritance is not primary
– methods on any type,
but no classes or subclasses
Go is (mostly) implicit not explicit:
– types are inferred not declared
– objects have interfaces
but they are derived, not specified
Go is concurrent not parallel:
– intended for program structure,
not max performance
– but still can keep all the cores busy
– ... and many programs are
more nicely expressed with concurrent ideas
even if not parallel at all .
The language is designed and usable.
Two compiler suites:
Gc, written in C,
generates OK code very quickly.
- unusual design based on the
Plan 9 compiler suite
Gccgo, written in C++,
generates good code more slowly
- uses GCC's code generator and tools
Libraries good and growing,
but some pieces are still preliminary.
Garbage collector works fine
(simple mark and sweep)
but is being rewritten for more concurrency,
less latency.
Available for Linux etc., Mac OS X.
Windows port underway.
All available as open source.

Go was the 2009 TIOBE "Language of the year"
two months after it was released.
"I have reimplemented a networking project
from Scala to Go.
Scala code is 6000 lines.
Go is about 3000.
Even though Go does not have the power of abbreviation,
the flexible type system
seems to out-run Scala
when the programs start getting longer.
Hence, Go produces much shorter code asymptotically."
- Petar Maymounkov
"Go is unique because of the
set of things it does well.
It has areas for improvement,
but for my needs it is the best match
when compared to: C, C++, C#, D, Java,
Erlang, Python, Ruby, and Scala."
- Hans Stimer

For those on the team,
it's the main day-to-day language now.
It has rough spots but mostly in the libraries,
which are improving fast.
Productivity seems much higher.
(I get behind on mail much more often.)
Most builds take a fraction of a second.
Starting to be used inside Google
for some production work.
We haven't built truly large software in Go yet,
but all indicators are positive.
Try it out:
This is a true open source project
-- Full source, documentation and much more
... you're welcome to fork it too!
(issue#9 considers renaming the project):
Comment 120 by cjaramilu, Nov 11, 2009
"Gol", it is soccer goal in spanish
Comment 134 by stefan.midjich, Nov 11, 2009
Comment 242 by sekour, Nov 11, 2009
Comment 711 by phio.asia, Nov 12, 2009
"Qu", which means "Go" in Chinese :-) .]
Rob Pike, Another Go at Language Design
Wednesday, April 28, 2010

golang's issue#9 -- clashes with go! -- revisited

10.7.30: news.adda/lang"go!/clash with go

. I was reminded of a name dispute
between Google and a language designer,
and couldn't remember the details;
I didn't know how it compared to
Microsoft denying Linux PC's
the name Lindows (mocking their name ?)
so I revisited the matter .

wiki's take needs an update:
"( On the day of the general release,
Francis McCabe, developer of the
Go! programming language,
requested a name change of Google's language
to prevent confusion with his language.)
. that is not quite accurate:
he claimed, point blank, that "(Go)
was exactly the name of his language,
and if you look on his personal filesharing website,
indeed, "(go) is the nickname;
in both his whitepaper (pdf) and his book intro,
the published name is "(go!) not "(go):

– A Multi-paradigm Programming Language
for Implementing Multi-threaded Agents
K.L. Clark Dept. of Computing Imperial College London, UK
F.G. McCabe Fujitsu Labs of America Sunnyvale CA, USA

. reading that paper,
you soon find why the name is "(go!)
-- and not go --
. the main purpose of go! is to
reverse some of prolog's faults;
. cleverly,
the reverse of "(go!) is (!og)
as in the "(log) of "(prolog)
which stands for (logic).

. the one thing he wishes most
would go away in prolog
is the "(!) operator (pronounced "(cut))
hence the name "(go!)
as in "(go away, Cut operator):
"( Go! has many features in common with Prolog,
particularly multi-threaded Prolog’s,
there are significant differences
related to transparency of code
and security.
Features of Prolog that mitigate against transparency,
such as the infamous cut (!) primitive,
are absent from Go!.)
. meanwhile, the name of
google's go lang'
could be short for "(GOogle),
and is said to be a reference to
the design being motivated by
a need for compilers to go faster:
"( "In Google we spent so long
literally waiting for compilations,
even though we have parallelism
in all of these tools to help;
even incremental builds can be slow.
And we looked at this and realized
many of the reasons for that
are just fundamental to C and C++,
as were the tools that everybody used .
So we wanted to start from scratch .)
. wikipedia has this reference:

Google 'Go' Name Brings Accusations Of 'Evil'
InformationWeek Thomas Claburn November 11, 2009

"( McCabe's Go! programming language
is described in a 2007 book he published
and in a research paper published in 2004 (pdf)) .
"( It is in the tradition of
languages like Prolog.
In particular, my motivation was
bringing some of the discipline
of software engineering
to logic programming.)
. and *that*
is exactly the origin of the "(!)
-- an essential part of the name's genius --
so it's bewildering why he would
want to also claim the "(go) name
-- in addition to the "(go!) name .
some wikipedia authors speculate
that there could be some confusion,
"(McCabe requested a name change
to prevent confusion with his language, Go! .)
indeed, some have used "(yahoo!) as an example:
"( Comment 77 by wrinkles, Nov 11, 2009
Go and Go! are not the same? Great,
I just started a new company called Yahoo.)
you can't use "(Yahoo) as a noun
for something other than Yahoo!
without giving some context .
eg, hearing "(go!) might provoke
the question:
"( did you mean "(go faster),
or "(go away cut-operator) ?
) .]
. there's a link to go's "(Issue 9) debate
where Frank McCabe wrote up an issue with Go:
"I have already used the name for
*MY* programming language."

all of McCabe's entries in Issue 9 to date:
by fmccabe, Nov 10, 2009
I have been working on a programming language,
also called Go,
for the last 10 years.
There have been papers published on this
and I have a book.
I would appreciate it if
google changed the name of this language;
as I do not want to have to
change my language!

Comment 2 by fmccabe, Nov 10, 2009
If you google (sic) francis mccabe go
you will find some references.
I published the book on lulu.com

Comment 5 by fmccabe, Nov 10, 2009
My language is called Go!.
The book is called Let's Go!.
The issue is not whether or not Google's go
will be well known. It is one of fairness.
. if google throws a party for "(go)
they should invite "(go!),
to make sure in the future
people finding (go!) don't assume it's
some extension of google's (go) ? ]
Comment 300 by fmccabe, Nov 11, 2009
. I am very grateful for the
support I have received on this thread.
It seems to have hit a nerve.
. I want to make one particular point,
some people have suggested that
"I should be grateful"
for the extra advertising.
My response to that is that
I was not actively looking for this advertising.
. doesn't clarification require advertising
when there is such disagreement on
whether (go) clashes with (go!) ?
It was not me who picked a clashing name.
. I fully understand that it is possible that
insufficient search was done beforehand.
However, when I picked the name Go!
I did try to find out if anyone else was using it.
In fact, I was kind of surprised that no one was!;
since it was clearly a great name.
. For those interested, Go! is a bi-lingual pun.
-- in Japan, where Prolog is very popular .
My previous work focused on a language called April.
In Japanese, the literal back-translation of April
is "4th Month".
Go is Japanese for 5.)

responses to McCabe's issue (verifying go! exists):

Comment 81 by yarkot1, Nov 11, 2009 [paraphrased]
'Go!' is available on sourceforge.net (networkagent),
and was developed jointly with McCabe and Clark
with commits back in 2000:
"A group of systems for building network-oriented intelligent agents,
consisting an agent communications infrastructure,
April - an agent construction programming language,
Go! - a logic programming language
and DialoX - an XML-based user interface engine".
. go! can be pronounced "(networkagent go)? ]

Comment 243 by andy.arvid, Nov 11, 2009
The Go! Source: homepage.mac.com
[redirects to nk11r10-homepage.mac.com]
april-9-30-07.tgz 2.3 MB
go-9-30-07.tgz 3.8 MB
InstallingGo.rtf 6 KB
ooio-9-30-07.tgz 821 KB
. see the problem here?
in some situ's (like unix) it's not wise*
to use the "(!) character in the name .
... could name it gol to look like (go!) ?
*: Characters you should not use in filenames:
| ; , ! @ # $ ( ) < > / \ " ' ` ~ { } [ ] = + & ^
. ]

Comment 454 by abraham.estrada, Nov 11, 2009
. that link includes these resources:]
Clark, K.L.; McCabe, F.G. (2003).
"Go! for multi-threaded deliberative agents" .
International Conference on Autonomous Agents (AAMAS'03): 964–965.
Clark, K.L.; McCabe, F.G. (2006).
"Ontology oriented programming in go!" .
Applied Intelligence 24 (3): 189–204.
Further reading
Clark, K.L.; McCabe, F.G. (2003).
Ontology Oriented Programming in Go! .
Clark, K.L.; McCabe, F.G. (2004).
"Go!—A Multi-Paradigm Programming Language
for Implementing Multi-Threaded Agents" .
Annals of Mathematics and Artificial Intelligence 41 (2-4): 171–206. .
Comment 222 by keithsthompson, Nov 11, 2009
There's a Go! program on 99-bottles-of-beer.net;
it's been there since 2005.
That site is an excellent place to check for existing language names.

responses to McCabe's issue (resolution attempts):

Comment 6 by zhenshe41, Nov 10, 2009
In Go! , can the IDE know the differences between
Go! and go ?

Comment 40 by patla073, Nov 11, 2009
Why not just name it Golang?
Erlang - "Ericsson Language"
Golang - "Google Language"

Comment 64 by david.kitchen, Nov 11, 2009
@40 Golang looks like a winner...
they're already using the domain golang.org
and it looks like no-one else is using
that as a language name.

Comment 45 by tuxthelinuxdood, Nov 11, 2009 [paraphrased]
It is obvious that "do no evil" Google employees
did not research the name
in terms of existing languages before release.
. how is it obvious?
they could have easily seen C# vs C
as a precedent for go vs go! .]

Comment 54 by pierrevm, Nov 11, 2009
First Closure (name-squatting Clojure)
now Go stopping Go! in its tracks.
Just another week in the life of a giant company. [...]

Comment 66 by yless42, Nov 11, 2009
. as you all rightly pointed out,
one is called "go"
and the other is called "Go!".
Am I the only person seeing the similarities
between that and "C" == "C#"?
If you think that having an extra character
is a problem,
you should go speak to Microsoft first.
. I suggest you see these lists:
wikipedia.org, esolangs.org .
. See how many languages there are on those lists
where one name is only separated from another name
by one character? [...]

Comment 70 by pygy79, Nov 11, 2009
@66 the spelling may be different,
but in both cases (Clojure and Go!),
the pronunciation of
Google's newly introduced products
is identical.
not identical in both cases:
. just as C# is pronounced c-sharp;
go! should be pronounced go-bang
or go-cut (prolog`cut-operator) .]

Comment 79 by rnmboon, Nov 11, 2009
They should change the name to "god".
Why hold back on the level of ambition here.

Comment 80 by j...@ww.com, Nov 11, 2009
Google should do the right thing
and change their name, be gracious about it.
. Do no evil, remember ?

Comment 89 by jamesda...@gmail.com, Nov 11, 2009
#77: Why not start a company called "Google!" ?

Comment 90 by AxelSanner, Nov 11, 2009
[wikipedia's don't be evil]
then ... don't
. that page describes "(don't be evil) as
Google's "(informal corporate motto (or slogan));
the page states that in January 2010,
"(Apple CEO Steve Jobs strongly criticized the slogan,
saying: "We did not enter the search business.
They[Google] entered the phone business.
Make no mistake they want to kill the iPhone.)
-- and then he scoffs at "(don't be evil);
but, he might be just showing off
to shareholders . look:
. Google is in the phone business because
the phone is where much of the googling
could be coming from;
so, they just want to make sure
a mobile platform is out there .
. Apple still has a competitive product:
a phone running on the secure
mac microkernel with solid software
(I don't have a smartphone,
but my mac desktop is a lot more solid
than my linux laptop
-- the linux world scoffs at microkernels
for losing energy, but Ubuntu is more likely
to lose my data ... I need to
take the Android hint
and do more cloud computing!) .]

Comment 157 by mich...@sun-sol.com, Nov 11, 2009
They should name the language "Evil--"
(do no evil...).

Comment 161 by ropers, Nov 11, 2009
[...] appreciating the difference between
terms that merely sound
and terms that actually *are* identical. [...]

Comment 170 by tomhaste, Nov 11, 2009
Some of these comments are plain silly. Here's another to join in:
if Go isn't the same as Go!,
then Google isn't the same as Google!.

Comment 212 by Mo6eeeB, Nov 11, 2009
"Go!" wouldn't work, because
the only way I can think
of audibly pronouncing the "!"
is "bang" and "Go bang"
just sounds silly.

Comment 219 by lozeno1982, Nov 11, 2009
To those who say "it's like C and C#"
or "it's like C and C++" etc...
No, it's not the same case.
. First, C# and C++ are called this way
because they inherited their syntax
from C
and wanted to express that kind of legacy;
Second, they are actually pronounced DIFFERENTLY:
it's "See" (C), "See-plus-plus" (C++)
and "See-Sharp" (C#).
. Now, how do you pronounce Go and Go! ?
I read them both the same way: "Go".
I can't read a "!" to make a difference.
. this reminds me that the reason yahoo
spells their name with a "(!)
is to pronounce the word with enthusiasm;
literally to step up the octave
of the last syllable .]

Comment 255 by insomniac8400, Nov 11, 2009
Go and Go Bang
are as different as C and C Sharp.

Comment 274 by THM...@gmail.com, Nov 11, 2009
(C vs. C++ vs. C#, anyone?).

Comment 312 by merkey88, Nov 11, 2009
This reminds me of an old court case
where Yahoo! had to change their name to the previously stated
from Yahoo (with no exclamation).
By contrast, your programming language
already has the exclamation point,
so as long as Google does not use
an exclamation after Go (or go),
there should be no reason to change.

Comment 382 by samterrell, Nov 11, 2009
If Wikipedia can deal with the issue,
[... GML, Go, Go!, GOAL, ...]

Comment 521 by darkhorn, Nov 12, 2009
[...] write a new language called "Google!".

Comment 525 by ashish.afriend, Nov 12, 2009
@521: There are differences
in a language and a company.

Comment 563 by kikito, Nov 12, 2009
You can read the ! symbol in English.
It is commonly pronounced as "Bang".
"Go Bang" vs "Go"
is exactly the same difference as in
"C Sharp" vs "C", pronunciation-wise.

Comment 584 by malonsosanchez, Nov 12, 2009
Go and Go! are very similar.

Comment 610 by davidsinger0, Nov 12, 2009
. you yourself said you used the term
"Let's Go!" for a book
which if you google is the name of a
very popular book series.
Are you then infringing on their rights?

Comment 654 by roman.go...@gmail.com, Nov 12, 2009
. Nothing wrong with google using Go,
just like nothing wrong with
C++ using the C,
nor using it in C# by Microsoft.
There is no confusion,
and there is no real difficulty in finding
help for those languages.
. I think this is a non-issue
and is just an excuse for people to
get into the town-hall mentality
of yelling out absurdities
and not looking at the facts.
. Go and Go! can co-exist,
even if !Go comes out.

Comment 741 by victor.petrov, Nov 12, 2009
For those of you who
brought the C -> C++ example:
your example isn't valid
in this situation.
C++ was actually a set of
extensions to the C language.
Google's Go is NOT
an extension to Go!,
nor is Go! an extension to Go.
. but the point is that
they are distinguishable names;
also, C# is not an extension
of either {C, C++},
and people thought that was a great name,
easily pronouncing it c-sharp;
just as we'll easily pronounce
go! as go-Cut .]

Comment 744 by coolboygreatone, Nov 12, 2009
@741: What about C++ and C#
P.S. : Google should not change the name

Comment 828 by julian.notfound, Nov 13, 2009
Forgot to put a link to slashdot discussion .

Comment 843 by davidsarah.hopwood, Nov 13, 2009
. one David-Sarah Hopwood
has been active in the E lang'
and cap'based security
where she's had many pro' conversations
with several google employees:
Ben Laurie [@] benl@google.com (2009)
Mike Stay [@] stay@google.com (2009)
Mark S. Miller [@] erights@google.com (2008...2010)
. ]
. "Go" is a really terrible name
for a programming language.
Besides the two languages,
it has two common meanings in English
(the verb and the name of the board game),
it also means "naked" in Croatian,
-- naked, as in barefoot-fast?
and "him" in Polish.
-- him, as in speed & strength vs longevity?
This many existing meanings
is what you would expect
for such a short word.
. Anyway, who chooses a name for
a programming language these days
without googling "FOO programming language"
to see if it's already taken?
That's impolite at best;
even downright negligent.
but why assume they didn't see "(go!) ?
would go# be ok in a C# world ?
. The paper on the original Go! language,
incidentally, is at [link];
it's a concurrent-logic dialect of Prolog
with asynchronous message passing
and Hindley-Milner type inference .

Comment 878 by Lucretia...@yahoo.co.uk, Nov 14, 2009
[...] stop trying [to] make C safe
and use Ada instead,
[...] Just a thought, eh Google!?
. what?! Go is hardly a safer C;
it's basically a quicker Python .]
. like most dev'houses, google has to
reach for what the market will offer;
most hot-shot coders will not
touch that verbose language;
c is low-level, but has a huge library,
and your glue code is compact,
not verbose;
your tools are solid and familiar .]

Comment 891 by wrolufsen, Nov 15, 2009
My vote is for Go@, pronounced like 'goat'.

Comment 932 by graham.p...@gmail.com, Nov 16, 2009
Google probably already knew about
"Go!" (or "gobang" as it may be pronounced,
[...] So "Go" without the "!"
is like "C#" without the "#".
Also, I doubt you can trademark "Go"
so legally there's no leg to stand on .

many suggested calling it plan9:
"( The name Plan 9 from Bell Labs
is a reference to the 1959
cult science fiction B-movie
Plan 9 from Outer Space.)

. issue 9 is still open .

. McCabe's old blog's about-page is taking comments .

review of other clashes with "(go):

. another name clashing with "(go)
is a different sort of "(language):
"(the cultural roadmap for the city girl)
... then for a lack of language there is
"(go) family news portal by Disney .

. finally, for some foreign language,
"(Go!) is the English version of weg!:
. an Afrikaans-language outdoor and travel magazine;
it focuses on affordable destinations in South Africa
and the rest of Africa.


supercomputer power promised in the CHaPeL (Cascade Hi'Productivity Lang)

7.26: news.adda/lang"Chapel(Cascade Hi'Productivity Lang'):

2005.9 PGAS Programming Models Conference/Chapel (pdf):
. the final session of the
Parallel Global Address Space(PGAS)
Programming Models Conference was
devoted to DARPA's HPCS program
(High Productivity Computing Systems):
, X10[verbose java]
, Fortress[greek]
, StarP[slow matlab].
Locality Control Through Domains:
. domains are array subscript objects,
specifying the size and shape of arrays;
they represent a set of subscripts;
and so, applying a domain to an array
selects a set of array elements .
(recall that term"domain is part of
function: domain-->codomain terminology)
. domains make it easy to work with
sparse arrays, hash tables, graphs,
and interior slices of arrays .]
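. to make the domain idea concrete, here's a rough Go analogy
(not Chapel syntax; the names Domain and Apply are invented):
a first-class index set, kept separate from any array,
that can be applied to select elements such as an interior slice:

```go
package main

import "fmt"

// Domain is a one-dimensional index set, loosely analogous to
// a Chapel domain: it records bounds separately from array data.
type Domain struct{ lo, hi int } // inclusive bounds

// Indices enumerates the subscripts the domain describes.
func (d Domain) Indices() []int {
	out := []int{}
	for i := d.lo; i <= d.hi; i++ {
		out = append(out, i)
	}
	return out
}

// Apply selects the array elements named by the domain,
// e.g. an interior slice that excludes the boundary elements.
func Apply(a []int, d Domain) []int {
	out := []int{}
	for _, i := range d.Indices() {
		out = append(out, a[i])
	}
	return out
}

func main() {
	a := []int{10, 20, 30, 40, 50}
	interior := Domain{1, 3}        // interior slice of a 5-element array
	fmt.Println(Apply(a, interior)) // [20 30 40]
}
```

. the same Domain value can be reused against many arrays,
which is the point of divorcing subscripts from storage .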
. domains can be distributed across locales,
which generally correspond to CPUs in Chapel.
This gives Chapel its fundamentally
global character, as contrasted to
the process-centric nature of MPI or CAF,
for example.
When operations are performed on
an array whose domain is distributed,
any needed IPC is implicitly carried out,
-- (inter-processor communication) --
without the need for function calls.
Chapel provides much more generality
in the distribution of domains
than high performance Fortran, or UPC,
but, will not take the place of
the complex domain decomposition tools
required to distribute data for
optimum load balance and communication
in most practical parallel programs.
. HPC systems today are overwhelmingly
distributed memory systems,
and the applications tend to require
highly irregular communication
between the CPUs.
This means that good performance
depends on effective locality management,
which minimizes the IPC costs .
. productive levels of abstraction
can degrade performance;
Chapel simply allows mixing
a variety of abstraction levels .]

Cray's Chapel intro (pdf):
Chapel (Cascade High Productivity Language)
is Cray's programming language for
supercomputers, like the Cascade system;
part of the Cray Cascade project,
a participant in DARPA's HPCS program
(High Productivity Computing Systems ) .
. iterators:
CLU, Ruby, Python
. latent types:
ML, Scala, Matlab, Perl, Python, C#
. OOP, type safety:
Java, C#:
. generic programming/templates:
. data parallelism, index sets, distributed arrays:
ZPL (Z-level Programming Language)
HPF (High-Performance Fortran),
. task parallelism, synchronization:
Cray's MTA's extensions to Fortran and C.
(Multi-Threaded Architecture)

Global-view vs Fragmented models:
. in Fragmented programming models,
the Programmer's point-of-view
is a single processor/thread;
. the PGAS lang" UPC (unified parallel C),
has a fragmented compute model
and a Global-View data model .
. the shared mem' systems" OpenMP & pThreads
have a trivially Global View of everything .
. the HPCS lang's, including Chapel,
have a Global View of everything .

too-low- & too-high-level abstractions
vs multiple levels of design:

. openMP, pthreads, MPI are low-level & difficult;
HPF, ZPL are high-level & inefficient .
Chapel has a mix of abstractions for:
# task scheduling levels:
. work stealing; suspendable tasks;
task pool; thread per task .
# lang' concept levels:
. data parallelism; distributions;
task parallelism;
base lang; locality control .
# mem'mgt levels:
. gc; region-based;
manual(malloc,free) .
. chapel`downloads for linux and mac .
readme for Chapel 1.1
The highlights of this release include:
parallel execution of all
data parallel operations
on arithmetic domains and arrays;
improved control over the
degree and granularity of parallelism
for data parallel language constructs;
Block and Cyclic distributions;
simplified constructor calls for
Block and Cyclic distributions;
support for assignments between,
and removal of indices from,
sparse domains;
more robust performance optimizations on
aligned arithmetic domains and arrays;
many programmability
and correctness improvements;
new example programs demonstrating
task parallel concepts and distributions;
wide-ranging improvements
to the content and organization of the
language specification.
This release of Chapel contains
stable support for the base language,
and for task and regular data parallelism
using one or multiple nodes.
Data parallel features
on irregular domains and arrays
are supported via a single-threaded,
single-node reference implementation.
impl' status:
. no support for inheritance from
multiple or generic classes;
incomplete support for user-defined constructors;
incomplete support for sparse arrays and domains;
unchecked support for index types and sub-domains;
no support for skyline arrays;
no constant checking for domains, arrays, fields;
several internal memory leaks .
# Task Parallelism:
. no support for atomic statements;
memory consistency model is not guaranteed .
# Locality and Affinity:
. string assignment across locales is by reference .
# Data Parallelism:
. promoted functions/operators do not preserve shape;
user-defined reductions are undocumented and in flux;
no partial scans or reductions;
some data parallel statements are serialized .
# Distributions and Layouts:
. distributions are limited to Block and Cyclic;
user-defined domain maps are undocumented and in flux .
7.27 ... 28:
IJHPCA` High Productivity Languages and Models
(Internat' J. HPC App's, 2007, 21(3))

Diaconescua, Zima 2007`An Approach to Data Distributions in Chapel (pdf):
--. same paper is ref'd here:
Chapel Publications from Collaborators:
. they note:
"( This paper presents early exploratory work
in developing a philosophy and foundation for
Chapel's user-defined distributions ).
User-defined distributions
are first-class objects:
placed in a library,
passed to functions,
and reused in array declarations.
In the simplest case,
the specification of a
new distribution
can consist of just a few lines
of code to define mappings between
the global indices of a data structure
and memory;
in contrast, a sophisticated user
(or distribution writer)
can control the internal
representation and layout of data
to an almost arbitrary degree,
allowing even the expression of
auxiliary structures
typically used for distributed
sparse matrix data.
our distribution framework is designed to support:
• The mapping of arbitrary data collections
to units of locality,
• the specification of user-defined mappings
exploiting knowledge of data structures
and their access patterns,
• the capability to control the
layout (allocation) of data
within units of locality,
• orthogonality between
distributions and algorithms,
• the uniform expression of computation
for both dense and sparse data structures,
• reusability and extensibility
of the data mapping machinery itself,
as well as of the common data mapping patterns
occurring in various application domains.
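. the "first-class, user-definable distribution" idea can be
sketched outside Chapel with an interface; this Go sketch is only
an analogy to the paper's class framework (Distribution, Block,
Cyclic, and LocaleOf are all invented names): user code supplies
the global-index-to-locale mapping, and the object can be put in
a library, passed around, and reused:

```go
package main

import "fmt"

// Distribution maps a global index to a locale
// (the paper's "unit of locality"); a user-defined
// distribution is just a value implementing this interface.
type Distribution interface {
	LocaleOf(i int) int
}

// Block gives each locale one contiguous tile of n indices.
type Block struct{ n, locales int }

func (b Block) LocaleOf(i int) int {
	per := (b.n + b.locales - 1) / b.locales // ceil(n / locales)
	return i / per
}

// Cyclic deals indices out round-robin.
type Cyclic struct{ locales int }

func (c Cyclic) LocaleOf(i int) int { return i % c.locales }

func main() {
	var d Distribution = Block{n: 8, locales: 4}
	fmt.Println(d.LocaleOf(5)) // index 5 lands on locale 2
	d = Cyclic{locales: 4}     // swap mappings without touching the algorithm
	fmt.Println(d.LocaleOf(5)) // now on locale 1
}
```

. note the orthogonality claim in miniature: the algorithm holding
a Distribution never changes when the mapping does .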

Our approach is the first
that addresses these issues
completely and consistently
at a high level of abstraction;
in contrast to the current programming paradigm
that explicitly manages data locality
and the related aspects of synchronization,
communication, and thread management
at a level close to what assembly programming
was for sequential languages.
The challenge is to allow the programmer
high-level control of data locality
based on the knowledge of the problem
without unnecessarily burdening the
expression of the algorithm
with low-level detail,
and achieving target code performance
similar to that of
manually parallelized programs.

Data locality is expressed via
first-class objects called distributions.
Distributions apply to collections
of indices represented by domains,
which determine how arrays
associated with a domain
are to be mapped and allocated across
abstract units of uniform memory access
called locales.
Chapel offers an open concept of distributions,
defined by a set of classes
which establish the interface between
the programmer and the compiler.
Components of distributions
are overridable by the user,
at different levels of abstraction,
with varying degrees of difficulty.
Well-known regular standard distributions
can be specified along with
arbitrary irregular distributions
using the same uniform framework.
There are no built-in distributions
in our approach.
Instead, the vision is that
Chapel will be an open source language,
with an open distribution interface,
which allows experts and non-experts
to design new distribution classes
and support the construction of
distribution libraries that can be
further reused, extended, and optimized.
Data parallel computations
are expressed via forall loops,
which concurrently iterate over domains.

. the class of PGAS languages
including Unified Parallel C (UPC)
provide a reasonable improvement
over lower-level communications with MPI.
. UPC`threads support block-cyclic
distributions of one-dimensional arrays
over a one-dimensional set of processors,
and a stylized upc forall loop
that supports an affinity expression
to map iterations to threads.

. the other DARPA`HPCS lang's
provide built-in distributions
as well as the possibility to create
new distributions from existing ones.
they do not contain features
for specifying user-defined
distributions and layouts.
X10’s locality rule requires
an explicit distinction between
local and remote accesses
to be made by the programmer
at the source language level.
The key differences between
existing work and our approach
can be summarized as follows.
we provide a general oop framework
for the specification of
user-defined distributions,
integrated into an advanced
high-productivity parallel language.
our framework allows the
flexible formulation
of data distributions,
locale-internal data arrangements,
and associated control mechanisms
at a high level of abstraction,
tuned to the properties of
architectures and applications.
This ensures
target code performance
that is otherwise achievable only via
low-level control.
. Chapel Publications and Papers .
. Chapel Specification [current version (pdf)] .

Parallel Programmability and the Chapel Language (pdf)
bradc@cray.com, d.callahan@microsoft.com, zima@jpl.nasa.gov`
Int.J. High Performance Computing App's, 21(3) 2007

This paper serves as a good introduction
to Chapel's themes and main language concepts.
7.28: adda/concurrency/chapel/Common Component Architecture:
Common Component Architecture (CCA) Forum

2005 CCA Whitepaper (pdf):
. reusable scientific components
and the tools with which to use them.
In addition to developing
simple component examples
and hands-on exercises as part of
CCA tutorial materials,
we are growing a CCA toolkit of components
that is based on widely used software packages,
ARMCI (one-sided messaging),
CUMULVS (visualization and parallel data redistribution),
CVODE (integrators), DRA (parallel I/O),
Epetra (sparse linear solvers),
Global Arrays (parallel programming),
GrACE (structured adaptive meshes),
netCDF and parallel netCDF (input/output),
TAO (optimization), TAU (performance measurement),
and TOPS (linear and nonlinear solvers).
Babel (inter-lang communication):
Babel is a compiler
that generates glue code from
SIDL interface descriptions.
(Scientific Interface Description Language)
SIDL features support for
complex numbers, structs,
and dynamic multidimensional arrays.
SIDL provides a modern oop model,
with automatic ref'counting
and resource (de)allocation.
-- even on top of traditional
procedural languages.
Code written in one language
can be called from any of the
supported languages.
Full support for
Remote Method Invocation (RMI)
allows for parallel distributed applications.

Babel focuses on high-performance
language interoperability
within a single address space;
It won a prestigious R&D 100 award in 2006
for "The world's most
rapid communication among
many languages in a single application."

. Babel currently fully supports
C, C++ Fortran, Python, and Java.
-- Chapel is coming soon:
CCA working with chapel 2009:
Babel migration path for chapel:
Collaboration Status: Active
TASCS Contact: Brad Chamberlain, Cray mailto:bradc@cray.com
Collaboration Summary:
Cray is developing a Chapel language binding
to the Babel interoperability tool.
The work is purely exploratory
(source is not publicly available yet)
and Babel is providing
whatever consulting and training services
needed to facilitate.
Common Component Architecture Core Specification
Babel Manuals:
User's Guide (tar.gz)
Implement a Protocol for BabelRMI (pdf)
Understanding the CCA Specification Using Decaf (pdf)
CCA toolkit
CCA tut's directory
CCA Hands-On Guide 0.7.0 (tar.gz)
Our language interoperability tool, Babel,
makes CCA components interoperable
across languages and CCA frameworks.
Numerous studies have demonstrated
that the overheads of the CCA environment
are small and easily amortized
in typical scientific applications.

specification of components.
. using SIDL,
The current syntactic specification
can be extended to capture
more of the semantics
of component behavior.
For example,
increasing the expressiveness
of component specifications
(the “metadata” available about them)
makes it possible to catch
certain types of errors automatically
. we must leverage the
unique capabilities of
component technology
to inspire new CS research directions.
For example,
the CCA provides a dynamic model
for components,
allowing them to be
reconnected during execution.
This model allows an application to
monitor and adapt itself
by swapping components for others.
This approach, called
computational quality of service,
can benefit numerical, performance,
and other aspects of software.
Enhanced component specifications
can provide copious information
that parallel runtime environments
could exploit to provide
the utmost performance.
. the development and use of
new runtime environments
could be simplified by integrating them
with component frameworks.

PGAS replacing MPI in supercomputer lang's

7.26: news.adda/concurrency/PGAS (partitioned global address space):

. PGAS is a parallel programming model
(partitioned global address space) [7.29:
where each processor
has its own local memory
and the sharable portion of local space
can be reached from other processors
by pointer rather than by
the slower MPI (message passing interface) .
. since a shared space has an
"(affinity for a particular processor),
things can be arranged so that
local shares can be
accessed quicker than remote shares,
thereby "(exploiting
locality of reference).]
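. the local-vs-remote split can be mimicked in any shared-memory
lang'; a toy Go sketch (Go is not PGAS; threads, perThread, and
fillByAffinity are invented for illustration): one global array is
logically partitioned so each worker owns a tile, works fast on
its own tile, yet every element stays directly addressable:

```go
package main

import (
	"fmt"
	"sync"
)

// toy sizes; a real PGAS runtime sizes these to the machine
const threads = 4
const perThread = 3

// fillByAffinity partitions one logically-global array so that
// tile [me*perThread, (me+1)*perThread) has affinity to worker me;
// each worker writes only its own tile (the fast, local case),
// but nothing stops code from reading another tile directly.
func fillByAffinity() [threads * perThread]int {
	var global [threads * perThread]int
	var wg sync.WaitGroup
	for t := 0; t < threads; t++ {
		wg.Add(1)
		go func(me int) {
			defer wg.Done()
			for i := me * perThread; i < (me+1)*perThread; i++ {
				global[i] = me // local-affinity access
			}
		}(t)
	}
	wg.Wait()
	return global
}

func main() {
	g := fillByAffinity()
	// a "remote"-style read: worker 1's tile is read directly,
	// by address, with no message passing (the PGAS selling point)
	fmt.Println(g[perThread])
}
```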

The PGAS model is the basis of
Unified Parallel C,
and all the lang's funded by DARPA`HPCS
(High Productivity Computing Systems)
{ Sun`Fortress, Cray`Chapel, IBM`X10 }.
[@] adda/concurrency/NUCC/lang's for supercomputer concurrency
The pgas model -- also known as
the distributed shared address space model,[7.29:
provides more of both performance
and ease-of-programming
than MPI (Message Passing Interface)
which uses function calls
to communicate across clustered processors .]

As in the shared-memory model,
one thread may directly read and write
memory allocated to another.
At the same time, [7.29:
the concept of local yet sharable]
is essential for performance .

7.26: news.adda/lang"upc (Unified Parallel C):
The UPC language evolved from experiences with
three other earlier languages
that proposed parallel extensions to ISO C 99:
AC, Split-C, and Parallel C Preprocessor (PCP). [7.29:

AC (Distributed Data Access):
. AC modifies C to support
a shared address space
with physically distributed memory.
. the nodes of a massively parallel processor
can access remote memory
without message passing.
AC provides support for distributed arrays
as well as pointers to distributed data.
Simple array references
and pointer dereferencing
are sufficient to generate
low-overhead remote reads and writes .
Split-C:
. supports efficient access to a
global address space
on distributed memory multiprocessors.
It retains the "small language" character of C
and supports careful engineering
and optimization of programs
by providing a simple, predictable
cost model. -- in stark contrast to
languages that rely on extensive
compile-time transformations
to obtain performance on parallel machines.
Split-C programs do what the
programmer specifies;
the compiler takes care of
addressing and communication,
as well as code generation.
Thus, the ability to exploit
parallelism or locality
is not limited by the compiler's
recognition capability,
nor is there need to second guess
the compiler transformations
while optimizing the program.
The language provides a small set of
global access primitives
and parallel storage layout declarations.
These seem to capture
most of the useful elements
of shared memory, message passing,
and data parallel programming
in a common, familiar context.
Parallel C Preprocessor (pcp):
. a parallel extension of C for multiprocessors
(eg, scalable massively parallel machine, the BBN TC2000)
for sharing memory between processors.
The programming model is split-join
rather than fork-join.
Concurrency is exploited to use a
fixed number of processors more efficiently
rather than to exploit more processors
as in the fork-join model.
Team splitting, a mechanism to
split the processors into subteams
to handle parallel subtasks,
provides an efficient mechanism
for exploiting nested concurrency.
We have found the split-join model
to have an inherent
implementation advantage,
compared to the fork-join model,
when the number of processors becomes large .]
GCC Unified Parallel C (GCC UPC):
UPC 1.2 specification compliant, Based on GNU GCC 4.3.2
Fast bit packed shared pointer support
Configurable shared pointer representation
Pthreads support
GASP support, a performance tool interface
for Global Address Space Languages
Run-time support for UPC collectives
Support for uniprocessor
and symmetric multiprocessor systems
Support for UPC thread affinity
via linux scheduling affinity and NUMA package
Compatible with Berkeley UPC run-time version 2.8 and up
Support for many large scale machines and clusters
in conjunction with Berkeley UPC run-time
Binary packages for x86_64, ia64, x86, mips
Binary packages for Linux Fedora, SuSe, CentOS, Mac OS X, IRIX
. for uniprocessor and symmetric multiprocessor systems:
Intel x86_64 Linux (Fedora Core 11)
Intel ia64 (Itanium) Linux (SuSe SEL 11)
Intel x86 Linux (CentOS 5.3)
Intel x86 Apple Mac OS X (Leopard 10.5.7+ and Snow Leopard 10.6)
. Programming in the pgas Model at SC2003 (pdf) .

Programming With the Distributed Shared-Memory Model at SC2001 (pdf):
. Recent developments have resulted in
viable distributed shared-memory languages
for a balance between ease-of-programming
and performance.
As in the shared-memory model,
programmers need not explicitly specify
data accesses.
Meanwhile, programmers can exploit data locality
using a model that enables the placement of data
close to the threads that process them,
to reduce remote memory accesses.
. fundamental concepts associated with
this programming model include
execution models, synchronization,
workload distribution,
and memory consistency.
We then introduce the syntax and semantics
of three parallel programming language
instances with growing interest:
Cray's CAF(Co-Array FORTRAN),
Berkeley's Titanium JAVA
and (IDA, LLNL, UCB) 1999` UPC (Unified Parallel C)
-- upc`history (pdf):
. IDA Center for Computing Sciences
University of California at Berkeley,
Lawrence Livermore National Lab,
... and the consortium refined the design:
Academia: GWU, MTU, UCB
Vendors: Compaq, CSC, Cray, Etnus, HP, IBM, Intrepid, SGI, Sun,
It will be shown through
experimental case studies
that optimized distributed shared memory
can be competitive with
message passing codes,
without significant departure from the
ease of programming
provided by the shared memory model .
. the openMP model is
all the threads on shared mem';
it doesn't allow locality exploitation;
modifying shared data
may require synchronization
(locks, semaphores) .
. in contrast,
the upc model is
distributed shared mem';
it differs from threads sharing mem'
in that each thread has its own partition slice;
and within that slice, there's a
{shared, private} division:
slice#i has affinity to thread#i
-- that means, a thread tries to
keep most of its own obj's
within its own slice;
but, its shared pointers can target
any other thread's sharable slice .
"(exploit locality of references) .
. message-passing as a
sharing mechanism
isn't a good fit for many app's in
math, science, and data mining .
. a dimension of {value, pointer} types is
{ shared -- can point in shared mem' .
, private -- points only to thread's private mem .
} -- both can access either {dynamic, static } mem .
. all scalar (non-array) shared objects
have affinity with thread#0 .
. pointers have a self and a target,
both of which can
optionally be in shared mem';
but it's unwise
to have a shared pointer
accessing a private value;
upc disallows casting of
private pointer to shared .
. casting of shared to private
is defined only if the shared pointer
has affinity with
the thread performing the cast,
since the cast doesn't preserve
the pointer`thread.identifier .

attributes of shared pointers:
. int upc_threadof(shared void *ptr);
-- thread that has affinity to pointer
int upc_phaseof(shared void *ptr);
-- pointer's index (pos' within the block)
void* upc_addrfield(shared void *ptr);
-- address of the targeted block
other upc-specific attributes:
upc_localsizeof(type-name or expression);
-- size of the local portion of a shared object.
upc_blocksizeof(type-name or expression);
-- the blocking factor associated with the argument.
upc_elemsizeof(type-name or expression);
-- size (in bytes) of the left-most type
that is not an array.
Berkeley UPC intro:
array` affinity granularities:
# cyclic (per element)
- successive elements of the array
have affinity with successive threads.
# blocked-cyclic (user-defined)
- the array is divided into
user-defined size blocks
and the blocks are cyclically distributed
among threads.
# blocked (run-time)
- each thread has affinity to a
tile of the array.
The size of the contiguous part
is determined in such a way
that the array is
"evenly" distributed among threads.
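. all three granularities reduce to simple owner arithmetic;
a sketch in Go (not upc; block size and thread count are made up,
and the function names are invented -- but phase matches the idea
behind upc_phaseof above):

```go
package main

import "fmt"

// owner computations for an n-element array over t threads;
// these are the standard formulas, and an actual upc layout
// may differ in edge cases.

// cyclic: element i goes to thread i mod t
func cyclicOwner(i, t int) int { return i % t }

// blocked-cyclic: blocks of b elements, dealt round-robin
func blockCyclicOwner(i, b, t int) int { return (i / b) % t }

// blocked: one contiguous tile per thread
func blockedOwner(i, n, t int) int {
	b := (n + t - 1) / t // ceil(n / t): tile size
	return i / b
}

// phase: position of element i within its block,
// cf. upc_phaseof
func phase(i, b int) int { return i % b }

func main() {
	fmt.Println(cyclicOwner(5, 4))         // 1
	fmt.Println(blockCyclicOwner(5, 2, 4)) // block 2 of size 2 -> thread 2
	fmt.Println(blockedOwner(5, 16, 4))    // tile size 4 -> thread 1
	fmt.Println(phase(5, 2))               // position 1 within its block
}
```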

To define the interaction between
memory accesses to shared data,
UPC provides two user-controlled
consistency models { strict, relaxed }:
# "strict" model:
the program executes in a
Lamport`sequential consistency model .
This means that
it appears to all threads that the
strict references within the same thread
appear in the program order,
relative to all other accesses.
# "relaxed" model:
it appears to the issuing thread
that all shared references within the thread
appear in the program order.

The UPC execution model
is similar to the SPMD style
(Single Program, Multiple Data)
used by message passing (MPI or PVM)
-- an explicitly parallel model.
In UPC terms, the execution vehicle
for a program is called a thread.
The language defines a private variable
- MYTHREAD - to distinguish between
the threads of an UPC program.
The language does not define any
correspondence between
a UPC thread and its OS-level counterpart,
nor does it define any
mapping to physical CPU's.
Because of this,
UPC threads can be implemented
either as full-fledged OS processes
or as threads (user or kernel level).
On a parallel system,
the UPC program running with shared data
will contain at least
one UPC thread per physical processor .
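. the SPMD flavor -- every thread runs the same program text,
distinguished only by its id -- looks like this in Go (a loose
analogy only; Go goroutines are not UPC threads, and THREADS,
MYTHREAD, run are borrowed or invented names):

```go
package main

import (
	"fmt"
	"sync"
)

const THREADS = 4 // cf. upc's THREADS

// run launches THREADS workers, all executing the same code;
// behavior branches only on the per-worker id, as in SPMD.
func run() []string {
	results := make([]string, THREADS)
	var wg sync.WaitGroup
	for t := 0; t < THREADS; t++ {
		wg.Add(1)
		go func(MYTHREAD int) { // cf. upc's MYTHREAD
			defer wg.Done()
			if MYTHREAD == 0 {
				results[MYTHREAD] = "thread 0: doing the serial setup"
			} else {
				results[MYTHREAD] = fmt.Sprintf("thread %d: working", MYTHREAD)
			}
		}(t)
	}
	wg.Wait()
	return results
}

func main() {
	for _, r := range run() {
		fmt.Println(r)
	}
}
```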


what are delegates? not oop!

7.25: web.adda/delegates:

replacing delegates with inner classes:
. Sun believes bound method references [delegates]
are unnecessary because
another design alternative, inner classes,
provides equal or superior functionality.
In particular,
inner classes fully support the requirements
of user-interface event handling,
and programs that use delegation patterns
are easily expressed with inner classes.
In the AWT event model,
user-interface components send
event notifications by invoking methods of
one or more delegate objects
that have registered themselves with the component.
A button is a very simple GUI component
that signals a single event when pressed.
Here is an example using the JButton class:
[. this is my conversion of Java to adda:]
SimpleExample.class extends JPanel:
  button.JButton = ("Hello, world")
  ButtonClick.class implements ActionListener:
  ( actionPerformed(e.ActionEvent).proc:
       println("Hello, world!")
  )  ...
    ( x`button`addActionListener(anon.x`ButtonClick)
    ; add(x`button)
** vs delegates: **
SimpleExample.class extends JPanel:
  button.JButton = ("Hello, world")
    println("Hello, world!")
    ; add(button)
. in this case, the delegate is an enclosing object;
the identifiers are kept the same
to make the remaining differences stand out clearly .
When the user clicks on the button,
it invokes the actionPerformed method
of its delegate,
which can be of any class that implements
the ActionListener interface.
In a language that uses
bound method references,[delegates]
the corresponding code
would be quite similar. Again,
a button delegates a click to an action method.
The key conclusion is quite simple:
Any use of [a delegate] to a private method
can be transformed,
in a simple and purely local way,
into an equivalent program
using a one-member private class.
Unlike the original,
the transformed program will compile under
any JDK 1.1-compliant Java compiler.

[. delegates] require the programmer to
flatten the code of the application
into a single class
and choose a new identifier for each action method;
whereas the use of adapter objects
requires the programmer to nest the action method
in a subsidiary class
and choose a new identifier for the class name.
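. Go, as it happens, supports bound method values
directly, so the button example can be sketched
with a real delegate (Button, Greeter, and the
method names here are invented):

```go
package main

import "fmt"

// Button delegates a click to whatever listener was registered,
// like the JButton in the AWT example above.
type Button struct{ onClick func() }

func (b *Button) AddActionListener(f func()) { b.onClick = f }

func (b *Button) Press() {
	if b.onClick != nil {
		b.onClick()
	}
}

type Greeter struct{ msg string }

// ActionPerformed is the action method; g.ActionPerformed below is a
// bound method value -- Go's native form of a bound method reference.
func (g *Greeter) ActionPerformed() { fmt.Println(g.msg) }

func main() {
	g := &Greeter{msg: "Hello, world!"}
	b := &Button{}
	b.AddActionListener(g.ActionPerformed) // the delegate
	b.Press() // prints "Hello, world!"
}
```

. the adapter-object alternative corresponds to
registering a closure or a one-method wrapper type
in place of the bound method value.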

exceptions of Golang and Vala

7.17: news.adda/google'golang/design/exceptions:
. coupling exceptions to a
control structure,
as in the try-catch-finally idiom,
results in convoluted code.
. a routine that signals an exception (a panic)
will then run a deferred recovery routine
as it is closing for the return;
this defer/panic/recover mechanism
is sufficient to handle catastrophe
but requires no extra control structures
and, when used well,
can result in clean error-handling code.
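. the mechanism described is Go's defer/panic/recover;
a minimal sketch (safeDivide is an invented example,
not from the source):

```go
package main

import "fmt"

// safeDivide signals catastrophe with panic; the deferred recovery
// routine runs as the call is closing for the return, turning the
// panic back into an ordinary error value.
func safeDivide(a, b int) (q int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	q = a / b // panics when b == 0
	return q, nil
}

func main() {
	q, _ := safeDivide(10, 2)
	fmt.Println("10/2 =", q)
	_, err := safeDivide(1, 0)
	fmt.Println("1/0:", err)
}
```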
7.25: news.adda/exceptions/Vala:

. vala has checked exceptions, not class-based, no wrapping
[. I changed the lexicon to match Ada's,
and much of the syntax to match adda's:]

. error domain with multiple error codes:
MyError.exception: { FOO, BAR }

. must be declared in method signature,
part of the contract:
method().proc raises MyError:
( raise FOO (message: "not enough foo") );

. must be handled or propagated:
[this is a declare block
(having an ex.label (exception)
defines a try-catch block)]

method ();
. Although the compiler emits
warnings for ignored errors,
it does not abort the compilation process.
This allows prototyping without proper error handling
and will hopefully prevent
forgotten empty catch blocks.

naming syntax struggles

7.18: sci.adda/type naming confused with file names:
. if type names and file paths
can be used in the same places,
then the type name"(/.T) (ptr to type"T)
could be confused with
a root folder containing
a file having a null name and ext"type .

7.21: adda/url syntax:
. my revised url syntax differed by
using dos drive letters as domain names,
C:\ was //C.drive/
and anything in the .drive domain
was a subdomain of .local;
eg, //mypc.local/E.drive
is a typical dos volume .
. while dos does auto-assign
labels to drives,
the software on dos can also
assign labels to a
subfolder on a drive .
. since std url's use the "file" protocol
to mean a local space,
could file be the name for {drives, subdir's}?
eg, //c.file/ ?
file is an obvious protocol only within
the syntax of file:// .
. if you change the syntax or omit it,
the meaning naturally reverts to std english:
eg, c.file is a file!
. even if a label is pointing to a subdir,
app's still think it's a drive,
so stick with the use of .drive .
. whether unix or dos,
the machine can be reached by .local;
if dos and a drive is not specified
then c.drive is assumed to be root .
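. the drive-letter mapping sketched above
could be written as (dosToURL and the sample
host name are assumptions; the default of
c.drive when no drive is given follows the note):

```go
package main

import (
	"fmt"
	"strings"
)

// dosToURL rewrites a DOS path into the revised url syntax:
// E:\docs\a.txt on host mypc becomes //mypc.local/E.drive/docs/a.txt .
// When no drive letter is present, drive C is assumed to be root.
func dosToURL(host, dosPath string) string {
	drive, rest := "C", dosPath
	if len(dosPath) >= 2 && dosPath[1] == ':' {
		drive = strings.ToUpper(dosPath[:1])
		rest = dosPath[2:]
	}
	return "//" + host + ".local/" + drive + ".drive" +
		strings.ReplaceAll(rest, `\`, "/")
}

func main() {
	fmt.Println(dosToURL("mypc", `E:\docs\a.txt`))
	fmt.Println(dosToURL("mypc", `\autoexec.bat`)) // drive omitted
}
```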

web.adda/golang/case-for-visibility rule:
. an exported identifier must
begin with an upper-case letter .
7.25: web: case-for-visibility advantages
> In my brain:
> - lower case = Common, normal things. Normal words.
> - Upper Case = Unique, special things. Name of unique thing.
> - ALL UPPERCASE = Things need special care, a kind of warning.
"Rob 'Commander' Pike" Date: Sat, 9 Jan 2010:
. in English, capital words are "proper nouns".
They're important, public things
like names and places.
The analogy with
public and private names inside a package
is a bit of a stretch
but does make sense.
Public things are more important,
and so they're capital.
The ability to
glance at a name in a package
and know, without finding its declaration
(or some keyword somewhere near the declaration)
that it is a publicly visible name,
is a great thing
that more than makes up for
the inability to use some of the styles you mention.
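. a minimal illustration of the rule
(the identifiers here are invented):

```go
package main

import "fmt"

// prefix begins with a lower-case letter, so it is
// private to this package: no importer can see it.
var prefix = "Hello, "

// Greeting begins with an upper-case letter, so it is exported:
// a reader knows it is public without finding this declaration.
func Greeting(name string) string {
	return prefix + name
}

func main() {
	fmt.Println(Greeting("gopher"))
}
```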

adda's subfile support

7.17: adda/subfile support:
. adda can use adde ml to define subfiles
and can also treat any of html's
hierarchical structures as subfiles .
[7.27: intro:
. the idea of subfiles as chunks of text,
is to have documents arranged like
folder trees of files
without depending on the platform's
native folder and file system .
. files usually contain several subfiles,
and in this way act as folders .
. conversely, a folder of files
may be identified as one document .
. how subfiles are packed into files
depends on how the subfiles are used:
documents that are frequently modified
will be packed into more files .
. very large doc's will be packed into
more folders .
. adde ml (markup language) is a
simplified form of html that uses
an exclamation point in brackets
as an escape for text about text,
such as identifying the title of a subfile .]

Golang's targeted platforms include mac

7.17: web.adda/google'golang/how many platforms targeted?:

"( we considered using LLVM for 6g
but we felt it was too large and slow
to meet our performance goals.
The Go tool chain is written in C.
. you need to have GCC, the standard C libraries,
the parser generator Bison, make, awk,
and the text editor ed installed.
On OS X, they can be installed as part of Xcode.
On Linux, use
$ sudo apt-get install bison gcc libc6-dev ed gawk make
Why doesn't Go run on Windows yet?
We understand that
a significant fraction of computers in the world
run Windows
and it would be great if those computers
could run Go programs.
A group of volunteers has made significant progress
toward porting Go to MinGW.
You can follow their progress
at the Go Wiki's WindowsPort page.) .
. a tutorial shows xp safely sandboxed
in a mac {vmware, parallels} virtual machine .

D language

7.16: news.adda/lang"D:

. reasons to rewrite C to D?

. C is missing module support?
-- that feature helps figure out
where a function is defined;
but, cscope does that too .

. D was designed to remove
most of the flaws from C and C++,
while keeping the same syntax,
and even binary compatibility with C...
and almost binary compatibility with C++.
[. actually,
C syntax is a flaw too,
having decl's only a compiler could love .]

. D is also a mix of low-level
(x86 assembly is part of the language)
and higher-level characteristics
(optional gc, modules,
a fix of C++ templates, etc.).
. implementations include
front ends for LLVM and gcc .
. tango is the community replacement for
digitalmars`Phobos system library .
. unless there's something your lang needs
that simply can't be done in C,
the simplest and most portable way
to enjoy working in your own lang,
is to just translate your lang to C .
. that route is esp'ly handy for
platforms like Apple's,
where you really need to be
using their tools .
. having translated to C,
I'm already half-way toward
translating for obj'c and cocoa .

dependency versioning

7.10: adda/lib'mgt/dependency versioning:

[7.27: summary:
. these are rough ideas for
how to keep a library of software components
modifiable and evolvable,
without having problems with version conflicts .]

. intro to versions conflicting with portability:
Forth was interesting in that
not only it was written in itself,
but all the internals were available
to any programs written in it,
and, furthermore, could be changed at will.
The thing is, Forth lacked abstraction.
Yes, you could change the compiler.
But, of course, you’d have to know
how the specific compiler worked:
eg, for threaded code, variants included
{indirect, direct, subroutine} threading,
and generating {bytecode, machine code}.
So it had no portability at all.
--. that inspired these ideas:

. in addition to there being
versions for systems;
any component that allows being changed,
must have its own version number
for both the spec' or interface changes,
and the impl' or build changes .
. if the author (ie, the name of a fork)
is not specified with the version number;
then it's assumed to be from the main source .

. to be portable with forks,
the original site must provide
version numbers for
all components, and all forks .

. in order for a fork to use
the original project's name,
it must subscribe to the original's
versioning protocol:
a component lists its dependencies
which includes the release version
of a particular fork
modified with the release versions
for particular components .

. that means when a library or app
is introduced to a platform's lib mgt,
it gives the expected release versions:
this is described economically by saying
most components' versions are that of
some author's system release;
while a few components vary from that release,
and are described individually .
. the lib'mgt then checks these dependencies,
and tries to procure them if not present .
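. one way to model that economical description
(a sketch; all type and field names are invented):

```go
package main

import "fmt"

// Version names a release of a component; an empty Fork
// means the main source rather than a named fork.
type Version struct {
	Fork    string
	Release string
}

// Manifest gives the expected release versions economically:
// a baseline system release, plus the few components that
// vary from that release, described individually.
type Manifest struct {
	Baseline  Version
	Overrides map[string]Version
}

// Resolve reports which version a component is expected at.
func (m Manifest) Resolve(component string) Version {
	if v, ok := m.Overrides[component]; ok {
		return v
	}
	return m.Baseline
}

func main() {
	m := Manifest{
		Baseline:  Version{Release: "2.1"},
		Overrides: map[string]Version{"parser": {Fork: "alice", Release: "2.2"}},
	}
	fmt.Println(m.Resolve("parser")) // the one component that varies
	fmt.Println(m.Resolve("lexer"))  // everything else: baseline release
}
```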

. the user should also be notified
when an installation has
many unique dependencies;
at least if the practical size
is significantly larger
than the app's stated size .

. each version requires its own
space in the library,
along with pointers to
whatever other units are using it;
so then, if you uninstall a unit,
all its bloat goes with it,
unless a dependency is still used
by other units .
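. that uninstall bookkeeping amounts to
reference counting; a sketch (library and its
method names are invented):

```go
package main

import "fmt"

// library tracks, for each unit, which other units are using it,
// so uninstalling a unit removes its bloat unless still shared.
type library struct {
	present map[string]bool
	users   map[string]map[string]bool // unit -> set of dependents
	deps    map[string][]string        // unit -> its dependencies
}

func newLibrary() *library {
	return &library{
		present: map[string]bool{},
		users:   map[string]map[string]bool{},
		deps:    map[string][]string{},
	}
}

func (l *library) install(unit string, deps ...string) {
	l.present[unit] = true
	l.deps[unit] = deps
	for _, d := range deps {
		l.present[d] = true
		if l.users[d] == nil {
			l.users[d] = map[string]bool{}
		}
		l.users[d][unit] = true
	}
}

// uninstall removes a unit and, recursively, any dependency
// that no other unit is still using.
func (l *library) uninstall(unit string) {
	for _, d := range l.deps[unit] {
		delete(l.users[d], unit)
		if len(l.users[d]) == 0 {
			l.uninstall(d)
		}
	}
	delete(l.present, unit)
	delete(l.deps, unit)
}

func main() {
	lib := newLibrary()
	lib.install("app1", "libA", "libB")
	lib.install("app2", "libB")
	lib.uninstall("app1")
	fmt.Println("libA kept:", lib.present["libA"]) // false: only app1 used it
	fmt.Println("libB kept:", lib.present["libB"]) // true: app2 still uses it
}
```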