
Interviewing a front-end developer


URL:http://blog.sourcing.io/interview-questions


Part of my role at Twitter and Stripe involved interviewing front-end engineering candidates. We were given a fair amount of discretion on how we went about interviewing, and I developed a few different sets of questions that I thought would be interesting to share.

I'd like to preface all of this with the caveat that hiring is extremely hard, and figuring out if someone is a good fit within 45 minutes is a demanding task. The problem with interviews is that everyone tries to hire themselves. Anybody who aces my interview probably thinks a lot like me, and that's not necessarily a good thing. As it is, I've been pretty hit and miss in my decisions so far. However, I believe this approach to be a good start.

Ideally the candidate has a really full GitHub 'resume', and we can just go back through their open source projects together. I usually browse their code and ask them questions about particular design decisions. If the candidate has an excellent track record, then the interview largely moves into whether they'd be a good social fit for the team. Otherwise, I move on to the coding questions.

My interviews are very practical and are entirely coding. No abstract or algorithmic questions are asked — other interviewers can cover those if they choose to, but I think their relevance to front-end programming is debatable. The questions I ask may seem simple, but each set is designed to give me an insight into a particular aspect of JavaScript knowledge.

No whiteboards are used. If the candidate brings their own laptop they can type away on that. Otherwise they can just use mine. They can use whatever editor they'd like, and I often test the output of their programs directly in Chrome's console.

Section 1: Object Prototypes

We start out simple. I ask the candidate to define a spacify function which takes a string as an argument, and returns the same string but with each character separated by a space. For example:

spacify('hello world') // => 'h e l l o w o r l d'

Although this question may seem rather simple, it turns out that it's a good place to start, especially with unvetted phone candidates — some of whom claimed to know JavaScript, but in actuality didn't know how to write a single function.

The correct answer is the following. Sometimes candidates use a for loop instead, which is also an acceptable answer (a sketch of that variant follows the snippet below).

function spacify(str) {
 return str.split('').join(' ');
}
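
A for loop version (a rough sketch of my own, not taken from the post) might look like this:

function spacify(str) {
  var result = '';
  for (var i = 0; i < str.length; i++) {
    // Add a space before every character except the first.
    if (i > 0) result += ' ';
    result += str.charAt(i);
  }
  return result;
}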

The follow-up question is to ask candidates to place the spacify function directly on the String object, for example:

'hello world'.spacify();

Asking this question gives me an insight into the candidate's basic knowledge of function prototypes. Often this would result in an interesting discussion about the perils of defining properties directly on prototypes, especially on Object. The end result looks like this:

String.prototype.spacify = function(){
 return this.split('').join(' ');
};

At this point, I usually ask candidates to explain the difference between a function expression and a function declaration.
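
To illustrate the distinction (my own example, not from the post): a function declaration is hoisted together with its body, while only the variable of a function expression is hoisted.

// Function declaration: hoisted with its body, so calling it early works.
declared(); // => 'declaration'
function declared() {
  console.log('declaration');
}

// Function expression: only the variable is hoisted, so calling it here
// would throw a TypeError ("expressed is not a function").
var expressed = function() {
  console.log('expression');
};
expressed(); // => 'expression'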

Section 2: Arguments

Next I'd ask a few simple questions designed to show me how well candidates understood the arguments object. I'd start off by calling the as-yet undefined log function.

log('hello world')

Then I'd ask the candidates to define log so that it proxies its string argument to console.log(). The correct answer is something along these lines, but better candidates will often skip directly to using apply.

function log(msg){
 console.log(msg);
}

Once that's defined I change the way I call log, passing in multiple arguments. I make clear that I expect log to take an arbitrary number of arguments, not just two. I also hint at the fact that console.log itself takes multiple arguments.

log('hello', 'world');

Hopefully your candidate will jump straight to using apply. Sometimes they'll get tripped up on the difference between apply and call, and you can nudge them in the right direction. Passing the console context is also important.

function log(){
 console.log.apply(console, arguments);
};

I next ask candidates to prefix all logged messages with "(app)", for example:

'(app) hello world'

Now this is where it gets a bit trickier. Good candidates will know that arguments is a pseudo array, and to manipulate it we need to convert it into a standard array first. The common pattern for this is using Array.prototype.slice, like in the following:

function log(){
 var args = Array.prototype.slice.call(arguments);
 args.unshift('(app)');
 console.log.apply(console, args);
};

Section 3: Context

The next set of questions are designed to reveal a candidate's knowledge of JavaScript context and this. I first define the following example. Notice the count property is being read off the current context.

var User = {
 count: 1,
 getCount: function() {
 return this.count;
 }
};

I then define the following few lines, and ask the candidate what the output of the log will be.

console.log(User.getCount());
var func = User.getCount;
console.log(func());

In this case, the correct answer is 1 and undefined. You'd be amazed how many people trip up on basic context questions like this. func is called in the context of window, so it loses access to the count property. I explain this to the candidate, and ask how we could ensure the context of func was always bound to User, so that it would correctly return 1.

The correct answer is to use Function.prototype.bind, for example:

var func = User.getCount.bind(User);
console.log(func());

I usually then explain that this function isn't available in older browsers, and ask the candidate to shim it. A lot of weaker candidates will struggle with this, but it's important that anyone you hire has a comprehensive knowledge of apply and call.

Function.prototype.bind = Function.prototype.bind || function(context){
 var self = this;
 return function(){
 return self.apply(context, arguments);
 };
}

Extra points if the candidate shims bind so that it uses the browser's native version if available. At this point, if the candidate is doing really well, I'll ask them to implement currying arguments.
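
One possible answer (a sketch of my own with hypothetical names, not the post's exact solution) keeps the native check and adds curried arguments:

Function.prototype.bind = Function.prototype.bind || function(context) {
  var self = this;
  // Any arguments passed to bind after the context are pre-filled ("curried").
  var curried = Array.prototype.slice.call(arguments, 1);
  return function() {
    var args = curried.concat(Array.prototype.slice.call(arguments));
    return self.apply(context, args);
  };
};

// Usage: pre-fill the first argument.
function add(a, b) { return a + b; }
var add5 = add.bind(null, 5);
console.log(add5(2)); // => 7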

Section 4: Overlay library

In the last part of the interview I ask the candidates to build something practical, usually an 'overlay' library. I find this useful, as it demonstrates a full front-end stack: HTML, CSS and JavaScript. If a candidate excels at the previous part of the interview, I move to this question as soon as possible.

It's up to the candidate as to the exact implementation, but there are a couple of key things to look out for:

It's much better to use position: fixed instead of position: absolute for overlay covers, which will ensure that the overlay encompasses the entire window even if it's scrolled. I'd definitely prompt for this if the candidate misses it, and ask them the difference between fixed and absolute positioning.

.overlay {
 position: fixed;
 left: 0;
 right: 0;
 bottom: 0;
 top: 0;
 background: rgba(0,0,0,.8);
}

How they choose to center content inside the overlay is also revealing. Some candidates might choose to use CSS and absolute positioning, which works if the content has a fixed width and height. Otherwise they may choose to use JavaScript for positioning.

.overlay article {
 position: absolute;
 left: 50%;
 top: 50%;
 margin: -200px 0 0 -200px;
 width: 400px;
 height: 400px;
}
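
A variation worth knowing about (my addition, not part of the original post) centers content whose dimensions aren't fixed, by using a transform instead of negative margins:

.overlay article {
  position: absolute;
  left: 50%;
  top: 50%;
  /* Shift the element back by half of its own rendered size,
     whatever that size happens to be. */
  transform: translate(-50%, -50%);
}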

I also ask them to ensure the overlay is closed if it's clicked, which serves as a good segue into a discussion about the different types of event propagation. Often candidates will slap a click event listener directly on the overlay, and call it a day.

$('.overlay').click(closeOverlay);

This works, until you realize that click events on the overlay's children will also close the overlay — behavior that's definitely not desirable. The solution is to check the event's target and make sure the event wasn't propagated up from a child, like so:

$('.overlay').click(function(e){
 if (e.target == e.currentTarget)
 closeOverlay();
});
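
For what it's worth, the same check works without jQuery (a sketch assuming a single .overlay element and the same closeOverlay function):

document.querySelector('.overlay').addEventListener('click', function(e) {
  // Only close when the overlay itself, not one of its children, was clicked.
  if (e.target === e.currentTarget) closeOverlay();
});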

Other ideas

Clearly these questions only cover a tiny slice of front-end knowledge, and there are a lot of other areas you could be asking about, such as performance, HTML5 APIs, AMD vs CommonJS modules, constructors, types and the box model. I often mix and match question topics, depending on the interviewee's interests.

I also recommend looking at the excellent Front-end Developer Interview Questions repository for ideas, and also any of the JavaScript behavior documented in JavaScript Garden.



PayPal Denies Providing Payment Information to Twitter Username Hacker


URL:http://thenextweb.com/insider/2014/01/29/paypal-denies-providing-payment-information-hacker-hijacked-50000-twitter-username/#!tXHD5


PayPal today denied the allegations made in the viral story “How I lost my $50,000 Twitter username” by Naoki Hiroshima, saying it immediately investigated the situation and has found it was not at fault. The company said its policies prohibit the discussion of “details related to our customers’ accounts,” but it wants to set the record straight as best as it can.

PayPal is making the following assertions:

  • PayPal says it carefully reviewed its records and can confirm that there was a failed attempt made to gain this customer’s information by contacting PayPal.
  • PayPal did not divulge any credit card details related to this account.
  • PayPal did not divulge any personal or financial information related to this account.
  • This individual’s PayPal account was not compromised.

“At PayPal the security of your personal and financial information is our top priority,” the company said. “Our customer service agents are well trained to prevent social hacking attempts like the ones detailed in this blog post.”

PayPal also said it is reaching out to the affected customer and will offer assistance. It’s unclear whether the company will be able to do much for Hiroshima, given that the deed is already done.

Since PayPal isn’t providing a recording of any phone calls the hacker reportedly made to the company, there is no way to verify if PayPal employees followed the correct protocols. At the same time, the hacker in question could easily not have told Hiroshima the truth about how he or she gained access.

GoDaddy has yet to issue its own statement, but even so, it appears this story will remain a “he said, she said” tale. We may never find out exactly what happened.

Update: GoDaddy accepts partial responsibility in social engineering attack of @N’s customer account

See also – PayPal president is fascinated by Bitcoin, says company is ‘thinking about’ including the virtual currency and PayPal and Samsung partner to make it easier to pay and get paid for apps, games, music, movies, and more

Top Image Credit: Eric Piermont/AFP/Getty Images

How to compile with continuations


URL:http://matt.might.net/articles/cps-conversion


Continuation-passing style

If you're new to continuation-passing style, I recommend my earlier post on continuation-passing style by example.

For the first three transforms, the input language is the lambda calculus:

<expr> ::= (λ (<var>) <expr>)
 | <var>
 | (<expr> <expr>)

and, they will all target the same CPS form:

<aexp> ::= (λ (<var>*) <cexp>)
 | <var>

<cexp> ::= (<aexp> <aexp>*)

The CPS world has two kinds of expressions: atomic and complex.

Atomic expressions always produce a value and never cause side effects.

Complex expressions may not terminate, and they may produce side effects.

For the fourth transform, the target will be partitioned CPS, and for the final transform, it will be a more realistic intermediate language with side effects, conditionals, basic values and explicit recursion.

The naive transformation

The naive transformation likely dates to Plotkin's earliest work.

It is the transformation that newcomers often discover for themselves.

In this transformation, we have two functions, M and T:

  • M : expr => aexp converts an atomic value (a variable or a lambda term) into an atomic CPS value; and
  • T : expr × aexp => cexp takes an expression and a syntactic continuation, and applies the continuation to a CPS-converted version of the expression.

The expression (T expr cont) might be read "the transformation of expr into continuation-passing style, such that cont will be invoked on its result."

The M function only has to watch for lambda terms. When it sees a lambda term, it adds a fresh continuation parameter, $k, and then transforms the body of the lambda term into continuation passing style, asking it to invoke $k on the result. Variables are unchanged:

(define (M expr)
  (match expr
    [`(λ (,var) ,expr)
     ; =>
     (define $k (gensym '$k))
     `(λ (,var ,$k) ,(T expr $k))]

    [(? symbol?) #;=> expr]))

The transform (T expr cont) will transform expr into a CPS value, and then construct a call site that applies the term cont to that value:

(define (T expr cont)
  (match expr
    [`(λ . ,_)    `(,cont ,(M expr))]
    [(? symbol?)  `(,cont ,(M expr))]
    [`(,f ,e)
     ; =>
     (define $f (gensym '$f))
     (define $e (gensym '$e))
     (T f `(λ (,$f)
             ,(T e `(λ (,$e)
                      (,$f ,$e ,cont)))))]))

In the function-application transform, the values of both the function and the argument have to be converted into CPS.

The transform converts each with T, and then catches their results in newly-created continuations. (Both of those lambda terms are continuations.)

Examples

While this transformation is simple, its results are poor.

For example, the following:

 (T '(g a) 'halt) 

produces:

 ((λ ($f1445)
    ((λ ($e1446)
       ($f1445 $e1446 halt)) a)) g)

when, clearly, it would have been better to produce:

 (g a halt)

The transformation of function application is the main culprit: the transform assumes that the function and its argument are complex expressions, even though most of the time, they will be atomic.

The higher-order transform

The higher-order CPS transform is a response to the drawbacks of the previous CPS transform.

The wrinkle in the previous transform was that it forced function application to bind its function and its arguments to variables, even if they were already atomic.

If the transform receives a real function expecting the atomic version of the supplied expression, then the transform can check whether it is necessary to bind it to a variable.

In the higher-order transform, the function T : expr × (aexp => cexp) => cexp will receive a function instead of a syntactic continuation; this callback function will be passed an atomized version of the expression, and it is expected to produce a complex CPS form that utilizes it:

(define (T expr k)
  (match expr
    [`(λ . ,_)    (k (M expr))]
    [(? symbol?)  (k (M expr))]
    [`(,f ,e)
     ; =>
     (define $rv (gensym '$rv))
     (define cont `(λ (,$rv) ,(k $rv)))
     (T f (λ ($f)
            (T e (λ ($e)
                   `(,$f ,$e ,cont)))))]))

(define (M expr)
  (match expr
    [`(λ (,var) ,expr)
     ; =>
     (define $k (gensym '$k))
     `(λ (,var ,$k) ,(T expr (λ (rv) `(,$k ,rv))))]

    [(? symbol?) #;=> expr]))

This simple shift in perspective is economical: if the expression to be transformed is already atomic, it need not be bound to a fresh variable.

Example

The transform (T '(g a) (λ (ans) `(halt ,ans))) now produces:

 (g a (λ ($rv1) (halt $rv1))) 

This is two steps forward, and one step back: the higher-order transform eliminated the redundant bindings, but introduced an η-expansion around the continuation.

A hybrid transform

Combining the naive and higher-order transforms provides the best of both worlds.

The transform now has three principal functions:

  • T-c : expr × aexp => cexp
  • T-k : expr × (aexp => cexp) => cexp
  • M : expr => aexp

The transforms can call the function most appropriate in each context:

(define (T-k expr k)
  (match expr
    [`(λ . ,_)    (k (M expr))]
    [(? symbol?)  (k (M expr))]
    [`(,f ,e)
     ; =>
     (define $rv (gensym '$rv))
     (define cont `(λ (,$rv) ,(k $rv)))
     (T-k f (λ ($f)
              (T-k e (λ ($e)
                       `(,$f ,$e ,cont)))))]))

(define (T-c expr c)
  (match expr
    [`(λ . ,_)    `(,c ,(M expr))]
    [(? symbol?)  `(,c ,(M expr))]
    [`(,f ,e)
     ; =>
     (T-k f (λ ($f)
              (T-k e (λ ($e)
                       `(,$f ,$e ,c)))))]))

(define (M expr)
  (match expr
    [`(λ (,var) ,expr)
     ; =>
     (define $k (gensym '$k))
     `(λ (,var ,$k) ,(T-c expr $k))]

    [(? symbol?) #;=> expr]))

Example

With this hybrid transform, (T-c '(g a) 'halt) nails it:

 (g a halt)

Partitioned CPS

At first glance, it seems that CPS destroys the stack.

All calls become tail calls, so in effect, there is no stack.

(There are some advantages [and disadvantages] to stackless compilation.)

Yet, if the transform tags variables, call forms and lambdas as being user or continuation, the stack is recoverable.

To start, split the grammar:

<uexp> ::= (λ (<uvar> <kvar>) <call>)
 | <uvar>

<kexp> ::= (κ (<uvar>) <call>)
 | <kvar>

<call> ::= <ucall> | <kcall>

<ucall> ::= (<uexp> <uexp> <kexp>)

<kcall> ::= (<kexp> <uexp>)

A κ form is equivalent to a λ form, but it indicates that this procedure is a continuation introduced by the transform.

To generate a fresh variable bound to a user value, the transform will use genusym, and for a fresh variable bound to a continuation, the transform will use genksym:

(define (T-k expr k)
  (match expr
    [`(λ . ,_)    (k (M expr))]
    [(? symbol?)  (k (M expr))]
    [`(,f ,e)
     ; =>
     (define $rv (genusym '$rv))
     (define cont `(κ (,$rv) ,(k $rv)))
     (T-k f (λ ($f)
              (T-k e (λ ($e)
                       `(,$f ,$e ,cont)))))]))

(define (T-c expr c)
  (match expr
    [`(λ . ,_)    `(,c ,(M expr))]
    [(? symbol?)  `(,c ,(M expr))]
    [`(,f ,e)
     ; =>
     (T-k f (λ ($f)
              (T-k e (λ ($e)
                       `(,$f ,$e ,c)))))]))

(define (M expr)
  (match expr
    [`(λ (,var) ,expr)
     ; =>
     (define $k (genksym '$k))
     `(λ (,var ,$k) ,(T-c expr $k))]

    [(? symbol?) #;=> expr]))
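
The article does not show definitions for genusym and genksym; a minimal sketch (my assumption, not from the original) is a pair of gensym wrappers that remember which fresh symbols name user values and which name continuations:

; Hypothetical helpers: tag fresh symbols as "user" or "continuation".
(define user-symbols (make-hasheq))
(define cont-symbols (make-hasheq))

(define (genusym base)
  (define s (gensym base))
  (hash-set! user-symbols s #t)
  s)

(define (genksym base)
  (define s (gensym base))
  (hash-set! cont-symbols s #t)
  s)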

Recovering the stack

Because continuations are used in a last-allocated, first-invoked fashion, we can implement them as a stack.

We can even use the stack pointer register.

When a continuation gets allocated, bump the stack pointer.

When a continuation gets invoked, deallocate its space by resetting the stack pointer to that continuation.

In the absence of call/cc, this is provably safe.

And, even with call/ec in use, this is provably safe.

Scaling to real language features

The lambda calculus makes a nice platform for studying the architecture of a program transformation.

Ultimately, however, that transformation must run on real code.

Fortunately, the hybrid CPS transform readily adapts to features like basic values, conditionals, side effects, sequencing and explicit recursion.

Consider an expanded input language:

<aexpr> ::= (λ (<var>*) <expr>)
 | <var>
 | #t | #f
 | <number>
 | <string>
 | (void)
 | call/ec | call/cc

<expr> ::= <aexpr>
 | (begin <expr>*)
 | (if <expr> <expr> <expr>)
 | (set! <var> <expr>)
 | (letrec ([<var> <aexpr>]*) <expr>)
 | (<prim> <expr>*)
 | (<expr> <expr>*)

<prim> = { + , - , / , * , = }

And an expanded CPS:

<aexp> ::= (λ (<var>*) <cexp>)
 | <var>
 | #t | #f
 | <number>
 | <string>
 | (void)

<cexp> ::= (if <aexp> <cexp> <cexp>)
 | (set-then! <var> <aexp> <cexp>)
 | (letrec ([<var> <aexp>]*) <cexp>)
 | ((cps <prim>) <aexp>*)
 | (<aexp> <aexp>*)

The transform ends up with four principal functions:

  • T-c : expr × aexp => cexp
  • T-k : expr × (aexp => cexp) => cexp
  • T*-k : expr* × (aexp* => cexp) => cexp
  • M : expr => aexp

For instance, T-c adds about one case per construct:

(define (T-c expr c)
  (match expr
    [(? aexpr?) #;=> `(,c ,(M expr))]

    [`(begin ,expr)
     (T-c expr c)]

    [`(begin ,expr ,exprs ...)
     (T-k expr (λ (_)
                 (T-c `(begin ,@exprs) c)))]

    [`(if ,exprc ,exprt ,exprf)
     ; We have to bind the cont to avoid
     ; a possible code blow-up:
     (define $k (gensym '$k))
     `((λ (,$k)
         ,(T-k exprc (λ (aexp)
                       `(if ,aexp
                            ,(T-c exprt $k)
                            ,(T-c exprf $k)))))
       ,c)]

    [`(set! ,var ,expr)
     (T-k expr (λ (aexp)
                 `(set-then! ,var ,aexp
                             (,c (void)))))]

    [`(letrec ([,vs ,as] ...) ,expr)
     `(letrec (,@(map list vs (map M as)))
        ,(T-c expr c))]

    [`(,(and p (? prim?)) ,es ...)
     ; =>
     (T*-k es (λ ($es)
                `((cps ,p) ,@$es ,c)))]

    [`(,f ,es ...)
     ; =>
     (T-k f (λ ($f)
              (T*-k es (λ ($es)
                         `(,$f ,@$es ,c)))))]))

The new cases for T-k look almost the same.

The new helper T*-k converts a sequence of expressions simultaneously:

(define (T*-k exprs k)
  (cond
    [(null? exprs) (k '())]
    [(pair? exprs) (T-k (car exprs) (λ (hd)
                     (T*-k (cdr exprs) (λ (tl)
                       (k (cons hd tl))))))]))

And, it is straightforward to desugar call/cc or call/ec; both become:

 (λ (f cc) (f (λ (x _) (cc x)) cc)) 


ryanfunduk.com » Our Culture of Exclusion


URL:http://old.ryanfunduk.com/culture-of-exclusion/


Our Culture of Exclusion  April 2nd, 2012

Or, why I'm not going to *conf

Update October 25, 2013:

Mission Accomplished?

It's been over a year and a half since I wrote this post. I've received several dozen emails and tons of tweets about it (both hate and support). It's also spawned quite a few response blog posts and I've been ridiculed in podcasts and surely plenty of other places I've never come across. I've definitely come to terms with having 'fallen on my sword' for my 'cause', and probably won't ever go to another conference again.

So was it worth it? As hard as it was to write and as imperfect as it came out, ultimately I think it's done what I intended it to do – get people talking. Discussion around the impact of a focus on alcohol and other cliquey/exclusionary practices in the tech scene is at an all time high. JSConf in particular recently instituted changes that de-emphasize alcohol, not just in the official advertising type material but also more fundamentally at the events themselves by planning a much more diverse set of activities.

A final note: Since it's been so long now, I've attempted to anonymize the examples-of-prolific-alcohol-in-tech in the post (names and links have been replaced with xxxxx).

Lately there have been a lot of great articles being written and discussion happening around sexism in the tech industry. And the flames are being fanned by several high-profile incidents of people saying and doing just plain stupid things.

It reminded me of this draft post just sitting here, uncommitted. For quite a while I've been collecting links, tweets and other stuff to illustrate another problem that's been affecting me (and other people, surely). I thought it was finally time to write the post and bring this up because, honestly, I feel excluded too.

The Alcohol Clique

It's the booze. You can't go anywhere, do anything or talk to anyone in the tech industry these days without a drink in your hand. If you try to fake it with a soda water you may as well give up trying to have insightful conversations after the first hour, because everyone else is wasted.

xxxx thinks you should just go out with the bingers and act like a crazy person right along with them – they won't know the difference! Fair enough, but I'm not interested in 'partying hard', I want to talk with like-minded people about subjects I don't necessarily get to talk about at the office. For example, if we don't use some new technology at work – I can go to a conference to chat and learn about it in a casual atmosphere. Except I don't get to do that. It's always the same: talks, then binge time.

In this post I hope to put a bunch of unfortunate examples of this in writing, back to back, to demonstrate the severity of the issue.

Disclaimer

But, before I go any further, I'd like to catch some obvious backlash points early.

I'm the last person who will tell anyone else what they should do with their time or their body. This article isn't supposed to call out anyone specific and say they are the problem, and I'm not trying to tell people their events suck, or that they shouldn't be having fun at the drunken parties.

Also, please allow me to be blunt for a moment and say that I'm in no way trying to say that this situation compares with the sexism problems mentioned earlier. I'm not being oppressed or feel unsafe or objectified or anything serious like that. This is very #firstworldproblems, indeed. However, I think that this situation I'm about to get into does play a part in the various other kinds of exclusion going on – or at least it can't be helping.

I'm posting this to try and show another perspective, another side – one that might be relevant or contributing to other issues we already know we have.

Formalities out of the way, it's rant time.

BingeConf

It's possible you don't even realize what a big deal this is. Practically every single event, and a huge percentage of the online discussion about these events, revolves around binge drinking.

Here's some examples:

  • xxxxxxxxx says that, as a speaker, he doesn't want to be in some private 'speaker room', he wants to be in the 'attendee room'. He qualifies what he means by 'attendee room': Sounds about right.

  • xxxxxxx shows us what really goes on at 'hacknights':

    Update: I spoke to the owner of the company that runs these particular hacknights. He asked me to clarify, and I'm happy to, that these beers were drunk over several weeks and by many people. I think it's fair to say that not all hacknights are just about drinking. Still, I think this tweet illustrates that alcohol is a pervasive and often glorified aspect of our culture.

  • xxxxxxxx at the beginning of a recent talk points out that he works for xxxxx, "you know, the ones who paid for the drinks last night". At that the crowd erupts with cheers and clapping. He goes on to say:

    There were some of you who I saw, last night, there were some of you who really took advantage of that... Which is awesome.

    I don't think it's awesome at all.

  • xxxxxxxxxx explains the 'alcohol situation' to his girlfriend:

    xxxxxxxxx. Here's another great one for good measure:

    Everyone else stays home? :/

  • xxxxxxxxx is assuring us there will be free beers... What could be more enticing than the insightful conversation being had by 50 drunk introverts telling "that's what she said" jokes?

  • xxxxxxx decides what to do about 'space for hacking':

    Q: Is there space set up for hacking? A: We have done this for previous conferences, but to be honest people were having too much social fun to really take advantage of the space.

    Translation: Y U NO DRINKING!?

  • xxxxxxxx is anticipating some serious over-indulgence! On the itinerary, especially for April 4th:

    Nursing a hangover? Aren't we all...

    Good thing there's a timeslot specifically dedicated to something called 'Hangover Cafe' from 9am to 1pm. That's some hangover.

  • xxxxxxx has a nice writeup about what it takes to run a conference. First up on the list of tips: "Do it for Love, not Profit". ++

    Second on the list is making sure everyone gets mangled:

    Drinks are always free for all which certainly helps in making the parties great.

    For me, it helps in making the parties a nightmare.

I could go on...

...but you get the poi– actually, I will go on... It's for your own good, you need to hear this. It's not only conferences and events, it's everywhere!

  • While on the subject of xxxxx, xxxxxxxx was on a podcast a while ago and made the following generalization:

    [You should say] "hey, I see that you have an issue... You wanna go get a beer? ...and we'll chat about it." And that always works. Nobody's ever been offered a beer and said grumbling "No, I don't want a beer." ...I would contend that's the way to fight back, is just to offer a beer.

    Let's be clear, I will turn down a beer any day of the week!

    He also said in that podcast that the community we're in has an epidemic problem (with negativity) – which I think is fair to say, except I'd argue we have more than just the one.

  • xxxxxxx is hiring. Perks for working for them include dental coverage, and 'weekly happy hour'. Those who don't want to participate in getting sloshed regularly... need not apply?

  • NPM goes down...

  • xxxxxxxx on his blog wrote this very interesting and informative post on reinventing the office. Lots of really interesting and useful stuff in here, a great read. Until we get to the end:

    We also serve free beer and red wine on Fridays.

    Why? Because it can be healthy!? The linked article cites that alcohol consumption in moderation (read: an average of 1 drink per day) can lead to increased HDL levels. Wow! Have we finally found a miracle cure for heart disease? Right under our noses this whole time?

    Wait, I wonder what else can contribute to increased HDL levels.

    Wikipedia weighs in:

    • Soluble fiber in your diet
    • Stop smoking
    • Removal of trans fatty acids from your diet
    • Aerobic exercise

    ...and other such unobtrusive methods which don't involve getting inebriated and all rosy-cheeked at the office. Sign me up for those.

    Come on, should we also have complimentary joints available? You know marijuana use can be linked to reduced stress and studies suggest it can be useful in treating depression! Please. This doesn't belong in the workplace!

  • Lastly, I love xxxxxx, I think it's one of the best things to happen to developers in a long time and I use it every day. Naturally I follow their blog, and I notice a lot of posts about these 'drinkup' events. How many? Surely only here and there, right?...

    Well, I wrote a script to crawl the blog and figure out the percentage of blog posts that mention beer or these events.

    All in all approximately 10% of all blog posts on the blog passed my script's test. One in ten posts is saying "just a reminder, you need to be drinking basically all the time".

    As much as I love xxxxx and think I'd love to do the kind of work they do, I can't imagine actually going into that office every day, confronted with people drinking out of kegs. xxxxx people, this is not healthy – physically or mentally!

Some Personal Experience

I'm lucky enough to work for an awesome company that doesn't perpetuate nonsense like insisting everyone go out to get hammered with new candidates before offering them the job, or perform head tilts with "are you a weirdo?" looks when someone 'inexplicably' turns down an offer to go sit in a loud dark bar for a few hours after work.

So, why do I care about this?

Back in 2008 I decided to leave my boring cube job at Research in Motion and move to Toronto to work for a startup. I remember thinking to myself: "Self, don't just stay home and stare at your laptop! Get out there to events and stuff and meet people. It's not what you know, it's who you know!" Hey I was just out of school gimme a break.

I figured it couldn't be too hard. Toronto is big, pick an event and just go. Lucky for me xxxxx was right around the corner. Perfect timing as I prepare to move, so off I go to the party on opening night at Amsterdam Brewery.

The music is absolutely blasting. It's practically pitch black. What have I gotten myself into...

The next day, there are some killer talks. Then another boozefest. More awesome talks. And a last rooftop party which I decide to just skip.

Funnily enough, this is almost exactly the same formula as xxxxxx about a year later:

Awesome talks rudely interrupted by an 'epic' drunken party in some kind of underground plane-turned-bar where I attempt to have a top-of-my-lungs conversation with a guy who had interesting things to say (I think?) about Clojure. I lasted about 15 minutes at the party the next day and instead walked around, sober, talking and enjoying the great weather with my beautiful wife.

Over the next 2 years or so I'd go to a meetup here or there with mostly the same experience, except of course usually without the high caliber talks. Needless to say, I stopped going to these things.

Recently I was intrigued by xxxxxxxxx – oh boy am I ever into client side frameworks right now! The website makes it sound innocent enough:

...we run you and the rest of your warrior class through the all-inclusive fun gamut each and every evening.

Oh no no, wait a minute, I'm not falling for that again. I know what 'fun gamut' means. It means everyone gets shitfaced!

How can I justify spending $650 on something like that? It must be a huge portion of my ticket that goes into these elaborate parties (Edit: apparently not? It matters little). Can I buy a ticket that only includes entry into... you know... the conference?

The organizers can have the best intentions, and I'm absolutely sure that most do (from an FAQ: "We really bend over backwards to make sure that everyone is comfortable and having a good time."), but this is bigger than that – as xxxxx might put it – it's systemic. You can't just say "we'll make sure you have a good time". How are you going to do that?

The simple truth is all you can do is just opt out of going to these parties... or put another way, you can opt to exclude yourself.

It's Attracting the Brogrammers

Let's change gears for a moment. I think it actually runs deeper than I've been referring to so far. These parties have nothing to do with JavaScript or client side frameworks. And, in my opinion, they encourage behavior that ultimately leads to tweets like this: ...which I think are grossly underestimating the portion of the industry that is excluded!

Are we really shocked about this brogrammer trend?

If you buy crap like this to 'erase the night before', find yourself discussing hangover cures (here's a tip from my past self: avoid caffeine) with other conference attendees, or suffer acute liver failure... you might be a brogrammer, and it just might be time to 'detox'.

I for one do not like this one bit, and no one wants to talk about it. Here's what I hoped might be the start of a conversation:

And there's nothing but dead air. No reply, and not a single retweet or anything. Well, not for my tweet anyway. 50+ retweets for the OP, presumably by people who think some entirely fluffy, meaningless term like 'ninja' (remind anyone of 'guru'?) is a problem, and that's why there are brogrammers. For crying out loud, this has to be a joke right?

Screw You Guys, I'm Going Home

I've stopped going to 'community events', and I've made a personal decision to leave the city – where I thought I needed to be to grow my career. Also you can often watch the talks from conferences later (via Confreaks for example). So I've mostly made my peace with the whole situation at this point.

But with all the talk of people being excluded, maybe it's time we look at the overall attitude pervasive at these events. Maybe it's not just subtle, passive, even unintentional sexist and racist comments. Maybe it's not just treating PHP programmers and Windows users like they're inferior.

Maybe we should take a step back and realize that lots of people are probably feeling excluded from this cliquey club of bar crawls.

Perhaps it would be easier to educate people on appropriate conduct (you may have noticed the 'fine print' approach isn't really working...) when you don't turn around and encourage them to drink their inhibitions away in what should be a professional setting... Don't you think it would be easier for under-represented groups to participate when they can be comfortable attending meetups and events?

I don't want to speak for any group I'm not a part of, since I don't know what they go through or how they feel. But I know that I feel extremely uncomfortable at these drinking parties, andI fit the profile for the average attendee (let's not beat around the bush, that means: young + white + male). It's not hard to imagine how many who don't fit the profile would feel like they don't fit in. And I think the reason is obvious: because everything has been specifically constructed and tailored for that single group.

To An Outsider

In writing this post I asked my wife to do some proof-reading (she can pick out an 'and and' from 10,000ft!) and give me some suggestions.

During her review she said to me:

Wow I'm so glad I'm not a programmer. Seems like soon 'programmer' will be considered just as douchey a profession as being a banker on Wall Street.

Harsh. We look like a bunch of assholes.

I'll Say It Straight

It's sort of like high school is repeating itself. We have an isolated population, and within it we've got the cool kids making life (real life, this time) difficult, frustrating and miserable for people who don't deserve to be walked all over.

Consider for a moment that while you might love binge drinking – and listen, I've done my share in the past... so I know it can be a blast – not everyone is into it, and it has nothing to do with code.

These planned binges sound as strange to some as the conference organizer going up on stage and saying "Ok everyone, off to church for evening prayer!" or "We've spared no expense on our skinny dipping venue for tonight!"

Leave the lifestyle choice stuff out of the official programme.

Keep JavaScript conferences about JavaScript,
Ruby conferences about Ruby,
* conferences about *.

Final Thoughts

So it's time for some concrete suggestions for what to do about this... The way I see it, it would be pretty simple to make a positive impact:

  • Meetups: host these in co-working spaces or coffee shops (you can get tea or water at a coffee shop and no one will think it's weird). Added bonus to this is that you'll actually be able to talk to people, and then the next day you'll remember everything.

  • For conferences: don't plan elaborate drinking parties and put them on the itinerary. Some people who want to go out to the club can still do so, they don't need you to schedule it for them. Yes this means you'll probably need to come up with something else to do in the evenings... maybe real hacknights? Coding competitions/contests or maybe a 'DemoCamp' style thing – but not at a bar.

    If you absolutely must plan an open bar type event, offer a ticket type that is just for the conference track. xxxxxx has it backwards, you can buy party only tickets! WTF!? (And at a price no doubt subsidized by the sold out conference tickets. Edit: again, apparently not)

  • Every day at the office: No company provided alcohol (no piles of meat, bongs or lube either – none of this belongs in a place of business).

  • Online: If your project refers to drinking in a way more forceful than, say, homebrew does – you can help by just toning it down a bit. Your project doesn't actually have anything to do with booze. Perfection is achieved when there is nothing left to take away.

What do you think?

Have you also experienced this? Or maybe I should lighten up?

I don't have comments on my blog but I'd love to hear from you if you feel the same way (or not). Tweet at me, discuss on HN/etc, or pick some other method and I'd be happy to chat about it, just not over a beer :)


Machine Learning in Javascript: Introduction | Burak Kanber's Blog


URL:http://burakkanber.com/blog/machine-learning-in-other-languages-introduction/


On September 3, 2012

I love machine learning algorithms. I’ve taught classes and seminars and given talks on ML. The subject is fascinating to me, but like all skills fascination simply isn’t enough. To get good at something, you need to practice!

I also happen to be a PHP and JavaScript developer. I’ve taught classes on both of these as well — but like any decent software engineer I have experience with Ruby, Python, Perl, and C. I just prefer PHP and JS. (Before you flame PHP, I’ll just say that while it has its problems, I like it because it gets stuff done.)

Whenever I tell people that Tidal Labs’ ML algorithms are in PHP, they look at me funny and ask me how it’s possible. Simple: it’s possible to write ML algorithms in just about any language. Most people just don’t care to learn the fundamentals strongly enough that they can write an algorithm from scratch. Instead, they rely on Python libraries to do the work for them, and end up not truly grasping what’s happening inside the black box. Other people only know ML academically, using Octave or Matlab.

Through this series of articles, I’ll teach you the fundamental machine learning algorithms using Javascript — not Python or Octave — as the example language. Originally I intended to write these articles in a variety of languages (PHP, JS, Perl, C, Ruby), but decided to stick with Javascript for the following reasons:

  • If you’re a web developer you probably already know JS, regardless of your backend expertise.
  • Javascript has JSFiddle, a great tool that lets me embed executable Javascript right in my posts (hard to do that with C or Perl!)
  • Several people asked me to stick to just one language.

While I’ll be writing these articles with Javascript in mind, please re-write the examples in your language of choice as homework! Practice is how you get better, and writing the same algorithm several times in different languages really helps you understand the paradigms better.

It’s possible to get excellent performance out of ML algorithms in languages like PHP and Javascript. I advocate writing ML algorithms in other languages because the practice of writing ML algorithms from scratch helps you learn them fundamentally, and it also helps you unify your backend by not requiring a Python script to do processing in the middle of a PHP application. You can do it in PHP, and cut out the (mental and computational) overhead of using another language.

… well, most of the time. There are some things you really can’t do in PHP or Javascript, but those are the more advanced algorithms that require heavy matrix math. While you can do matrix math in JS, there is a big difference between simply “doing matrix math” and doing it efficiently. The advantage of NumPy or Matlab is not in their ability to do matrix operations, it’s in the fact that they use optimized algorithms to do so — things you wouldn’t be able to do yourself unless you dedicate yourself to learning computational linear algebra. And that’s not my field, so we’ll just stick to the ML that doesn’t require the advanced matrix math. You could try brute-forcing the matrix operations, but you’ll end up with a relatively inefficient system. It’s great for learning, so I’m not discouraging it — I would just be wary of doing that in a production environment.

Keep in mind that most of the algorithms we’ll look at can be solved both with and without matrix math. We’ll use iterative or functional approaches here, but most of these algorithms can be done with linear algebra as well. There’s more than one way to skin a cat! I encourage you to also go and learn (or figure out) the linear algebra approaches, but since that’s not my strong suit I’ll use other approaches.

Here are some of the algorithms I intend to cover. I’ll update this list with links to the relevant articles as they’re published:

Happy learning!


iSICP 1.1 - The Elements of Programming


URL:http://xuanji.appspot.com/isicp/1-1-elements.html


The Elements of Programming

A powerful programming language is more than just a means for instructing a computer to perform tasks. The language also serves as a framework within which we organize our ideas about processes. Thus, when we describe a language, we should pay particular attention to the means that the language provides for combining simple ideas to form more complex ideas. Every powerful language has three mechanisms for accomplishing this:

  • primitive expressions, which represent the simplest entities the language is concerned with,

  • means of combination, by which compound elements are built from simpler ones, and

  • means of abstraction, by which compound elements can be named and manipulated as units.

In programming, we deal with two kinds of elements: procedures and data. (Later we will discover that they are really not so distinct.) Informally, data is “stuff ” that we want to manipulate, and procedures are descriptions of the rules for manipulating the data. Thus, any powerful programming language should be able to describe primitive data and primitive procedures and should have methods for combining and abstracting procedures and data.

In this chapter we will deal only with simple numerical data so that we can focus on the rules for building procedures. In later chapters we will see that these same rules allow us to build procedures to manipulate compound data as well.

Expressions

One easy way to get started at programming is to examine some typical interactions with an interpreter for the Scheme dialect of Lisp. Imagine that you are sitting at a computer terminal. You type an expression, and the interpreter responds by displaying the result of its evaluating that expression.

One kind of primitive expression you might type is a number. (More precisely, the expression that you type consists of the numerals that represent the number in base 10.) If you present Lisp with a number

486

the interpreter will respond by printing 486.

Expressions representing numbers may be combined with an expression representing a primitive procedure (such as + or *) to form a compound expression that represents the application of the procedure to those numbers. For example:

(+ 137 349)

(- 1000 334)

(* 5 99)

(/ 10 5)

(+ 2.7 10)

Expressions such as these, formed by delimiting a list of expressions within parentheses in order to denote procedure application, are called combinations. The leftmost element in the list is called the operator, and the other elements are called operands. The value of a combination is obtained by applying the procedure specified by the operator to the arguments that are the values of the operands.

The convention of placing the operator to the left of the operands is known as prefix notation, and it may be somewhat confusing at first because it departs significantly from the customary mathematical convention. Prefix notation has several advantages, however. One of them is that it can accommodate procedures that may take an arbitrary number of arguments, as in the following examples:

(+ 21 35 12 7)

(* 25 4 12)

No ambiguity can arise, because the operator is always the leftmost element and the entire combination is delimited by the parentheses.

A second advantage of prefix notation is that it extends in a straightforward way to allow combinations to be nested, that is, to have combinations whose elements are themselves combinations:

(+ (* 3 5) (- 10 6))

There is no limit (in principle) to the depth of such nesting and to the overall complexity of the expressions that the Lisp interpreter can evaluate. It is we humans who get confused by still relatively simple expressions such as

(+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))

which the interpreter would readily evaluate to be 57. We can help ourselves by writing such an expression in the form

(+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))

following a formatting convention known as pretty-printing, in which each long combination is written so that the operands are aligned vertically. The resulting indentations display clearly the structure of the expression.

Even with complex expressions, the interpreter always operates in the same basic cycle: It reads an expression from the terminal, evaluates the expression, and prints the result. This mode of operation is often expressed by saying that the interpreter runs in a read-eval-print loop. Observe in particular that it is not necessary to explicitly instruct the interpreter to print the value of the expression.

Exercise 1.1.1. Below is a sequence of expressions. What is the result printed by the interpreter in response to each expression? Assume that the sequence is to be evaluated in the order in which it is presented.

10

(+ 5 3 4)

(- 9 1)

(/ 6 2)

(+ (* 2 4) (- 4 6))

Exercise 1.2. Translate the following expression into prefix form. $$ \frac{5 + 4 + (2 - (3 - (6 + \frac{4}{5})))}{3(6-2)(2-7)} $$

Naming and the Environment

A critical aspect of a programming language is the means it provides for using names to refer to computational objects. We say that the name identifies a variable whose value is the object.

In the Scheme dialect of Lisp, we name things with define. Typing

(define size 2)

causes the interpreter to associate the value 2 with the name size. Once the name size has been associated with the number 2, we can refer to the value 2 by name:

size

(* 5 size)

Here are further examples of the use of define:

(define pi 3.14159) (define radius 10)

(* pi (* radius radius))

(define circumference (* 2 pi radius))

circumference

define is our language's simplest means of abstraction, for it allows us to use simple names to refer to the results of compound operations, such as the circumference computed above. In general, computational objects may have very complex structures, and it would be extremely inconvenient to have to remember and repeat their details each time we want to use them. Indeed, complex programs are constructed by building, step by step, computational objects of increasing complexity. The interpreter makes this step-by-step program construction particularly convenient because name-object associations can be created incrementally in successive interactions. This feature encourages the incremental development and testing of programs and is largely responsible for the fact that a Lisp program usually consists of a large number of relatively simple procedures.

It should be clear that the possibility of associating values with symbols and later retrieving them means that the interpreter must maintain some sort of memory that keeps track of the name-object pairs. This memory is called the environment (more precisely the global environment, since we will see later that a computation may involve a number of different environments).

Exercise 1.1.2. Below is a sequence of expressions. What is the result printed by the interpreter in response to each expression? Assume that the sequence is to be evaluated in the order in which it is presented.

(define a 3)

(define b (+ a 1))

(+ a b (* a b))

(= a b)

Evaluating Combinations

One of our goals in this chapter is to isolate issues about thinking procedurally. As a case in point, let us consider that, in evaluating combinations, the interpreter is itself following a procedure.

To evaluate a combination, do the following: Evaluate the subexpressions of the combination. Apply the procedure that is the value of the leftmost subexpression (the operator) to the arguments that are the values of the other subexpressions (the operands).

Even this simple rule illustrates some important points about processes in general. First, observe that the first step dictates that in order to accomplish the evaluation process for a combination we must first perform the evaluation process on each element of the combination. Thus, the evaluation rule is recursive in nature; that is, it includes, as one of its steps, the need to invoke the rule itself.

Notice how succinctly the idea of recursion can be used to express what, in the case of a deeply nested combination, would otherwise be viewed as a rather complicated process. For example, evaluating

(* (+ 2 (* 4 6)) (+ 3 5 7))

requires that the evaluation rule be applied to four different combinations. We can obtain a picture of this process by representing the combination in the form of a tree, as shown in figure 1.1.

Figure 1.1 : Tree representation, showing the value of each subcombination.

Each combination is represented by a node with branches corresponding to the operator and the operands of the combination stemming from it. The terminal nodes (that is, nodes with no branches stemming from them) represent either operators or numbers. Viewing evaluation in terms of the tree, we can imagine that the values of the operands percolate upward, starting from the terminal nodes and then combining at higher and higher levels. In general, we shall see that recursion is a very powerful technique for dealing with hierarchical, treelike objects. In fact, the "percolate values upward" form of the evaluation rule is an example of a general kind of process known as tree accumulation.

Next, observe that the repeated application of the first step brings us to the point where we need to evaluate, not combinations, but primitive expressions such as numerals, built-in operators, or other names. We take care of the primitive cases by stipulating that

  • the values of numerals are the numbers that they name,
  • the values of built-in operators are the machine instruction sequences that carry out the corresponding operations, and
  • the values of other names are the objects associated with those names in the environment.

We may regard the second rule as a special case of the third one by stipulating that symbols such as + and * are also included in the global environment, and are associated with the sequences of machine instructions that are their "values." The key point to notice is the role of the environment in determining the meaning of the symbols in expressions. In an interactive language such as Lisp, it is meaningless to speak of the value of an expression such as (+ x 1) without specifying any information about the environment that would provide a meaning for the symbol x (or even for the symbol +). As we shall see in chapter 3, the general notion of the environment as providing a context in which evaluation takes place will play an important role in our understanding of program execution.

Notice that the evaluation rule given above does not handle definitions. For instance, evaluating (define x 3) does not apply define to two arguments, one of which is the value of the symbol x and the other of which is 3, since the purpose of the define is precisely to associate x with a value. (That is, (define x 3) is not a combination.)

Such exceptions to the general evaluation rule are called special forms. Define is the only example of a special form that we have seen so far, but we will meet others shortly. Each special form has its own evaluation rule. The various kinds of expressions (each with its associated evaluation rule) constitute the syntax of the programming language. In comparison with most other programming languages, Lisp has a very simple syntax; that is, the evaluation rule for expressions can be described by a simple general rule together with specialized rules for a small number of special forms.

Compound Procedures

We have identified in Lisp some of the elements that must appear in any powerful programming language:

  • Numbers and arithmetic operations are primitive data and procedures.
  • Nesting of combinations provides a means of combining operations.
  • Definitions that associate names with values provide a limited means of abstraction.

Now we will learn about procedure definitions, a much more powerful abstraction technique by which a compound operation can be given a name and then referred to as a unit.

We begin by examining how to express the idea of "squaring." We might say, "To square something, multiply it by itself." This is expressed in our language as

(define (square x) (* x x))

We can understand this in the following way:

(define (square  x)        (*         x     x))
  ↑        ↑     ↑          ↑         ↑    ↑
 To      square something, multiply   it by  itself.

We have here a compound procedure, which has been given the name square. The procedure represents the operation of multiplying something by itself. The thing to be multiplied is given a local name, x, which plays the same role that a pronoun plays in natural language. Evaluating the definition creates this compound procedure and associates it with the name square.

The general form of a procedure definition is

(define (<name> <formal parameters>) <body>)

The name is a symbol to be associated with the procedure definition in the environment. The formal parameters are the names used within the body of the procedure to refer to the corresponding arguments of the procedure. The body is an expression that will yield the value of the procedure application when the formal parameters are replaced by the actual arguments to which the procedure is applied. The name and the formal parameters are grouped within parentheses, just as they would be in an actual call to the procedure being defined.

Having defined square, we can now use it:

(square 21)

(square (+ 2 5))

(square (square 3))

We can also use square as a building block in defining other procedures. For example, $x^2 + y^2$ can be expressed as

(+ (square x) (square y))

We can easily define a procedure sum-of-squares that, given any two numbers as arguments, produces the sum of their squares:

(define (sum-of-squares x y) (+ (square x) (square y)))

(sum-of-squares 3 4)

Now we can use sum-of-squares as a building block in constructing further procedures:

(define (f a) (sum-of-squares (+ a 1) (* a 2))) (f 5)

Compound procedures are used in exactly the same way as primitive procedures. Indeed, one could not tell by looking at the definition of sum-of-squares given above whether square was built into the interpreter, like + and *, or defined as a compound procedure.

The Substitution Model for Procedure Application

To evaluate a combination whose operator names a compound procedure, the interpreter follows much the same process as for combinations whose operators name primitive procedures, which we described in section 1.1.3. That is, the interpreter evaluates the elements of the combination and applies the procedure (which is the value of the operator of the combination) to the arguments (which are the values of the operands of the combination).

We can assume that the mechanism for applying primitive procedures to arguments is built into the interpreter. For compound procedures, the application process is as follows:

To apply a compound procedure to arguments, evaluate the body of the procedure with each formal parameter replaced by the corresponding argument.

To illustrate this process, let's evaluate the combination

(f 5)

where f is the procedure defined in section 1.1.4. We begin by retrieving the body of f:

(sum-of-squares (+ a 1) (* a 2))

Then we replace the formal parameter a by the argument 5:

(sum-of-squares (+ 5 1) (* 5 2))

Thus the problem reduces to the evaluation of a combination with two operands and an operator sum-of-squares. Evaluating this combination involves three subproblems. We must evaluate the operator to get the procedure to be applied, and we must evaluate the operands to get the arguments. Now (+ 5 1) produces 6 and (* 5 2) produces 10, so we must apply the sum-of-squares procedure to 6 and 10. These values are substituted for the formal parameters x and y in the body of sum-of-squares, reducing the expression to

(+ (square 6) (square 10))

If we use the definition of square, this reduces to

(+ (* 6 6) (* 10 10))

which reduces by multiplication to

(+ 36 100)

and finally to

136

The process we have just described is called the substitution model for procedure application. It can be taken as a model that determines the "meaning" of procedure application, insofar as the procedures in this chapter are concerned. However, there are two points that should be stressed:

  • The purpose of the substitution is to help us think about procedure application, not to provide a description of how the interpreter really works. Typical interpreters do not evaluate procedure applications by manipulating the text of a procedure to substitute values for the formal parameters. In practice, the "substitution" is accomplished by using a local environment for the formal parameters. We will discuss this more fully in chapters 3 and 4 when we examine the implementation of an interpreter in detail.

  • Over the course of this book, we will present a sequence of increasingly elaborate models of how interpreters work, culminating with a complete implementation of an interpreter and compiler in chapter 5. The substitution model is only the first of these models — a way to get started thinking formally about the evaluation process. In general, when modeling phenomena in science and engineering, we begin with simplified, incomplete models. As we examine things in greater detail, these simple models become inadequate and must be replaced by more refined models. The substitution model is no exception. In particular, when we address in chapter 3 the use of procedures with "mutable data," we will see that the substitution model breaks down and must be replaced by a more complicated model of procedure application.

Applicative order versus normal order

According to the description of evaluation given in section 1.1.3, the interpreter first evaluates the operator and operands and then applies the resulting procedure to the resulting arguments. This is not the only way to perform evaluation. An alternative evaluation model would not evaluate the operands until their values were needed. Instead it would first substitute operand expressions for parameters until it obtained an expression involving only primitive operators, and would then perform the evaluation. If we used this method, the evaluation of

(f 5)

would proceed according to the sequence of expansions

(sum-of-squares (+ 5 1) (* 5 2))

(+ (square (+ 5 1)) (square (* 5 2)))

(+ (* (+ 5 1) (+ 5 1)) (* (* 5 2) (* 5 2)))

followed by the reductions

(+ (* 6 6) (* 10 10))

(+ 36 100)

136

This gives the same answer as our previous evaluation model, but the process is different. In particular, the evaluations of (+ 5 1) and (* 5 2) are each performed twice here, corresponding to the reduction of the expression

(* x x)

with x replaced respectively by (+ 5 1) and (* 5 2).

This alternative "fully expand and then reduce" evaluation method is known as normal-order evaluation, in contrast to the "evaluate the arguments and then apply" method that the interpreter actually uses, which is called applicative-order evaluation. It can be shown that, for procedure applications that can be modeled using substitution (including all the procedures in the first two chapters of this book) and that yield legitimate values, normal-order and applicative-order evaluation produce the same value. (See exercise 1.5 for an instance of an "illegitimate" value where normal-order and applicative-order evaluation do not give the same result.)

Lisp uses applicative-order evaluation, partly because of the additional efficiency obtained from avoiding multiple evaluations of expressions such as those illustrated with (+ 5 1) and (* 5 2) above and, more significantly, because normal-order evaluation becomes much more complicated to deal with when we leave the realm of procedures that can be modeled by substitution. On the other hand, normal-order evaluation can be an extremely valuable tool, and we will investigate some of its implications in chapters 3 and 4.

Conditional Expressions and Predicates

The expressive power of the class of procedures that we can define at this point is very limited, because we have no way to make tests and to perform different operations depending on the result of a test. For instance, we cannot define a procedure that computes the absolute value of a number by testing whether the number is positive, negative, or zero and taking different actions in the different cases according to the rule $$ |x| = \begin{cases} x &\mbox{if } x > 0 \\ 0 &\mbox{if } x = 0 \\ -x &\mbox{if } x < 0 \end{cases} $$

This construct is called a case analysis, and there is a special form in Lisp for notating such a case analysis. It is called cond (which stands for "conditional"), and it is used as follows:

(define (abs x) (cond ((> x 0) x) ((= x 0) 0) ((< x 0) (- x))))

The general form of a conditional expression is

(cond (<p1> <e1>) (<p2> <e2>) ... (<pn> <en>))

consisting of the symbol cond followed by parenthesized pairs of expressions (<p> <e>) called clauses. The first expression in each pair is a predicate — that is, an expression whose value is interpreted as either true or false.

Conditional expressions are evaluated as follows. The predicate <p1> is evaluated first. If its value is false, then <p2> is evaluated. If <p2>'s value is also false, then <p3> is evaluated. This process continues until a predicate is found whose value is true, in which case the interpreter returns the value of the corresponding consequent expression <e> of the clause as the value of the conditional expression. If none of the <p>'s is found to be true, the value of the cond is undefined.

The word predicate is used for procedures that return true or false, as well as for expressions that evaluate to true or false. The absolute-value procedure abs makes use of the primitive predicates <, >, and =. These take two numbers as arguments and test whether the first number is, respectively, greater than, less than, or equal to the second number, returning true or false accordingly.
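To make this evaluation order concrete, here is a short trace of our own (not from the original text) of what happens when the abs procedure given above is applied to -5:

(abs -5)
; (> -5 0) evaluates to false, so the first clause is skipped
; (= -5 0) evaluates to false, so the second clause is skipped
; (< -5 0) evaluates to true, so the value of the cond is (- -5), namely 5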

Another way to write the absolute-value procedure is

(define (abs x) (cond ((< x 0) (- x)) (else x)))

which could be expressed in English as "If $x$ is less than zero return $-x$; otherwise return $x$." else is a special symbol that can be used in place of the <p> in the final clause of a cond. This causes the cond to return as its value the value of the corresponding <e> whenever all previous clauses have been bypassed. In fact, any expression that always evaluates to a true value could be used as the <p> here.

Here is yet another way to write the absolute-value procedure:

(define (abs x) (if (< x 0) (- x) x))

This uses the special form if, a restricted type of conditional that can be used when there are precisely two cases in the case analysis. The general form of an if expression is

(if <predicate> <consequent> <alternative>)

To evaluate an if expression, the interpreter starts by evaluating the <predicate> part of the expression. If the <predicate> evaluates to a true value, the interpreter then evaluates the <consequent> and returns its value. Otherwise it evaluates the <alternative> and returns its value.
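As a minimal illustration of our own (not an example from the text):

(if (> 8 5) (- 8 5) (- 5 8))

The predicate (> 8 5) evaluates to true, so the interpreter evaluates the consequent (- 8 5) and returns 3; the alternative (- 5 8) is never evaluated.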

In addition to primitive predicates such as <, = and >, there are logical composition operations, which enable us to construct compound predicates. The three most frequently used are these:

  • (and <e1> ... <en>)

    The interpreter evaluates the expressions <e> one at a time, in left-to-right order. If any <e> evaluates to false, the value of the and expression is false, and the rest of the <e>'s are not evaluated. If all <e>'s evaluate to true values, the value of the and expression is the value of the last one.

  • (or <e1> ... <en>)

    The interpreter evaluates the expressions <e> one at a time, in left-to-right order. If any <e> evaluates to a true value, that value is returned as the value of the or expression, and the rest of the <e>'s are not evaluated. If all <e>'s evaluate to false, the value of the or expression is false.

  • (not <e>)

    The value of a not expression is true when the expression <e> evaluates to false, and false otherwise.

Notice that and and or are special forms, not procedures, because the subexpressions are not necessarily all evaluated. Not is an ordinary procedure. As an example of how these are used, the condition that a number $x$ be in the range $5 < x < 10$ may be expressed as

(and (> x 5) (< x 10))
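A few further examples of our own (assuming the standard behavior described above) emphasize that and and or return values rather than merely true or false, and that they stop evaluating as soon as the result is determined:

(and (> 3 2) (+ 1 1)) ; every subexpression has a true value, so the result is 2, the value of the last one
(or (= 1 2) (+ 1 1))  ; (= 1 2) is false, so (+ 1 1) is evaluated and its value 2 is returned
(and (= 1 2) (/ 1 0)) ; (= 1 2) is false, so the division is never evaluated and the result is false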

As another example, we can define a predicate to test whether one number is greater than or equal to another as

(define (>= x y) (or (> x y) (= x y)))

or alternatively as

(define (>= x y) (not (< x y)))

Exercise 1.3. Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.

(define (square-sum-larger a b c) 'your-code-here)

Exercise 1.4. Observe that our model of evaluation allows for combinations whose operators are compound expressions. Use this observation to describe the behavior of the following procedure:

(define (mystery a b) ((if (> b 0) + -) a b))


Exercise 1.5. Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order evaluation or normal-order evaluation. He defines the following two procedures:

(define (p) (p)) (define (test x y) (if (= x 0) 0 y))

Then he evaluates the expression

(test 0 (p))

What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer. (Assume that the evaluation rule for the special form if is the same whether the interpreter is using normal or applicative order: The predicate expression is evaluated first, and the result determines whether to evaluate the consequent or the alternative expression.)


Example: Square Roots by Newton's Method

Procedures, as introduced above, are much like ordinary mathematical functions. They specify a value that is determined by one or more parameters. But there is an important difference between mathematical functions and computer procedures. Procedures must be effective.

As a case in point, consider the problem of computing square roots. We can define the square-root function as $$ \text{$\sqrt x$ = the $y$ such that $y \ge 0$ and $y^2 = x$} $$

This describes a perfectly legitimate mathematical function. We could use it to recognize whether one number is the square root of another, or to derive facts about square roots in general. On the other hand, the definition does not describe a procedure. Indeed, it tells us almost nothing about how to actually find the square root of a given number. It will not help matters to rephrase this definition in pseudo-Lisp:

(define (sqrt x) (the y (and (>= y 0) (= (square y) x))))

This only begs the question.

The contrast between function and procedure is a reflection of the general distinction between describing properties of things and describing how to do things, or, as it is sometimes referred to, the distinction between declarative knowledge and imperative knowledge. In mathematics we are usually concerned with declarative (what is) descriptions, whereas in computer science we are usually concerned with imperative (how to) descriptions.

How does one compute square roots? The most common way is to use Newton's method of successive approximations, which says that whenever we have a guess $y$ for the value of the square root of a number $x$, we can perform a simple manipulation to get a better guess (one closer to the actual square root) by averaging $y$ with $\frac{x}{y}$. For example, we can compute the square root of 2 as follows. Suppose our initial guess is $1$:

\begin{array}{c|c|c} \text{Guess} & \text{Quotient} & \text{Average} \\ \hline \\ 1 & \frac{2}{1} = 2 & \frac{1}{2} (2 + 1) = 1.5 \\ 1.5 & \frac{2}{1.5} = 1.3333 & \frac{1}{2} (1.3333 + 1.5) = 1.4167 \\ 1.4167 & \frac{2}{1.4167} = 1.4118 & \frac{1}{2} (1.4167 + 1.4118) = 1.4142 \\ 1.4142 & \frac{2}{1.4142} = 1.4142 & \cdots \end{array}

Continuing this process, we obtain better and better approximations to the square root.

Now let's formalize the process in terms of procedures. We start with a value for the radicand (the number whose square root we are trying to compute) and a value for the guess. If the guess is good enough for our purposes, we are done; if not, we must repeat the process with an improved guess. We write this basic strategy as a procedure:

(define (sqrt-iter guess x) (if (good-enough? guess x) guess (sqrt-iter (improve guess x) x)))

A guess is improved by averaging it with the quotient of the radicand and the old guess:

(define (improve guess x) (average guess (/ x guess)))

where

(define (average x y) (/ (+ x y) 2))

We also have to say what we mean by "good enough." The following will do for illustration, but it is not really a very good test. (See exercise 1.7.) The idea is to improve the answer until it is close enough so that its square differs from the radicand by less than a predetermined tolerance (here 0.001):

(define (good-enough? guess x) (< (abs (- (square guess) x)) 0.001))

Finally, we need a way to get started. For instance, we can always guess that the square root of any number is 1:

(define (sqrt x) (sqrt-iter 1.0 x))

If we type these definitions to the interpreter, we can use sqrt just as we can use any procedure:

(sqrt 9)

(sqrt (+ 100 37))

(sqrt (+ (sqrt 2) (sqrt 3)))

(square (sqrt 1000))

The sqrt program also illustrates that the simple procedural language we have introduced so far is sufficient for writing any purely numerical program that one could write in, say, C or Pascal. This might seem surprising, since we have not included in our language any iterative (looping) constructs that direct the computer to do something over and over again. Sqrt-iter, on the other hand, demonstrates how iteration can be accomplished using no special construct other than the ordinary ability to call a procedure.
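To underscore the point, here is a small sketch of our own (not part of the square-root program): a counting loop that sums the integers from i through b, carried out, like sqrt-iter, with nothing but the ability to call a procedure. The names sum-iter, i, b, and total are ours; the loop state is passed along as arguments, just as sqrt-iter passes along guess and x.

(define (sum-iter i b total)
  (if (> i b)
      total
      (sum-iter (+ i 1) b (+ total i))))

(sum-iter 1 10 0) ; evaluates to 55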

Exercise 1.6. Alyssa P. Hacker doesn't see why if needs to be provided as a special form. "Why can't I just define it as an ordinary procedure in terms of cond?" she asks. Alyssa's friend Eva Lu Ator claims this can indeed be done, and she defines a new version of if:

(define (new-if predicate then-clause else-clause) (cond (predicate then-clause) (else else-clause)))

Eva demonstrates the program for Alyssa:

(new-if (= 2 3) 0 5)

(new-if (= 1 1) 0 5)

Delighted, Alyssa uses new-if to rewrite the square root program:

(define (sqrt-iter guess x) (new-if (good-enough? guess x) guess (sqrt-iter (improve guess x) x)))

What happens when Alyssa attempts to use this to compute square roots? Explain.


Exercise 1.7. The good-enough? test used in computing square roots will not be very effective for finding the square roots of very small numbers. Also, in real computers, arithmetic operations are almost always performed with limited precision. This makes our test inadequate for very large numbers. Explain these statements, with examples showing how the test fails for small and large numbers.

An alternative strategy for implementing good-enough? is to watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess. Design a square-root procedure that uses this kind of end test. Does this work better for small and large numbers?

(define (my-sqrt x) 'your-code-here)

(square (sqrt 0.0009))

(square (my-sqrt 0.0009))

Exercise 1.8. Newton's method for cube roots is based on the fact that if y is an approximation to the cube root of x, then a better approximation is given by the value $$ \frac{x/y^2 + 2y}{3} $$

Use this formula to implement a cube-root procedure analogous to the square-root procedure. (In Section 1.3.4 we will see how to implement Newton’s method in general as an abstraction of these square root and cube-root procedures.)

(define (cube-root x) 'your-code-here)

(cube-root 27)

Procedures as Black-Box Abstractions

Sqrt is our first example of a process defined by a set of mutually defined procedures. Notice that the definition of sqrt-iter is recursive; that is, the procedure is defined in terms of itself. The idea of being able to define a procedure in terms of itself may be disturbing; it may seem unclear how such a “circular” definition could make sense at all, much less specify a well-defined process to be carried out by a computer. This will be addressed more carefully in section 1.2. But first let's consider some other important points illustrated by the sqrt example.

Observe that the problem of computing square roots breaks up naturally into a number of subproblems: how to tell whether a guess is good enough, how to improve a guess, and so on. Each of these tasks is accomplished by a separate procedure. The entire sqrt program can be viewed as a cluster of procedures (shown in figure 1.2) that mirrors the decomposition of the problem into subproblems.

Figure 1.2: Procedural decomposition of the sqrt program.

The importance of this decomposition strategy is not simply that one is dividing the program into parts. After all, we could take any large program and divide it into parts — the first ten lines, the next ten lines, the next ten lines, and so on. Rather, it is crucial that each procedure accomplishes an identifiable task that can be used as a module in defining other procedures. For example, when we define the good-enough? procedure in terms of square, we are able to regard the square procedure as a “black box.” We are not at that moment concerned with how the procedure computes its result, only with the fact that it computes the square. The details of how the square is computed can be suppressed, to be considered at a later time. Indeed, as far as the good-enough? procedure is concerned, square is not quite a procedure but rather an abstraction of a procedure, a so-called procedural abstraction. At this level of abstraction, any procedure that computes the square is equally good.

Thus, considering only the values they return, the following two procedures for squaring a number should be indistinguishable. Each takes a numerical argument and produces the square of that number as the value.

(define (square x) (* x x))

(define (double x) (+ x x)) (define (square x) (exp (double (log x))))

(define (square x) (- 1 (/ (* 2 x) (tan (* 2 (atan x))))))

So a procedure definition should be able to suppress detail. The users of the procedure may not have written the procedure themselves, but may have obtained it from another programmer as a black box. A user should not need to know how the procedure is implemented in order to use it.

Local names

One detail of a procedure's implementation that should not matter to the user of the procedure is the implementer's choice of names for the procedure's formal parameters. Thus, the following procedures should not be distinguishable:

(define (square x) (* x x))

(define (square y) (* y y))

This principle — that the meaning of a procedure should be independent of the parameter names used by its author — seems on the surface to be self-evident, but its consequences are profound. The simplest consequence is that the parameter names of a procedure must be local to the body of the procedure. For example, we used square in the definition of good-enough? in our square-root procedure:

(define (good-enough? guess x) (< (abs (- (square guess) x)) 0.001))

The intention of the author of good-enough? is to determine if the square of the first argument is within a given tolerance of the second argument. We see that the author of good-enough? used the name guess to refer to the first argument and x to refer to the second argument. The argument of square is guess. If the author of square used x (as above) to refer to that argument, we see that the x in good-enough? must be a different x than the one in square. Running the procedure square must not affect the value of x that is used by good-enough?, because that value of x may be needed by good-enough? after square is done computing.

If the parameters were not local to the bodies of their respective procedures, then the parameter x in square could be confused with the parameter x in good-enough?, and the behavior of good-enough? would depend upon which version of square we used. Thus, square would not be the black box we desired.

A formal parameter of a procedure has a very special role in the procedure definition, in that it doesn't matter what name the formal parameter has. Such a name is called a bound variable, and we say that the procedure definition binds its formal parameters. The meaning of a procedure definition is unchanged if a bound variable is consistently renamed throughout the definition. If a variable is not bound, we say that it is free. The set of expressions for which a binding defines a name is called the scope of that name. In a procedure definition, the bound variables declared as the formal parameters of the procedure have the body of the procedure as their scope.
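For instance (a small illustration of our own), consistently renaming the bound variable of square leaves its meaning unchanged, whereas a variable that is not declared as a formal parameter is free and must get its meaning from the enclosing environment. The names circle-area and pi below are ours, and pi is assumed to be defined elsewhere:

(define (square y) (* y y)) ; the same procedure as (define (square x) (* x x))

(define (circle-area r)
  (* pi (square r))) ; r is bound; pi and square are free, so their meanings come from the environment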

In the definition of good-enough? above, guess and x are bound variables but <, -, abs, and square are free. The meaning of good-enough? should be independent of the names we choose for guess and x so long as they are distinct and different from <, -, abs, and square. (If we renamed guess to abs we would have introduced a bug by capturing the variable abs; it would have changed from free to bound.) The meaning of good-enough? is not independent of the names of its free variables, however. It surely depends upon the fact (external to this definition) that the symbol abs names a procedure for computing the absolute value of a number. Good-enough? will compute a different function if we substitute cos for abs in its definition.

Internal definitions and block structure

We have one kind of name isolation available to us so far: The formal parameters of a procedure are local to the body of the procedure. The square-root program illustrates another way in which we would like to control the use of names. The existing program consists of separate procedures:

(define (sqrt x)
  (sqrt-iter 1.0 x))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (improve guess x)
  (average guess (/ x guess)))

The problem with this program is that the only procedure that is important to users of sqrt is sqrt. The other procedures (sqrt-iter, good-enough?, and improve) only clutter up their minds. They may not define any other procedure called good-enough? as part of another program to work together with the square-root program, because sqrt needs it. The problem is especially severe in the construction of large systems by many separate programmers. For example, in the construction of a large library of numerical procedures, many numerical functions are computed as successive approximations and thus might have procedures named good-enough? and improve as auxiliary procedures. We would like to localize the subprocedures, hiding them inside sqrt so that sqrt could coexist with other successive approximations, each having its own private good-enough? procedure. To make this possible, we allow a procedure to have internal definitions that are local to that procedure. For example, in the square-root problem we can write

(define (sqrt x)
  (define (good-enough? guess x)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess x)
    (average guess (/ x guess)))
  (define (sqrt-iter guess x)
    (if (good-enough? guess x)
        guess
        (sqrt-iter (improve guess x) x)))
  (sqrt-iter 1.0 x))

Such nesting of definitions, called block structure, is basically the right solution to the simplest name-packaging problem. But there is a better idea lurking here. In addition to internalizing the definitions of the auxiliary procedures, we can simplify them. Since x is bound in the definition of sqrt, the procedures good-enough?, improve, and sqrt-iter, which are defined internally to sqrt, are in the scope of x. Thus, it is not necessary to pass x explicitly to each of these procedures. Instead, we allow x to be a free variable in the internal definitions, as shown below. Then x gets its value from the argument with which the enclosing procedure sqrt is called. This discipline is called lexical scoping.

(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))

We will use block structure extensively to help us break up large programs into tractable pieces. The idea of block structure originated with the programming language Algol 60. It appears in most advanced programming languages and is an important tool for helping to organize the construction of large programs.


Based on Structure and Interpretation of Computer Programs, a work at http://mitpress.mit.edu/sicp/.

Forgotify | Discover a previously unheard Spotify track

Ruby 2.1: Out-of-Band GC · computer talk by @tmm1


Comments:" Ruby 2.1: Out-of-Band GC · computer talk by @tmm1 "

URL:http://tmm1.net/ruby21-oobgc/


29 Jan 2014

Ruby 2.1's GC is better than ever, but ruby still uses a stop-the-world GC implementation. This means collections triggered during request processing will add latency to your response time. One way to mitigate this is by running GC in-between requests, i.e. "Out-of-Band".

OOBGC is a popular technique, first introduced by Unicorn and later integrated into Passenger. Traditionally, these out-of-band collectors force a GC every N requests. While this works well, it requires careful tuning and can add CPU pressure if unnecessary collections occur too often.

In kiji (twitter's REE fork), @evanweaver introduced GC.preemptive_start as an alternative to the "every N requests" model. This new method could make more intelligent decisions about OOBGC based on the size of the heap and the number of free slots. We've long used a similar trick in our 1.9.3 fork to optimize OOBGC on github.com.

When we upgraded to a patched 2.1.0 in production earlier this month, I translated these techniques into a new OOBGC for RGenGC. Powered by 2.1's new tracepoint GC hooks, it understands both lazy vs immediate sweeping and major vs minor GC in order to make the best decision about when a collection is required.

Using the new OOBGC is simple:

require 'gctools/oobgc'
GC::OOB.run() # run this after every request body is flushed

or if you're using Unicorn:

# in config.ru
require 'gctools/oobgc'
if defined?(Unicorn::HttpRequest)
  use GC::OOB::UnicornMiddleware
end

OOBGC results

With ruby 2.1, our average OOBGC pause time (oobgc.mean) went from 125ms to 50ms thanks to RGenGC. The number of out-of-band collections (oobgc.count) also went down, since the new OOBGC only runs when necessary.

The overall result is much less CPU time (oobgc.sum) spent doing GC work between requests.

GC during requests

After our 2.1 upgrade, we're performing GC during requests 2-3x more often than before (gc.time.count). However, since all major collections can happen preemptively, only minor GCs happen during requests, making the average GC pause only 25ms (gc.time.mean).

The overall result is reduced in-request GC overhead (gc.time.sum), even though GC happens more often.

Note: Even with the best OOBGC, collections during requests are inevitable (especially on large requests with lots of allocations). The GC's job during these requests is to control memory growth, so I do not recommend disabling ruby's GC during requests.

One Div Zero: A Brief, Incomplete, and Mostly Wrong History of Programming Languages


Comments:"One Div Zero: A Brief, Incomplete, and Mostly Wrong History of Programming Languages"

URL:http://james-iry.blogspot.mx/2009/05/brief-incomplete-and-mostly-wrong.html


1801 - Joseph Marie Jacquard uses punch cards to instruct a loom to weave "hello, world" into a tapestry. Redditers of the time are not impressed due to the lack of tail call recursion, concurrency, or proper capitalization.

1842 - Ada Lovelace writes the first program. She is hampered in her efforts by the minor inconvenience that she doesn't have any actual computers to run her code. Enterprise architects will later relearn her techniques in order to program in UML.

1936 - Alan Turing invents every programming language that will ever be but is shanghaied by British Intelligence to be 007 before he can patent them.

1936 - Alonzo Church also invents every language that will ever be but does it better. His lambda calculus is ignored because it is insufficiently C-like. This criticism occurs in spite of the fact that C has not yet been invented.

1940s - Various "computers" are "programmed" using direct wiring and switches. Engineers do this in order to avoid the tabs vs spaces debate.

1957 - John Backus and IBM create FORTRAN. There's nothing funny about IBM or FORTRAN. It is a syntax error to write FORTRAN while not wearing a blue tie.

1958 - John McCarthy and Paul Graham invent LISP. Due to high costs caused by a post-war depletion of the strategic parentheses reserve LISP never becomes popular[1]. In spite of its lack of popularity, LISP (now "Lisp" or sometimes "Arc") remains an influential language in "key algorithmic techniques such as recursion and condescension"[2].

1959 - After losing a bet with L. Ron Hubbard, Grace Hopper and several other sadists invent the Capitalization Of Boilerplate Oriented Language (COBOL). Years later, in a misguided and sexist retaliation against Adm. Hopper's COBOL work, Ruby conferences frequently feature misogynistic material.

1964 - John Kemeny and Thomas Kurtz create BASIC, an unstructured programming language for non-computer scientists.

1965 - Kemeny and Kurtz go to 1964.

1970 - Guy Steele and Gerald Sussman create Scheme. Their work leads to a series of "Lambda the Ultimate" papers culminating in "Lambda the Ultimate Kitchen Utensil." This paper becomes the basis for a long running, but ultimately unsuccessful run of late night infomercials. Lambdas are relegated to relative obscurity until Java makes them popular by not having them.

1970 - Niklaus Wirth creates Pascal, a procedural language. Critics immediately denounce Pascal because it uses "x := x + y" syntax instead of the more familiar C-like "x = x + y". This criticism happens in spite of the fact that C has not yet been invented.

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix.

1972 - Alain Colmerauer designs the logic language Prolog. His goal is to create a language with the intelligence of a two year old. He proves he has reached his goal by showing a Prolog session that says "No." to every query.

1973 - Robin Milner creates ML, a language based on the M&M type theory. ML begets SML which has a formally specified semantics. When asked for a formal semantics of the formal semantics Milner's head explodes. Other well known languages in the ML family include OCaml, F#, and Visual Basic.

1980 - Alan Kay creates Smalltalk and invents the term "object oriented." When asked what that means he replies, "Smalltalk programs are just objects." When asked what objects are made of he replies, "objects." When asked again he says "look, it's all objects all the way down. Until you reach turtles."

1983 - In honor of Ada Lovelace's ability to create programs that never ran, Jean Ichbiah and the US Department of Defense create the Ada programming language. In spite of the lack of evidence that any significant Ada program is ever completed historians believe Ada to be a successful public works project that keeps several thousand roving defense contractors out of gangs.

1983 - Bjarne Stroustrup bolts everything he's ever heard of onto C to create C++. The resulting language is so complex that programs must be sent to the future to be compiled by the Skynet artificial intelligence. Build times suffer. Skynet's motives for performing the service remain unclear, but spokespeople from the future say "there is nothing to be concerned about, baby," in an Austrian accented monotone. There is some speculation that Skynet is nothing more than a pretentious buffer overrun.

1986 - Brad Cox and Tom Love create Objective-C, announcing "this language has all the memory safety of C combined with all the blazing speed of Smalltalk." Modern historians suspect the two were dyslexic.

1987 - Larry Wall falls asleep and hits Larry Wall's forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall's monitor isn't random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born.

1990 - A committee formed by Simon Peyton-Jones, Paul Hudak, Philip Wadler, Ashton Kutcher, and People for the Ethical Treatment of Animals creates Haskell, a pure, non-strict, functional language. Haskell gets some resistance due to the complexity of using monads to control side effects. Wadler tries to appease critics by explaining that "a monad is a monoid in the category of endofunctors, what's the problem?"

1991 - Dutch programmer Guido van Rossum travels to Argentina for a mysterious operation. He returns with a large cranial scar, invents Python, is declared Dictator for Life by legions of followers, and announces to the world that "There Is Only One Way to Do It." Poland becomes nervous.

1995 - At a neighborhood Italian restaurant Rasmus Lerdorf realizes that his plate of spaghetti is an excellent model for understanding the World Wide Web and that web applications should mimic their medium. On the back of his napkin he designs Programmable Hyperlinked Pasta (PHP). PHP documentation remains on that napkin to this day.

1995 - Yukihiro "Mad Matz" Matsumoto creates Ruby to avert some vaguely unspecified apocalypse that will leave Australia a desert run by mohawked warriors and Tina Turner. The language is later renamed Ruby on Rails by its real inventor, David Heinemeier Hansson. [The bit about Matsumoto inventing a language called Ruby never happened and better be removed in the next revision of this article - DHH].

1995 - Brendan Eich reads up on every mistake ever made in designing a programming language, invents a few more, and creates LiveScript. Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. Later still, in an effort to cash in on the popularity of skin diseases the language is renamed ECMAScript.

1996 - James Gosling invents Java. Java is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Sun loudly heralds Java's novelty.

2001 - Anders Hejlsberg invents C#. C# is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Microsoft loudly heralds C#'s novelty.

2003 - A drunken Martin Odersky sees a Reese's Peanut Butter Cup ad featuring somebody's peanut butter getting on somebody else's chocolate and has an idea. He creates Scala, a language that unifies constructs from both object oriented and functional languages. This pisses off both groups and each promptly declares jihad.

Footnotes

[1] Fortunately for computer science the supply of curly braces and angle brackets remains high.
[2] Catch as catch can - Verity Stob

Edits

  • 5/8/09 added BASIC, 1964
  • 5/8/09 Moved curly brace and angle bracket comment to footnotes
  • 5/8/09 corrected several punctuation and typographical errors
  • 5/8/09 removed bit about Odersky in hiding
  • 5/8/09 added Objective-C, 1986
  • 5/8/09 added Church and Turing
  • 4/9/10 added Ada (1983) and PHP(1995)

South Korea's online trend: Paying to watch a pretty girl eat - CNN.com


Comments:"South Korea's online trend: Paying to watch a pretty girl eat - CNN.com"

URL:http://edition.cnn.com/2014/01/29/world/asia/korea-eating-room/index.html?hpt=hp_c3


STORY HIGHLIGHTS

  • The Diva is an online "eating room" where thousands watch a woman devour food
  • Park Seo-Yeon makes more than $9,000 a month from online eating
  • Reasons for the phenomenon include a rise in one-person households
  • The interactive feature is appealing to Korean lonely hearts who hate eating alone

(CNN) -- In increasingly virtual South Korea, the latest bizarre fad is watching someone eat online.

Called 'muk-bang' in Korean, which translates to 'eating rooms,' online channels live-stream people eating enormous servings of food while chatting away to those who are watching.

The queen of this particular phenomenon is the Diva, a waifish, pretty 33-year-old woman apparently blessed with the stomach capacity of several elephants and the metabolism of a hummingbird.

Every evening around 8 p.m., several thousand viewers tune in to watch The Diva -- real name Park Seo-Yeon -- begin inhaling enough food for several college linebackers.

She easily polishes off four large pizzas or three kilograms (6 lb) of beef in one sitting, albeit over the span of several hours.

After she eats, she spends another two or three hours just talking to her fans, who communicate with her via a chat room which accompanies her live-stream channel.

For Park, online eating is not just a niche hobby but a significant source of income — she makes up to ₩10 million ($9,300) a month from her broadcasts alone.

Her costs are also high, however. She says she spends an average of $3,000 per month purchasing food for her show, which she broadcasts for about four to six hours per night.

Confessions of a Diva

Thanks to the live chat room that accompanies her channel, feedback is instantaneous and the show interactive.

Comments flood in and she reads from them in real time.

"My fans tell me that they really love watching me eat because I do so with so much gusto and make everything look so delicious," says Park.

"A lot of my viewers are on diets and they say they live vicariously through me, or they are hospital patients who only have access to hospital food so they also watch my broadcasts to see me eat."

"Fans who are on a diet say that they like eating vicariously through me," says Park.

While it would seem that her metabolism would make her public enemy number one, some of the Diva's biggest fans are women, and indeed her channel is more popular with women than with men, with a 60-40 ratio.

MORE: All by my selfie! Blogger shows how to take travel photos with an imaginary girlfriend

"One of the best comments I ever received from a viewer who said that she had gotten over her anorexia by watching me eat," says Park. "That really meant a lot to me."

She cooks about a third of the food that she eats, and the rest she has delivered. Offers of sponsorship have come in thick and fast, but she says she tests out sponsored food first and only features what she truly likes and wants to share.

Her fans show their appreciation by sending her money, in the form of virtual tokens that can be cashed in.

Afreeca TV, the publicly-listed social networking site that hosts her channel, allows users to buy and send virtual "star balloons" which can be monetized after the site takes a 30-40% commission.

Any payment by viewers is purely voluntary, as all channels can be viewed for free.

The service is currently limited to South Korea, although the company has plans to expand it to other countries.

Eating rooms are a separate category on Afreeca TV, South Korea's online streaming platform.

Cultural background

The Diva's success and the Korean eating room trend can be attributed to a number of specific cultural factors.

"We think it's because of three big reasons — the rise of one-person households in Korea, their ensuing loneliness and finally the huge trend of 'well-being culture' and excessive dieting in Korean society right now," says Afreeca TV public relations coordinator Serim An.

While watching food porn on a diet may sound like masochistic torture, apparently lonely, hungry Koreans prefer to eat vicariously.

Another thing, Koreans hate eating alone.

"For Koreans, eating is an extremely social, communal activity, which is why even the Korean word 'family' means 'those who eat together,'" says Professor Sung-hee Park of Ewha University's Division of Media Studies.

She believes it's the interactive aspect of eating rooms that's so appealing to these lonely hearts.

MORE: 10 things South Korea does better than anywhere else

Loneliness was also the catalyst for the Diva.

"So many of my friends were getting married and I was living alone and lonely and bored," she says.

"When I first started my channel two years ago, I was showing a variety of content, from dance to outdoor activities, but it was my love of eating that really began drawing a response from fans," says Park.

The setting

And then there's the platform to make the phenomenon possible in the first place.

It's difficult to imagine the unique live-streaming online platform of Afreeca TV working as well on a daily basis anywhere other than South Korea's extremely wired culture.

With 78.5% of the entire population on smartphones and 7 million people riding the Seoul subway network every day, Afreeca TV is becoming particularly popular with Korean commuters, given that the Seoul subway has cellphone reception and Wi-Fi, and South Korean smartphones have TV streaming capabilities.

"Our mobile users surpassed our PC users a while ago, and most of our viewers watch our content while they are on the move," says An.

MORE: Super cars and avatars: Seoul's mind-blowing future technology museum

The majority of Afreeca TV's content is actually online gaming, where individual broadcasters called 'BJs' (short for Broadcast Jockeys), stream their gaming live for others to learn from or comment on. Anyone can live-stream from any device as long as they log in.

"Eating rooms" began popping up around 2009, says An, when users began to imitate celebrities' food shows by commenting as they were eating while broadcasting.

Now, of the platform's 5,000 channels that are streaming at any given point in time, 5% of those are eating rooms. Afreeca TV has a daily average viewership of 3 million.

Spinning off

The Diva says her success was a huge surprise, but there are still many who don't understand the concept and are liberal with their criticism.

"I get some really awful commenters who make me reexamine 'why am I doing this again?' but at the end of the day the positive feedback overwhelmingly outweighs the bad, so I am happy to continue."

While Park maintained her real estate consulting day job over the past two years, she quit last week to focus more on her eating room and potential spinoff businesses, including a clothing company.

When asked if she has any time for a private life, considering she broadcasts more than six hours a day every day including weekends, the answer is that she doesn't need one.

"This is a lot more fun," she says.

MORE: Interview: StarCraft II's biggest champ on what makes Korea a pro-gamer's paradise

Mint Adds Support for Bitcoin


Comments:"Mint Adds Support for Bitcoin"

URL:http://thenextweb.com/insider/2014/01/30/popular-financial-planning-service-mint-now-lets-users-keep-track-of-their-bitcoins/#!tWS01


Popular US financial planning company Mint has given Bitcoin a little more legitimacy after it partnered with Coinbase to add support for the virtual currency to its service.

Now Mint’s 10 million-plus users can keep tabs on their Bitcoin stash, alongside their existing credit cards, bank accounts and investments. The service converts holdings into US dollars to keep things simple amid the changing price of Bitcoin.

Mint’s Vince Maniago told Venture Beat that Bitcoin’s mainstream appeal has developed to the point that “we felt like it was something we couldn’t ignore anymore.” That’s a line that we’re starting to hear regularly, with the likes of Zynga, Overstock and TigerDirect all adopting the cryptocurrency in the past month.

➤ Mint integrates with Coinbase, so you can track bitcoin with the rest of your finances [Venture Beat]

Image via Antana / Flickr

The Older Mind May Just Be a Fuller Mind


Observation of Dirac monopoles in a synthetic magnetic field

Stolen Camera Finder - find your photos, find your camera

A 33-Year-Old NPR Story Convinced Me Google Glass Will Stop Looking So Dorky Soon - On The Media


Comments:" A 33-Year-Old NPR Story Convinced Me Google Glass Will Stop Looking So Dorky Soon - On The Media"

URL:http://www.onthemedia.org/story/google-glass-dorky/


Google showed off a new version of Google Glass yesterday.

Glass now works with actual prescription glasses frames, and in general it's been redesigned to look less clunky and dorky. I think that even with the redesign, Glass looks pretty silly. In fact, up until this morning, I couldn't imagine that Glass would ever not seem too silly for mainstream users.  And then I caught this quote from a Verge interview with Google Glass’s lead designer, Isabelle Olsson.

Olsson is pretty relaxed when the weirdness of Glass comes up — it's obviously the most common question she has to face when talking about the product. "It's interesting to see the parallels with headphones," she says. "The fact that people walk around with these huge headphones is kind of crazy, in a way. But now you don't think about it as technology, you think about it as something that delivers music to you."

This comparison felt fishy to me. Were people really that skeptical of Walkmen? After all, headphones were around for a long time before Walkmen were invented. A century had passed where people had gotten used to headphones, worn them at home. All Walkmen did was to take a familiar invention and make it portable.

But then I checked, and it turns out Olsson is right. Here’s the proof: a NPR story originally broadcast in 1981, when Walkmen were still pretty new. It’s mostly a series of man-on-the-street interviews, and people express a disgust for the newfangled invention that's very, very familiar:

    “They're obnoxious.”
    “It looks stupid to me. Some people approve of it, you know. It's fine if - privacy your home, you know?”
    “Yeah, people do kind of look funny and they kind of look, like, you know, pretty smug when I'm wearing them and everything.”
    “You know, it's nice when you're walking around to hear other people talking and see what they're doing. And you're kind of putting blinders on.”
    “It causes people to isolate themselves from their experience, the contact with nature - sort of, a neo-existential prelude to doom.”

The line that chimed hardest for me actually came from Steve Profitt, the reporter:

PROFITT: You know, next thing they should do is have a little movies, you know, little sunglass movies so you don't have to look, either.

So, yeah. I guess a lot of inventions seem dorky, smug, and alienating. And then, a few years later, they become so normal that it’s hard to imagine life without them.

California regulator seeks to shut down 'learn to code' bootcamps | VentureBeat | Education | by Christina Farr


Comments:"California regulator seeks to shut down 'learn to code' bootcamps | VentureBeat | Education | by Christina Farr"

URL:http://venturebeat.com/2014/01/29/california-regulator-seeks-to-shut-down-learn-to-code-bootcamps/


A handful of California coding bootcamps are fighting for survival after receiving a stern warning from regulators.

Unless they comply, these organizations face imminent closure and a hefty $50,000 fine. A BPPE spokesperson said these organizations have two weeks to start coming into compliance.

In mid-January, the Bureau for Private Postsecondary Education (BPPE) sent cease and desist letters to Hackbright Academy, Hack Reactor, App Academy, Zipfian Academy, and others. General Assembly confirmed that it began working with BPPE several months ago in order to achieve compliance.

BPPE, a unit in the California Department of Consumer Affairs, is arguing that the bootcamps fall under its jurisdiction and are subject to regulation. BPPE is charged with licensing and regulating postsecondary education in California, including academic as well as vocational training programs. It was created in 2010 by the California Private Postsecondary Education Act of 2009, a bill aimed at providing greater oversight of the more than 1,500 postsecondary schools operating in the state.

These bootcamps have not yet been approved by the BPPE and are therefore being classified as unlicensed postsecondary educational institutions that must seek compliance or be forcibly shut down.

“Our primary goal is not to collect a fine. It is to drive them to comply with the law,” said Russ Heimerich, a spokesperson for BPPE. Heimerich is confident that these companies would lose in court if they attempt to fight BPPE.

Heimerich stressed that these bootcamps merely need to show that they are making steps toward compliance: “As long as they are making a good effort to come into compliance with the law, they fall down low on our triage of problem children. We will work with them to get them licensed and focus on more urgent matters,” Heimerich said.

Jake Schwartz, chief executive of General Assembly, recommended that companies work closely with BPPE. “We see government as a stakeholder — along with our students,” he said in a phone interview.

The coding bootcamps met Wednesday afternoon to plot a course of action.

Anthony Phillips, cofounder of Hack Reactor, said the founders of these bootcamps are not averse to oversight and regulation in principle. ”I would like to be part of a group that creates those standards,” he said in an interview at the Hack Reactor offices in downtown San Francisco. “However, what that looks like and what makes sense for our schools is not necessarily going to fit in the current regulations.”

Phillips’ cofounder at Hack Reactor, Shawn Drost, added: “We’re taking this seriously, but our legal and policy advisors are confident in a positive and rather conventional outcome.”

In the learn-to-code movement, online schools and in-person courses are springing up to meet a huge need for more developers across a wide range of industries. For a price, these schools offer training in digital skills, such as software development, data science, and user experience design.

The programs typically last 10 to 12 weeks. Potential recruits are often told that they have a shot at a job or internship at a competitive tech company like Facebook or Google. Tuition costs vary widely. At Hackbright Academy, it’s $15,000 for a 10 week program. Full scholarships are available, and students who land a job at a company in the Hackbright network can request a partial refund. At Hack Reactor, where tuition costs over $17,000, 99 percent of students are offered a job at companies like Adobe and Google. According to Phillips, the average salary for a computer scientist at these firms is over six figures.

Many of these boot camps have a strong social purpose: They specialize in bringing diversity to the tech sector and in helping underemployed or unemployed Californians find jobs. Hackbright, for instance, specializes in teaching women to code so they can compete for lucrative computer engineering jobs.

These bootcamps claim to be doing something innovative for which BPPE’s regulation is not applicable. But BPPE’s point of view is different: It is treating them in a similar manner to any other trade school and online education program.

The bootcamps fear that they will go bankrupt as regulatory processes can take up to 18 months.

This isn’t the first time that tech startups have clashed with regulators. The battle with BPPE is reminiscent of the FDA’s crackdown on 23andMe, which seems to have stemmed from the genetic-testing company’s unwillingness to submit its DNA tests for FDA approval. Similarly, Lyft, Uber, and Sidecar have been fighting with transit authorities around the country, with cities arguing that these companies should be regulated as taxi companies.

Startups argue that regulators are holding back innovation; regulators believe that consumer safety and fraud prevention is at stake.

Here’s a formal statement to the press from App Academy, Dev Bootcamp, Hackbright Academy, Hack Reactor, and Zipfian Academy:

The Bureau for Private Postsecondary Education (BPPE), a California regulatory agency under the Department of Consumer Affairs, has contacted us regarding our status under their regulations.  We welcome appropriate oversight in our fledgling industry, and are in close discussions with the BPPE to define our classification and take appropriate next steps. The assembled companies are App Academy, Dev Bootcamp, Hackbright Academy, Hack Reactor, and Zipfian Academy.  Since 2011, our workforce development programs have been operating in California, offering hands-on training for coding novices.  Thousands of individuals have participated, often finding high-paying employment in the field, and the programs themselves employ hundreds of individuals.  We are a valuable, thriving, and well-intentioned sector of California’s economy and workforce development, and our programs offer high-demand skills training to unemployed and underemployed Californians.

Blue Bottle Coffee Gets Caffeinated With $25.75 Million in Funding From Internet Stars and Morgan Stanley | Re/code


Comments:"Blue Bottle Coffee Gets Caffeinated With $25.75 Million in Funding From Internet Stars and Morgan Stanley | Re/code"

URL:http://recode.net/2014/01/29/blue-bottle-coffee-gets-caffeinated-with-25-75-million-in-funding-from-internet-stars-and-morgan-stanley/


Blue Bottle Coffee, the specialty roaster that has become a favorite of hipster techies in the San Francisco area, has raised $25.75 million from a range of high-profile Internet players and also Morgan Stanley Investment Management.

The banking giant is not making the investment directly, but “on behalf of certain mutual funds and other investment vehicles for which it acts as Investment Adviser.”

Blue Bottle said it will use the funding to “expand retail operations, improve internal training programs and further develop its quality control department.”

This is Blue Bottle’s second round of funding, having previously raised just under $20 million in late 2012 from a range of high-profile entrepreneurs and VCs. Those include Google Ventures, Index Ventures and True Ventures, as well as Instagram’s Kevin Systrom, Twitter and Medium co-founder Evan Williams, investor Chris Sacca and skateboarding star Tony Hawk.

Photo: Clay McLachlan/Claypix.com for Blue Bottle. Pictured: Blue Bottle CEO and founder James Freeman.

Why are the digerati so attracted to fancy coffee? Founded in nearby Oakland in 2002 by James Freeman, the micro-roaster brought its focus on Japanese-style siphon coffee preparation to the fast-growing retail genre with great success.

From one store, it now has five cafes in San Francisco and four in the New York City area, which also offer food, as well as a mobile farmer’s cart and a seasonal kiosk. Lines are typically out the door (see photo above).

Blue Bottle said it had signed more leases in Oakland, Manhattan, Brooklyn, Palo Alto and Los Angeles that will open over the next twelve months.

In addition to retail stores, Blue Bottle has a brisk online business. Only about 25 percent of its sales are in the wholesaling of its beans, an arena that it is wading into slowly and carefully.

All this growth largely came after Freeman, who is CEO of the privately-held Blue Bottle, partnered with Bryan Meehan, an Irish entrepreneur with deep ties in the tech community. Meehan is the company’s executive chairman.

In an interview, Freeman talked a lot about his goal to keep delivering high levels of customer service and quality, especially as Blue Bottle grows.

“We want to build our company, but also have the artisanship of it remain intact,” he said. “There is a growing interest and enthusiasm for specialty coffee, so we also want people to feel like it is also accessible.”

In that, Freeman declined to run down mega-coffee chains like Starbucks, despite Blue Bottle’s tonier image. “We all owe them a debt of gratitude, because — were it not for them — the concept of coffee as more than a side thing never would have happened,” he said.

That said, he also noted, keeping up a level of high quality will be key for Blue Bottle going forward. “We want people to taste the investment we are making,” said Freeman.

Blue Bottle did not provide current financials, but Freeman noted that same-store sales — that is, annual changes at stores already operating and a key indicator of success in retail — were robust.

“I can’t believe I’m talking about same-store sales,” he said, laughing. “And I also can’t believe we are here now, after starting out in the back of one store.”

(For more, here is a blog post on the whole shebang by True Ventures’ Tony Conrad— who is, IMHO, the coffee hipster poster boy — in which he notes loftily: “What we saw and why we got involved is that James and his team are part of a handful of people who are founding a movement around coffee … We believe Blue Bottle Coffee is at the forefront of a ‘consumer movement’ or mega-trend in which consumers are moving to higher quality, artisanal micro-roasters of coffee, where quality, attention to detail, beauty and a distinctive experience are being sought over more mainstream alternatives.” Viva la siphon, apparently!)
