Channel: Hacker News 50

Why is reading lines from stdin much slower in C++ than Python? - Stack Overflow

Comments:"Why is reading lines from stdin much slower in C++ than Python? - Stack Overflow"

URL:http://stackoverflow.com/questions/9371238/why-is-reading-lines-from-stdin-much-slower-in-c-than-python


I wanted to compare reading lines of string input from stdin using Python and C++ and was shocked to see my C++ code run an order of magnitude slower than the equivalent Python code. Since my C++ is rusty and I'm not yet an expert Pythonista, please tell me if I'm doing something wrong or if I'm misunderstanding something.

(tl;dr version: add the statement cin.sync_with_stdio(false), or just use fgets instead.)

C++ code:

#include <iostream>
#include <time.h>
using namespace std;

int main() {
    string input_line;
    long line_count = 0;
    time_t start = time(NULL);
    int sec;
    int lps;

    while (cin) {
        getline(cin, input_line);
        if (!cin.eof())
            line_count++;
    }

    sec = (int) time(NULL) - start;
    cerr << "Saw " << line_count << " lines in " << sec << " seconds.";
    if (sec > 0) {
        lps = line_count / sec;
        cerr << " Crunch speed: " << lps << endl;
    } else
        cerr << endl;
    return 0;
}
// Compiled with:
// g++ -O3 -o readline_test_cpp foo.cpp

Python Equivalent:

#!/usr/bin/env python
import time
import sys

count = 0
start = time.time()
for line in sys.stdin:
    count += 1
delta_sec = int(time.time() - start)
if delta_sec > 0:
    lines_per_sec = int(round(count / delta_sec))
    print("Read {0:n} lines in {1:n} seconds. LPS: {2:n}".format(count, delta_sec, lines_per_sec))

Here are my results:

$ cat test_lines | ./readline_test_cpp 
Saw 5570000 lines in 9 seconds. Crunch speed: 618889
$ cat test_lines | ./readline_test.py 
Read 5570000 lines in 1 seconds. LPS: 5570000

Thanks in advance!

Edit: I should note that I tried this both under OS X (10.6.8) and Linux 2.6.32 (RHEL 6.2). The former is a MacBook Pro, the latter is a very beefy server, not that this is too pertinent.

Edit 2:(Removed this edit, as no longer applicable)

$ for i in {1..5}; do echo "Test run $i at `date`"; echo -n "CPP:"; cat test_lines | ./readline_test_cpp ; echo -n "Python:"; cat test_lines | ./readline_test.py ; done
Test run 1 at Mon Feb 20 21:29:28 EST 2012
CPP:Saw 5570001 lines in 9 seconds. Crunch speed: 618889
Python:Read 5,570,000 lines in 1 seconds. LPS: 5,570,000
Test run 2 at Mon Feb 20 21:29:39 EST 2012
CPP:Saw 5570001 lines in 9 seconds. Crunch speed: 618889
Python:Read 5,570,000 lines in 1 seconds. LPS: 5,570,000
Test run 3 at Mon Feb 20 21:29:50 EST 2012
CPP:Saw 5570001 lines in 9 seconds. Crunch speed: 618889
Python:Read 5,570,000 lines in 1 seconds. LPS: 5,570,000
Test run 4 at Mon Feb 20 21:30:01 EST 2012
CPP:Saw 5570001 lines in 9 seconds. Crunch speed: 618889
Python:Read 5,570,000 lines in 1 seconds. LPS: 5,570,000
Test run 5 at Mon Feb 20 21:30:11 EST 2012
CPP:Saw 5570001 lines in 10 seconds. Crunch speed: 557000
Python:Read 5,570,000 lines in 1 seconds. LPS: 5,570,000

Edit 3:

Okay, I tried J.N.'s suggestion of having Python store the line read, but it made no difference to Python's speed.

I also tried J.N.'s suggestion of using scanf into a char array instead of getline into a std::string. Bingo! This resulted in equivalent performance for both Python and C++. (3,333,333 LPS with my input data, which, by the way, are just short lines of three fields each, usually about 20 chars wide, though sometimes more.)

Code:

char input_a[512];
char input_b[32];
char input_c[512];
// Field widths (one less than each array size) guard against buffer overflow.
while (scanf("%511s %31s %511s\n", input_a, input_b, input_c) != EOF) {
    line_count++;
}

Speed:

$ cat test_lines | ./readline_test_cpp2 
Saw 10000000 lines in 3 seconds. Crunch speed: 3333333
$ cat test_lines | ./readline_test2.py 
Read 10000000 lines in 3 seconds. LPS: 3333333

(Yes, I ran it several times.) So, I guess I will now use scanf instead of getline. But, I'm still curious if people think this performance hit from std::string/getline is typical and reasonable.

Edit 4 (was: Final Edit / Solution):

Adding: cin.sync_with_stdio(false);

Immediately above my original while loop results in code that runs faster than Python.
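
For concreteness, here is a minimal sketch of the whole loop with that one line added (the cin.tie(NULL) line is my own optional addition, not part of the fix; it stops cin from flushing cout before each read and mostly matters for interactive programs):

#include <iostream>
#include <string>
using namespace std;

int main() {
    // Stop C++ iostreams from synchronizing with C stdio on every operation.
    cin.sync_with_stdio(false);
    cin.tie(NULL); // optional extra, not from the original post

    string input_line;
    long line_count = 0;
    while (getline(cin, input_line))
        line_count++;
    cerr << "Saw " << line_count << " lines." << endl;
    return 0;
}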

New performance comparison (this is on my 2011 MacBook Pro), using the original code, the original with the sync disabled, and the original Python, respectively, on a file with 20M lines of text. Yes, I ran it several times to eliminate any disk-caching confound.

$ /usr/bin/time cat test_lines_double | ./readline_test_cpp
 33.30 real 0.04 user 0.74 sys
Saw 20000001 lines in 33 seconds. Crunch speed: 606060
$ /usr/bin/time cat test_lines_double | ./readline_test_cpp1b
 3.79 real 0.01 user 0.50 sys
Saw 20000000 lines in 4 seconds. Crunch speed: 5000000
$ /usr/bin/time cat test_lines_double | ./readline_test.py 
 6.88 real 0.01 user 0.38 sys
Read 20000000 lines in 6 seconds. LPS: 3333333

Thanks to @Vaughn Cato for his answer! Any elaboration people can make or good references people can point to as to why this sync happens, what it means, when it's useful, and when it's okay to disable would be greatly appreciated by posterity. :-)

Edit 5 / Better Solution:

As suggested by Gandalf The Gray below, gets is even faster than scanf or the unsynchronized cin approach. I also learned that scanf and gets are both UNSAFE and should NOT BE USED due to the potential for buffer overflows. So, I wrote this iteration using fgets, the safer alternative to gets. Here are the pertinent lines for my fellow noobs:

#define MAX_LINE 4096 /* assumed value; set the buffer size to suit your input */
char input_line[MAX_LINE];
char *result;
//<snip>
while ((result = fgets(input_line, MAX_LINE, stdin)) != NULL)
    line_count++;
if (ferror(stdin))
    perror("Error reading stdin.");

Now, here are the results using an even larger file (100M lines; ~3.4GB) on a fast server with a very fast disk, comparing the Python, unsynced-cin, and fgets approaches, as well as the wc utility. [The scanf version segfaulted and I don't feel like troubleshooting it.]

$ /usr/bin/time cat temp_big_file | readline_test.py 
0.03user 2.04system 0:28.06elapsed 7%CPU (0avgtext+0avgdata 2464maxresident)k
0inputs+0outputs (0major+182minor)pagefaults 0swaps
Read 100000000 lines in 28 seconds. LPS: 3571428
$ /usr/bin/time cat temp_big_file | readline_test_unsync_cin 
0.03user 1.64system 0:08.10elapsed 20%CPU (0avgtext+0avgdata 2464maxresident)k
0inputs+0outputs (0major+182minor)pagefaults 0swaps
Saw 100000000 lines in 8 seconds. Crunch speed: 12500000
$ /usr/bin/time cat temp_big_file | readline_test_fgets 
0.00user 0.93system 0:07.01elapsed 13%CPU (0avgtext+0avgdata 2448maxresident)k
0inputs+0outputs (0major+181minor)pagefaults 0swaps
Saw 100000000 lines in 7 seconds. Crunch speed: 14285714
$ /usr/bin/time cat temp_big_file | wc -l
0.01user 1.34system 0:01.83elapsed 74%CPU (0avgtext+0avgdata 2464maxresident)k
0inputs+0outputs (0major+182minor)pagefaults 0swaps
100000000
Recap (lines per second):
python:         3,571,428
cin (no sync): 12,500,000
fgets:         14,285,714
wc:            54,644,808

As you can see, fgets is better but still pretty far from wc performance; I'm pretty sure this is because wc examines each character without any memory copying. I suspect that, at this point, other parts of the code will become the bottleneck, so I don't think optimizing to that level would be worthwhile, even if it were possible (since, after all, I actually need to store the read lines in memory).

Also note a small tradeoff between using a char* buffer with fgets and reading into a std::string with unsynced cin: the latter can read lines of any length, while the former requires limiting input to some finite buffer size. In practice, this is probably a non-issue for reading most line-based input files, as the buffer can be set to a value large enough that valid input never exceeds it.
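
If that tradeoff ever matters, here is one way it might be detected; this is a sketch under my own assumptions (the buffer size and names are mine, not from the thread). fgets stops at the buffer limit, so a chunk that contains no newline means the line was longer than the buffer:

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 4096 /* assumed buffer size */

int main(void) {
    char buf[BUF_SIZE];
    long line_count = 0;
    while (fgets(buf, BUF_SIZE, stdin) != NULL) {
        /* Count a line only when we see its newline (or the file ends
           without one); a chunk lacking '\n' is a partial, overlong line. */
        if (strchr(buf, '\n') != NULL || feof(stdin))
            line_count++;
    }
    printf("Saw %ld lines.\n", line_count);
    return 0;
}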

This has been educational. Thanks to all for your comments and suggestions.

Edit 6:

As suggested by J.F. Sebastian in the comments below, the GNU wc utility uses plain C read() (within the safe-read.c wrapper) to read chunks of 16k bytes at a time and count newlines. Here's a Python equivalent based on J.F.'s code (showing just the relevant snippet that replaces the Python for loop):

import sys
from functools import partial  # needed for partial(sys.stdin.read, BUFFER_SIZE)

BUFFER_SIZE = 16384
count = sum(chunk.count('\n') for chunk in iter(partial(sys.stdin.read, BUFFER_SIZE), ''))

The performance of this version is quite fast (though still a bit slower than the raw C wc utility, of course):

$ /usr/bin/time cat temp_big_file | readline_test3.py 
0.01user 1.16system 0:04.74elapsed 24%CPU (0avgtext+0avgdata 2448maxresident)k
0inputs+0outputs (0major+181minor)pagefaults 0swaps
Read 100000000 lines in 4.7275 seconds. LPS: 21152829

Again, it's a bit silly to compare C++ fgets/cin and the first Python code on the one hand with wc -l and this last Python snippet on the other, as the latter two don't actually store the read lines but merely count newlines. Still, it's interesting to explore all the different implementations and think about the performance implications. Thanks again!

Edit 7: Tiny benchmark addendum and recap

(Hello HN readers!)

For completeness, I thought I'd update the read speed for the same file on the same box with the original (synced) C++ code. Again, this is for a 100M line file on a fast disk. Here's the complete table now:

Implementation   Lines per second
cin (default)         819,672
python              3,571,428
cin (no sync)      12,500,000
fgets              14,285,714
wc                 54,644,808

Also, see my follow-up question about splitting lines in C++ vs Python... a similar speed story, where the naive approach is slower in C++!

Edit: for clarity, removed tiny bug in original code that wasn't related to the question.


Google to Acquire Nest – Investor Relations – Google

Comments:" Google to Acquire Nest – Investor Relations – Google "

URL:http://investor.google.com/releases/2014/0113.html


Google to Acquire Nest

MOUNTAIN VIEW, CA – JANUARY 13, 2014— Google Inc. (NASDAQ: GOOG) announced today that it has entered into an agreement to buy Nest Labs, Inc. for $3.2 billion in cash.

Nest’s mission is to reinvent unloved but important devices in the home such as thermostats and smoke alarms. Since its launch in 2011, the Nest Learning Thermostat has been a consistent best seller--and the recently launched Protect (Smoke + CO Alarm) has had rave reviews.

Larry Page, CEO of Google, said: “Nest’s founders, Tony Fadell and Matt Rogers, have built a tremendous team that we are excited to welcome into the Google family. They’re already delivering amazing products you can buy right now--thermostats that save energy and smoke/CO alarms that can help keep your family safe. We are excited to bring great experiences to more homes in more countries and fulfill their dreams!”

Tony Fadell, CEO of Nest, said: “We’re thrilled to join Google. With their support, Nest will be even better placed to build simple, thoughtful devices that make life easier at home, and that have a positive impact on the world.”

Nest will continue to operate under the leadership of Tony Fadell and with its own distinct brand identity. The transaction is subject to customary closing conditions, including the receipt of regulatory approvals in the US. It is expected to close in the next few months.

About Google Inc.

Google is a global technology leader focused on improving the ways people connect with information. Google’s innovations in web search and advertising have made its website a top internet property and its brand one of the most recognized in the world.

Contact
press@google.com

Everpix-Intelligence/Anonymized VC Feedback.md at master · everpix/Everpix-Intelligence · GitHub

Comments:"Everpix-Intelligence/Anonymized VC Feedback.md at master · everpix/Everpix-Intelligence · GitHub"

URL:https://github.com/everpix/Everpix-Intelligence/blob/master/Anonymized%20VC%20Feedback.md


Overview

This document contains the unedited feedback received from VCs who passed on investing during Everpix's Series A tours. Most of the feedback was provided in writing by email, in which case it has been copy-pasted here; some was given over the phone, in which case conversation notes have been reproduced instead. Notes and comments are in italic.

This list only includes VCs we met with in the scope of a Series A participation and to whom we officially pitched (some introductions to VCs resulted in a phone call with no in-person follow-up meeting, but that was rare). We ended up meeting with most tier 1 and many tier 2 VCs in Silicon Valley. We almost always got partner-level intros and in the vast majority of cases were able to get an in-person meeting within 2 weeks. It's worth noting that the in-person meeting with the partner very often went well over the 45 minutes to 1 hour of allocated time, as there was a lot of interest in the Everpix team & product.

First Tour (February / March 2013)

The main feedback during this tour was that we couldn't build a $100M / year revenue or $1B worth business from Everpix. One reason for it was likely the fact that we weren't explaining properly what the potential of Everpix was and why it solved an emerging problem that will affect most consumers. Instead we were spending most of the time on the technology and user experience.

[2 meetings with multiple partners] Thanks for taking the time to meet with more of the team this week. We had a pretty thorough review and discussion over the course of the last several days and came to the conclusion that we are going to pass on the investment opp. You guys seem to be a spectacularly talented team and some informal reference checking confirmed that, but everyone here is hung up on the concern over being able to build a >$100M revenue subscription business in photos in this age of free photo tools. Its totally unclear, for sure, but I haven't been able to get enough momentum behind it with the team.

[1 meeting with 1 partner] It was great to meet you and get an overview of everpix. I love what you're working on. I'm going to discuss today with my partners and follow up… [Never heard back]

[1 meeting with 1 partner] I've had a chance to chat internally, and I think we're unfortunately going to pass on this round. While I think the technology and product looks very promising, we aren't ready to place a bet in the space. Would definitely love to stay in touch as you build out the company and chat again when you're next raising funding. Best of luck closing out this round and scaling up the company! [2nd email after asking for more details] A large piece was really just the historic difficulty in monetizing online photos. Obviously you have built-in monetization to start with, which is hugely helpful, but there was skepticism about that allowing a truly large revenue stream. We would have preferred generally to see larger user traction for a Series A for a consumer service, but that's not really a deal breaker, just something that makes it tougher to decide to do.

[1 meeting with 1 partner] I enjoyed meeting yourself, Kevin, and Wayne. You three form a very impressive team for sure. I did talk with my partnership about you guys and Everpix. The reaction was positive for you as a team but weak in terms of whether a $B business could be built. My partnerships reaction thought the product idea was a good one but didn't see how to build a standalone company that could be a $B company with $100M of revenue. Because of that lack of clarity on our side, I have to bow out.

[1 meeting with 1 partner] Got a few friends to try as well and just collected everyone's feedback. It is pretty cool: fast; stylish and minimal photo display; Photo stored uncompressed; easy import; good price. "Moments" was a miss for most people. It was perhaps more frustrating. For me, I just want to organize myself for a bit and then let you take over. I couldnt do that as the only dimension now is time. I assume more will come: like tags and labels or folders. For others, it is frustrating if you already know which photos you like best. One friend uploaded all of his 2012 and 2013 photos, and unfortunately the sys did a poor job of picking his favorites. Similar experience for others. Some other comments: captions/titles embedded in EXIF data do not get displayed; some of the automated feature analysis is wonky: for example, snow-covered cliffs at Sugarbowl are labeled as "54% likely to be a city". no developer API? All agree a lot of these seem to be immaturity. Great job getting it going!! As for [our fund] and me, I like how the trend is heavily in your favor. Massive number of pics and storage are issues for users. I am surprised by how slow the incumbents have innovated. I wonder if it is appropriate to scale back the product promise a bit and offer something that is more mundane as faster tagging and filing vs super smart AI. The "moment" value prop will take time to perfect and using that as your lead might set the expectation too high. I wonder whether that actually impede vs accelerate conversion. I also want to explore more about how everpix re-think on conventional wisdom and challenge commonly held assumptions on pic space. So, that's my .02. Hope it helps. Will check back with you in six months. Thanks so much for taking the time to connect.

[2 meetings with 2 different partners] They said they're passing because they want to see first proof we can build a product that's attracting to more than the DSLR crowd, and we have a way to convince mobile users to use our system to store & browse photos (and therefore reach the billion dollar company target).

[1 meeting with partner] He said that we have very impressive tech but he couldn't invest right now as we don't have our marketing story defined.

Second Tour (June / July 2013)

For this 2nd pass, we went back to the drawing board and did extensive work on product positioning (the "Photo Mess"), validated by market research and consumer surveys. When presented with it at the beginning of the pitch, almost all VCs immediately said "I get it, I have this problem myself, no need to spend time on this". The problem then became the fact that we were in a highly competitive and noisy space.

[1 meeting with investor] We all agreed that the Everpix product is great and the early traction looks quite impressive. That said though as we continue to think about the current state of the space I think we are a bit too concerned about the competitive nature and thus uncomfortable pushing forward at this early stage.

[2 meetings with partner] I've been thinking a bunch about Everpix over the last 10 days or so, and using it a bunch. And I really, really like what you've done with the product and especially the upcoming release that you're heading towards. But I have concerns about consumer adoption that are bugging me. I think you've got the right product at a good time in the market, but I am not sure how you'll get it on hundreds of thousands to millions of desktops & devices -- that's the big prize. So I think it makes the most sense for us to stay in touch; and I'd love to help you in any way that I can, but am not quite in a place where I'd feel great about leading an investment round right now.

[Indirect email forwarded to us] I like Pierre & his team; recognize the problem that they're solving (I have it myself), and think they've got the best solution that exists right now. But I worry about the whole category in terms of distribution being really, really tough, and that's what's got me scared away honestly. (In addition to stiff competition from Apple & Google, although I think they'll always put up flawed, platform-centric products here.)

[1 meeting with partner] He said we were early and they are really focused on Series B and later, but we should keep in touch and circle back when we have more revenues and users.

[1 meeting with senior investment manager] I have spoken with people internally. We think that you and your team have built a very impressive product that addresses the need of millions of photo lovers. The proprietary sync and image analysis technology is exciting. It is also encouraging to see that Everpix is already monetising from its user base, and that the scalability of business model is inherently high. As previously flagged, we feel that Everpix is probably a bit early for [our fund] (i.e. timing is not right, not the product). We are eager to see more proven points as the business scales (i.e. v2.0 product, further optimisation in processing process, sustained user and volume growth etc.). Please do keep us in the loop, and reach out again when Everpix has hit more milestones!

[1 meeting with principal] We did talk about your company. Unfortunately there wasn't enough critical mass among the partners to consider an investment. So regretfully we're unable to move further at this moment.

[1 meeting with partner] He said they discussed Everpix for a long time at their partner meeting today. He agrees this is a real problem and the world requires a solution to it today. He believes we are the most interesting company in the space, loved the tech, likes the UX quite a bit (although wished it would be even simpler and easier to learn). The reason for passing however is 90% [competing cloud storage company]: this fund is on their board and this company is going aggressively after photos. This means if their fund were to help a deal with a big CE company or find a great designer, would it go to this other company or us?

[3 meetings with partner - over the course of 18 months] They have been looking at investing in photo space for 12+ months. He says it's a "bad category" with lots of noise and that being the best is not enough (he believes we are at the top if not the best): we need either have a paid UA model that works ($5 per signup wouldn't work for him) or have "frequency" i.e. have a method for our users to use the app daily so it's in their mind and talk about it etc…

[1 meeting with partner] He said their fund is running out and they have 2 or 3 investments left they can do and for photo space bar is quite high so he likes the product and sees potential as a platform but will pass.

[2 meetings with 2 different partners] We've had a chance to consider a potential investment in everpix, and as much as we like you guys, we aren't ready to move forward right now. We think you are attacking the right problem. The issue is that several others are also approaching it too, perhaps not in as elegant a way. We'd like to see things evolve a bit more before potentially re-engaging.

[1 meeting with partner] I'm very sorry for the delay getting back to you. I had some time to catch up with [other partner] a couple weeks back and we did some digging into our various photo initiates and generally came to the conclusion that while we agree you're addressing a big problem (managing exploding and fragmented photo libraries) we really wanted to see the business evolve and see some revenue momentum. At this time, we think you're a bit early for us. Happy to provide more color and help where we can.

Second and a Half Tour (August 2013)

We did another last-minute small tour due to the fact that founders we knew around us couldn't believe we weren't able to raise and kept offering intros. By this time we happened to have our best momentum ever: 50,000 signups, strong sales at $40K / month, accelerating growth, very high retention and conversion metrics, and significant organic press coverage (Gizmodo, The Verge - Best average consumer solution for photos in the cloud, Houston Chronicle…). The reasons given by VCs to pass became more esoteric, e.g. already investing in another competitor or it not being the right time.

[1 meeting with “junior” partner] I was able to speak with the other Partners on the deal team and we would like to respectfully pass on participating this round. We really liked the founding team's respective backgrounds across Apple, Cooliris, and Frog. It felt like a strong fit to what you were building and we were impressed by your team's accomplishments to-date. The early metrics on the engagement side were very promising as well with 40% monthly active free users and 61% open-rate on the Flashbacks feature. However, we were nervous around the size of the revenue potential behind the business as well as the path to getting a massive user base. As mentioned, to become a truly disruptive and sizable standalone business, you'd need to get a large number of users to become active members and upload photos to the service. There is no doubt a huge issue around photo mess (much more apparent with mobile now) but we weren't convinced yet that companies like DropBox, FB, Apple, Google, etc wouldn't eventually try to tackle the problem either via acquisition or building the feature internally. The licensing model of the technology felt like there was a capped upside and in order to become a big business, you'd indirectly have to compete with consumer facing photo-storage sites with massive amount of existing users. That being said we are sometimes wrong about these things and would like nothing more than for you to build a successful business. We're a young firm, but we've already invested in startups where we'd passed on earlier rounds. Please do stay in touch and if you don't end up taking the M&A offer, keep us updated with your progress. The door is open and we hope to see you again for subsequent rounds or for any future ventures.

[1 meeting with senior partner] Here's the lay of the land – and I want to be very straightforward so that whatever happens you feel we've been above board and clear. We'd like to continue figuring out everpix because what you have accomplished is most impressive but we also need to ensure that we aren't on an eventual collision course with [other company] where, as you know, we've been a longtime business partner. We'd hope to resolve that within the next week or so. [Following phone call] We need to pass on Everpix: other company doesn't do today what Everpix does, but photos are so strategically important to it and it would be awkward for our fund and other company founder for us to invest in Everpix.

[1 meeting with partner] Thanks for the note and really a pleasure meeting you guys. I truly love the service and think you guys have done a phenomenal job. In short, I am a fan, but, unfortunately, I can't invest at this time. The big reason is we have a competitive deal in the space called [redacted] that is not as far along as you but will definitely create a conflict. I don't want to put either of you in that position. The other reason is the $5M is out of our typical range (we like to be the first 250K to 2.5M in any deal). That said, please let me know if I can help in other ways and I do hope we can stay in touch.

[2 phone meetings with 2 partners] We spent some time in discussion internally today. While [other partner] and I are both big fans of the product and the future vision for Everpix, we're going to have to pass on the opportunity. We don't have partner bandwidth at the moment to take a deeper look, and particularly not in the time frame you'd require for us to make a decision.

[1 meeting with 2 partners] We internally got our wires crossed and thought we had closed the loop with you. We really liked the opportunity to learn about your vision and personally resonate with the problem as well. However the doubts linger about being able to scale a photo storage company internally. We would love to stay in touch for future rounds as you get to some scale. You guys are awesome and we wish you all the best.

[TAS] Super Mario World "Executes Arbitrary Code" in 02:25.19 by Masterjun - YouTube

Comments:"[TAS] Super Mario World "Executes Arbitrary Code" in 02:25.19 by Masterjun - YouTube"

URL:https://www.youtube.com/watch?v=OPcV9uIY5i4


[TAS] Super Mario World "Executes Arbitrary Code" in 02:25.19 by Masterjun

Published on Jan 5, 2014

Byte Magazine Volume 06 Number 08 - Smalltalk : Free Download & Streaming : Internet Archive

Comments:"Byte Magazine Volume 06 Number 08 - Smalltalk : Free Download & Streaming : Internet Archive"

URL:https://archive.org/details/byte-magazine-1981-08


Byte Magazine Volume 06 Number 08 - Smalltalk (August 1981)

Year: 1981
Language: English
Collection: byte-magazine; computermagazines

Description

Features

Introducing the Smalltalk-80 System

The Smalltalk-80 System

Build a Z8-Based Control Computer with BASIC, Part 2

Object-Oriented Software Systems

The Smalltalk Environment

User-Oriented Descriptions of Smalltalk Systems

The Smalltalk Graphics Kernel

The Japanese Computer Invasion

Building Data Structures in the Smalltalk-80 System

Design Principles Behind Smalltalk

The Smalltalk-80 Virtual Machine

Building Control Structures in the Smalltalk-80 System

Is the Smalltalk-80 System for Children?

ToolBox: A Smalltalk Illustration System

Virtual Memory for an Object-Oriented Language

Reviews

Microsoft Editor/Assembler Plus

BOSS: A Debugging Utility for the TRS-80 Model I

Nucleus

Editorial: Smalltalk: A Language for the 1980s

Letters

BYTE's Bits

BYTELINES

BYTE's Bugs

Ask BYTE

Books Received

Software Received

Clubs and Newsletters

Event Queue

System Notes: Indirect I/O Addressing on the 8080

AIM-65 16-bit Hexadecimal to Decimal Conversion

Programming Quickies: A Disk Catalog for the Eighties

Alpha-Beta Tree Search Converted to Assembler

Fast Line-Drawing Techniques

Word Ujbnmarle

Binary-to-BCD Converter Program for the 8080

What's New?

Unclassified Ads

Reader Service

BOMB, BOMB Results



Selected metadata

Identifier: byte-magazine-1981-08
Mediatype: texts
Identifier-access: http://archive.org/details/byte-magazine-1981-08
Identifier-ark: ark:/13960/t2n59sf2p
Ppi: 300
Ocr: ABBYY FineReader 8.0

The Lost World of the London Coffeehouse | The Public Domain Review

Comments:" The Lost World of the London Coffeehouse | The Public Domain Review"

URL:http://publicdomainreview.org/2013/08/07/the-lost-world-of-the-london-coffeehouse


In contrast to today’s rather mundane spawn of coffeehouse chains, the London of the 17th and 18th century was home to an eclectic and thriving coffee drinking scene. Dr Matthew Green explores the halcyon days of the London coffeehouse, a haven for caffeine-fueled debate and innovation which helped to shape the modern world.

A disagreement about the Cartesian Dream Argument (or similar) turns sour. Note the man throwing coffee in his opponent’s face. From the frontispiece of Ned Ward’s satirical poem Vulgus Britannicus (1710) and probably more of a flight of fancy than a faithful depiction of coffeehouse practices – Source.

From the tar-caked wharves of Wapping to the gorgeous lamp-lit squares of St James’s and Mayfair, visitors to eighteenth-century London were amazed by an efflorescence of coffeehouses. “In London, there are a great number of coffeehouses”, wrote the Swiss noble César de Saussure in 1726, “…workmen habitually begin the day by going to coffee-rooms to read the latest news.” Nothing was funnier, he smirked, than seeing shoeblacks and other riffraff poring over papers and discussing the latest political affairs. Scottish spy turned travel writer John Macky was similarly captivated in 1714. Sauntering into some of London’s most prestigious establishments in St James’s, Covent Garden and Cornhill, he marvelled at how strangers, whatever their social background or political allegiances, were always welcomed into lively convivial company. They were right to be amazed: early eighteenth-century London boasted more coffeehouses than any other city in the western world, save Constantinople.

London’s coffee craze began in 1652 when Pasqua Rosée, the Greek servant of a coffee-loving British Levant merchant, opened London’s first coffeehouse (or rather, coffee shack) against the stone wall of St Michael’s churchyard in a labyrinth of alleys off Cornhill. Coffee was a smash hit; within a couple of years, Pasqua was selling over 600 dishes of coffee a day to the horror of the local tavern keepers. For anyone who’s ever tried seventeenth-century style coffee, this can come as something of a shock — unless, that is, you like your brew “black as hell, strong as death, sweet as love”, as an old Turkish proverb recommends, and shot through with grit.

It’s not just that our tastebuds have grown more discerning accustomed as we are to silky-smooth Flat Whites; contemporaries found it disgusting too. One early sampler likened it to a “syrup of soot and the essence of old shoes” while others were reminded of oil, ink, soot, mud, damp and shit. Nonetheless, people loved how the “bitter Mohammedan gruel”, as The London Spy described it in 1701, kindled conversations, fired debates, sparked ideas and, as Pasqua himself pointed out in his handbill The Virtue of the Coffee Drink (1652), made one “fit for business” — his stall was a stone’s throw from that great entrepôt of international commerce, the Royal Exchange.

A handbill published in 1652 to promote the launch of Pasqua Rosée’s coffeehouse telling people how to drink coffee and hailing it as the miracle cure for just about every ailment under the sun including dropsy, scurvy, gout, scrofula and even “mis-carryings in childbearing women” – Source.


Remember — until the mid-seventeenth century, most people in England were either slightly — or very — drunk all of the time. Drink London’s fetid river water at your own peril; most people wisely favoured watered-down ale or beer (“small beer”). The arrival of coffee, then, triggered a dawn of sobriety that laid the foundations for truly spectacular economic growth in the decades that followed as people thought clearly for the first time. The stock exchange, insurance industry, and auctioneering: all burst into life in 17th-century coffeehouses — in Jonathan’s, Lloyd’s, and Garraway’s — spawning the credit, security, and markets that facilitated the dramatic expansion of Britain’s network of global trade in Asia, Africa and America.

The meteoric success of Pasqua’s shack triggered a coffeehouse boom. By 1656, there was a second coffeehouse at the sign of the rainbow on Fleet Street; by 1663, 82 had sprung up within the crumbling Roman walls, and a cluster further west like Will’s in Covent Garden, a fashionable literary resort where Samuel Pepys found his old college chum John Dryden presiding over “very pleasant and witty discourse” in 1664 and wished he could stay longer — but he had to pick up his wife, who most certainly would not have been welcome.

The earliest known image of a coffeehouse dated to 1674, showing the kind of coffeehouse familiar to Samuel Pepys – Source.

No respectable women would have been seen dead in a coffeehouse. It wasn’t long before wives became frustrated at the amount of time their husbands were idling away “deposing princes, settling the bounds of kingdoms, and balancing the power of Europe with great justice and impartiality”, as Richard Steele put it in the Tatler, all from the comfort of a fireside bench. In 1674, years of simmering resentment erupted into the volcano of fury that was the Women’s Petition Against Coffee. The fair sex lambasted the “Excessive use of that Newfangled, Abominable, Heathenish Liquor called COFFEE” which, as they saw it, had reduced their virile industrious men into effeminate, babbling, French layabouts. Retaliation was swift and acerbic in the form of the vulgar Men’s Answer to the Women’s Petition Against Coffee, which claimed it was “base adulterate wine” and “muddy ale” that made men impotent. Coffee, in fact, was the Viagra of the day, making “the erection more vigorous, the ejaculation more full, add[ing] a spiritual ascendency to the sperm”.

There were no more Women’s Petitions after that but the coffeehouses found themselves in more dangerous waters when Charles II, a longtime critic, tried to torpedo them by royal proclamation in 1675. Traditionally, informed political debate had been the preserve of the social elite. But in the coffeehouse it was anyone’s business — that is, anyone who could afford the measly one-penny entrance fee. For the poor and those living on subsistence wages, they were out of reach. But they were affordable for anyone with surplus wealth — the 35 to 40 per cent of London’s 287,500-strong male population who qualified as ‘middle class’ in 1700 — and sometimes reckless or extravagant spenders further down the social pyramid. Charles suspected the coffeehouses were hotbeds of sedition and scandal but in the face of widespread opposition — articulated most forcefully in the coffeehouses themselves — the King was forced to cave in and recognise that as much as he disliked them, coffeehouses were now an intrinsic feature of urban life.

A map of Exchange Alley after it was razed to the ground in 1748, showing the sites of some of London’s most famous coffeehouses including Garraway’s and Jonathan’s – Source.

By the dawn of the eighteenth century, contemporaries were counting between 1,000 and 8,000 coffeehouses in the capital even if a street survey conducted in 1734 (which excluded unlicensed premises) counted only 551. Even so, Europe had never seen anything like it. Protestant Amsterdam, a rival hub of international trade, could only muster 32 coffeehouses by 1700 and the cluster of coffeehouses in St Mark’s Square in Venice were forbidden from seating more than five customers (presumably to stifle the coalescence of public opinion) whereas North’s, in Cheapside, could happily seat 90 people.

The character of a coffeehouse was influenced by its location within the hotchpotch of villages, cities, squares, and suburbs that comprised eighteenth-century London, which in turn determined the type of person you’d meet inside. “Some coffee-houses are a resort for learned scholars and for wits,” wrote César de Saussure, “others are the resort of dandies or of politicians, or again of professional newsmongers; and many others are temples of Venus.” Flick through any of the old coffeehouse histories in the public domain and you’ll soon get a flavour of the kaleidoscopic diversity of London’s early coffeehouses.

The walls of Don Saltero’s Chelsea coffeehouse were festooned with taxidermy monsters including crocodiles, turtles and rattlesnakes, which local gentlemen scientists like Sir Isaac Newton and Sir Hans Sloane liked to discuss over coffee; at White’s on St James’s Street, famously depicted by Hogarth, rakes would gamble away entire estates and place bets on how long customers had to live, a practice that would eventually grow into the life insurance industry; at Lunt’s in Clerkenwell Green, patrons could sip coffee, have a haircut and enjoy a fiery lecture on the abolition of slavery given by its barber-proprietor John Gale Jones; at John Hogarth’s Latin Coffeehouse, also in Clerkenwell, patrons were encouraged to converse in the Latin tongue at all times (it didn’t last long); at Moll King’s brothel-coffeehouse, depicted by Hogarth, libertines could sober up and peruse a directory of harlots, before being led to the requisite brothel nearby. There was even a floating coffeehouse, the Folly of the Thames, moored outside Somerset House where fops and rakes danced the night away on her rain-spattered deck.

Hogarth’s depiction of Moll and Tom King’s coffee-shack from The Four Times of Day (1736). Though it is early morning, the night has only just begun for the drunken rakes and prostitutes spilling out of the coffeehouse – Source.

Despite this colourful diversity, early coffeehouses all followed the same blueprint, maximising the interaction between customers and forging a creative, convivial environment. They emerged as smoky candlelit forums for commercial transactions, spirited debate, and the exchange of information, ideas, and lies. This small body-colour drawing shows an anonymous (and so, it’s safe to assume, fairly typical) coffeehouse from around 1700.

A small body-colour drawing of the interior of a London coffeehouse from c. 1705. Everything about this oozes warmth and welcome from the bubbling coffee cauldron right down to the flickering candles and kind eyes of the coffee drinkers – Source.

Looking at the cartoonish image, decorated in the same innocent style as contemporary decorated fans, it’s hard to reconcile it with Voltaire’s rebuke of a City coffeehouse in the 1720s as “dirty, ill-furnished, ill-served, and ill-lighted” nor particularly London Spy author Ned Ward’s (admittedly scurrilous) evocation of a soot-coated den of iniquity with jagged floorboards and papered-over windows populated by “a parcel of muddling muck-worms…some going, some coming, some scribbling, some talking, some drinking, others jangling, and the whole room stinking of tobacco.” But, the establishments in the West End and Exchange Alley excepted, coffeehouses were generally spartan, wooden and no-nonsense.

As the image shows, customers sat around long communal tables strewn with every type of media imaginable listening in to each other’s conversations, interjecting whenever they pleased, and reflecting upon the newspapers. Talking to strangers, an alien concept in most coffee shops today, was actively encouraged. Dudley Ryder, a young law student from Hackney and shameless social climber, kept a diary in 1715-16, in which he routinely recalled marching into a coffeehouse, sitting down next to a stranger, and discussing the latest news. Private boxes and booths did begin to appear from the late 1740s but before that it was nigh-on impossible to hold a genuinely private conversation in a coffeehouse (and still pretty tricky afterwards, as attested to by the later coffeehouse print below). To the left, we see a little Cupid-like boy in a flowing periwig pouring a dish of coffee à la mode— that is, from a great height — which would fuel some coffeehouse discussion or other.

Much of the conversation centred upon news:

There’s nothing done in all the world From Monarch to the Mouse, But every day or night ‘tis hurled Into the Coffee-House

chirped a pamphlet from 1672. As each new customer went in, they’d be assailed by cries of “What news have you?” or more formally, “Your servant, sir, what news from Tripoli?” or, if you were in the Latin Coffeehouse, “Quid Novi!” That coffeehouses functioned as post-boxes for many customers reinforced this news-gathering function. Unexpectedly wide-ranging discussions could be twined from a single conversational thread as when, at John’s coffeehouse in 1715, news about the execution of a rebel Jacobite Lord (as recorded by Dudley Ryder) transmogrified into a discourse on “the ease of death by beheading” with one participant telling of an experiment he’d conducted slicing a viper in two and watching in amazement as both ends slithered off in different directions. Was this, as some of the company conjectured, proof of the existence of two consciousnesses?

A Mad Dog in a Coffeehouse by the English caricaturist Thomas Rowlandson, c. 1800. Note the reference to Cerberus on the notice on the wall and the absence of long communal tables by the later 18th century – Source.

If the vast corpus of 17th-century pamphlet literature is anything to go by then early coffeehouses were socially inclusive spaces where lords sat cheek-by-jowl with fishmongers and where butchers trumped baronets in philosophical debates. “Pre-eminence of place none here should mind,” proclaimed the Rules and Orders of the Coffee-House (1674), “but take the next fit seat he can find” — which would seem to chime with John Macky’s description of noblemen and “private gentlemen” mingling together in the Covent Garden coffeehouses “and talking with the same Freedom, as if they had left their Quality and Degrees of Distance at Home.”

Perhaps. But propagandist apologias and wondrous claims of travel-writers aside, more compelling evidence suggests that far from co-existing in perfect harmony on the fireside bench, people in coffeehouses sat in relentless judgement of one another. At the Bedford Coffeehouse in Covent Garden hung a “theatrical thermometer” with temperatures ranging from “excellent” to “execrable”, registering the company’s verdicts on the latest plays and performances, tormenting playwrights and actors on a weekly basis; at Waghorn’s and the Parliament Coffee House in Westminster, politicians were shamed for making tedious or ineffectual speeches and at the Grecian, scientists were judged for the experiments they performed (including, on one occasion, dissecting a dolphin). If some of these verdicts were grounded in rational judgement, others were forged in naked class prejudice. Visiting Young Slaughter’s coffeehouse in 1767, rake William Hickey was horrified by the presence of “half a dozen respectable old men”, pronouncing them “a set of stupid, formal, ancient prigs, horrid periwig bores, every way unfit to herd with such bloods as us”.

But the coffeehouse’s formula of maximised sociability, critical judgement, and relative sobriety proved a catalyst for creativity and innovation. Coffeehouses encouraged political debate, which paved the way for the expansion of the electorate in the 19th century. The City coffeehouses spawned capitalist innovations that shaped the modern world. Other coffeehouses sparked journalistic innovation. Nowhere was this more apparent than at Button’s coffeehouse, a stone’s throw from Covent Garden piazza on Russell Street.

The figure in the cloak is Count Viviani; of the figures facing the reader the draughts player is Dr Arbuthnot, and the figure standing is assumed to be Pope – Source.

It was opened in 1712 by the essayist and playwright Joseph Addison, partly as a refuge from his quarrelsome marriage, but it soon grew into a forum for literary debate where the stars of literary London — Addison, Steele, Pope, Swift, Arbuthnot and others — would assemble each evening, casting their superb literary judgements on new plays, poems, novels, and manuscripts, making and breaking literary reputations in the process. Planted on the western side of the coffeehouse was a marble lion’s head with a gaping mouth, razor-sharp jaws, and “whiskers admired by all that see them”. Probably the world’s most surreal medium of literary communication, he was a playful British slant on a chilling Venetian tradition.

As Addison explained in the Guardian, several marble lions “with mouths gaping in a most enormous manner” defended the doge’s palace in Venice. But whereas those lions swallowed accusations of treason that “cut off heads, hang, draw, and quarter, or end in the ruin of the person who becomes his prey”, Mr Addison’s was as harmless as a pussycat and a servant of the public. The public was invited to feed him with letters, limericks, and stories. The very best of the lion’s digest was published in a special weekly edition of the original Guardian, then a single-sheet journal costing one-and-a-half pence, edited inside the coffeehouse by Addison. When the lion “roared so loud as to be heard all over the British nation” via the Guardian, writing by unknown authors was beamed far beyond the confines of Button’s making the public — rather than a narrow clique of wits — the ultimate arbiters of literary merit. Public responses were sometimes posted back to the lion in a loop of feedback and amplification, mimicking the function of blogs and newspaper websites today (but much more civil).

“An excellent piece of workmanship, designed by a great hand in imitation of the antique Egyptian lion, the face of it being compounded out of a lion and a wizard.” — Joseph Addison, the Guardian, 9 July 1713 – Source.

If you’re thinking of visiting Button’s today, brace yourself: it’s a Starbucks, one of over 300 clones across the city. The lion has been replaced by the “Starbucks community notice board” and there is no trace of the literary, convivial atmosphere of Button’s. Addison would be appalled.


Dr Matthew Green graduated from Oxford University in 2011 with a PhD in the impact of the mass media in 18th-century London. He works as a writer, broadcaster, freelance journalist, and lecturer. He is the co-founder of Unreal City Audio, which produces immersive, critically-acclaimed tours of London as live events and audio downloads. His limited edition hand-sewn pamphlet, The Lost World of the London Coffeehouse, published by Idler Books, is on sale now: http://unrealcityaudio.co.uk/shop/


Unreal City Audio Tours– Join actors, musicians, and Dr Matthew Green for an immersive whirlwind tour of London’s original coffeehouses every month. Featuring free shots of 17th-century style coffee! See website for details. Or download an epic two-hour coffeehouse audio tour, vividly reconstructing the lost acoustic world of 17th and 18th-century London and featuring performances by 13 actors, Dr Matthew Green’s narration, and broadside ballads all woven into a cinematic soundscape -
http://unrealcityaudio.co.uk/

  • A Foreign View of England in the Reigns of George I and II (1902 edition) by César de Saussure.
  • Selections from the Tatler, Spectator and Guardian (1885) by Joseph Addison and Richard Steele.
  • Inns and Taverns of Old London (1909) by Henry Shelley.
  • Club Life of London vol 2 (1866) by John Timbs.
  • The Early History of Coffee Houses in England (1893) by Edward Robinson.
  • All About Coffee (1922) by William Ukers.
  • A Journey through England: In familiar letters from a gentleman here, to his friend abroad (1722) by John Macky.
  • The London Spy (1701) by Ned Ward.


URX Blog — The Modern Marketer’s Guide to App Analytics and Measurement

Comments:"URX Blog — The Modern Marketer’s Guide to App Analytics and Measurement"

URL:http://blog.urx.com/post/72905495808/the-modern-marketers-guide-to-app-analytics-and


One of the main obstacles preventing mobile advertisers from experimenting with a new solution is being asked for an SDK integration. Data portability is a big issue for modern marketers, so properly instrumenting your app and internal analytics will let you work more easily with many service providers in the ecosystem. The goal of this document is to provide a high-level view of the tools and tricks that modern marketers use to fully understand their app data.

Ingredients:
  • Segment.IO
  • MobileAppTracking

Segment.IO is a multiplexer for in-app events. It allows you to send event-level data to multiple analytics tools without having to write new code. This is helpful for experimenting with different internal analytics dashboards/analyses.

MobileAppTracking is a conversion tracker that acts as the source of truth for attributing events to ad spend. This makes it very easy to try multiple ad platforms without worrying about SDK integrations.

Set up Mixpanel, Google Analytics, and other analytics tools through Segment.io to see which best suits your needs and visualization preferences.

Events Tracked:
  • Opens
  • Item Viewed
  • Add to Cart
  • Purchase Conversion
  • Install
  • Update
  • Registration

Tracking the right events in MobileAppTracking ensures that data providers (and internal analysts) have a granular enough view of the data. Each event corresponds to a different experience/action in an app, and these might change based on the vertical of the app.

Data Tracked:
  • Product ID
  • Price
  • Customer ID

In addition to the traditional metadata collected about clicks, it is also extremely important to track Product ID, Price, and Customer ID. These enrich the event data and enable clearer insights, as in the sketch below.
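
As a concrete illustration, here is a minimal sketch of what instrumenting one of the events above might look like with Segment's analytics-python library (the write key, user ID, and property values are placeholders of my own, not from this post):

import analytics  # Segment's analytics-python package

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

# Track a purchase conversion, attaching the recommended metadata.
# The first argument carries the Customer ID; the dict carries Product ID and Price.
analytics.track("customer_123", "Purchase Conversion", {
    "productId": "sku_42",  # placeholder Product ID
    "price": 19.99,         # placeholder Price
})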

Channels Used:
  • Facebook for User Acquisition (CPI)
  • Push Notifications/Email
  • App Store Optimizations
  • Beginning Retargeting Campaigns (CPC)

The first channel people generally start advertising on is Facebook App Installs. App developers almost unanimously agree that Facebook brings the highest-value users. In parallel, marketers will usually deliver push notification and email campaigns with varying degrees of targeting.

App Store optimization will help with the discoverability of your app in the app store, but it is still mostly a dark art. Once you have over ~250,000 app installs, retargeting becomes a viable way of increasing the value of your pre-existing app audience.

Acquisition Strategy:
  • Target various audiences on Facebook.
  • Measure LTV of users to evaluate success of campaigns.
  • Compare internal LTV to CPI.
  • Find more scale.
Retargeting Strategy:
  • Spend as much as possible at target (CPC, ROAS)
  • Pick off low hanging fruit of conversions
  • Re-engage lapsed users
Organic Re-Engagement:
  • Be very targeted with push targeting to drive high CTR
  • Optimize emails for mobile
  • Deeplink to specific content

Questions? Contact john@urx.com

The World's Best Bounty Hunter Is 4-Foot-11. Here's How She Hunts | Threat Level | Wired.com

Comments:"The World's Best Bounty Hunter Is 4-Foot-11. Here's How She Hunts | Threat Level | Wired.com"

URL:http://www.wired.com/threatlevel/2013/12/skip-tracing-ryan-mullen/all/1


The boat, built in 1984, was in good shape, but Kenny was moving on to a new phase of his life, so he offered his boat, then called the Morning Star, at a price of $115,000. A sale had been negotiated, but the deal fell apart when the buyer’s broker saw the price inexplicably changed to $182,000 on some paperwork and decided the deal was fishy. The same buyer came back with a new broker, and the purchase was successfully concluded on July 3, 2012. After a series of online searches and phone conversations, Gomez determined that the second broker on the sale had been none other than Eddie Fortino, the same man who had been the broker for the Harper property in Natchez.

On the afternoon of June 19, Gomez phoned Fortino at his office in Metairie and informed him that she knew he had aided Ryan Mullen in the scam involving Morning Star. Fortino had brokered the purchase of the yacht for the $115,000 asking price, but—in a trick reminiscent of the Alice C real estate deal—the sales documents showed that the Morning Star was bought for $182,000. That was the price United Leasing had paid to buy the boat from Mullen under the terms of a “leaseback” agreement. (It was United Leasing that had eventually enlisted ACS to track down the yacht.) Fortino insisted that he didn’t set up the financing for the boat and had only received a $10,000 broker’s fee; Mullen had pocketed the remaining $57,000 himself.

“Pretty slick,” Gomez says. “Mullen not only got a Hatteras yacht he could hide out on—without his name officially attached to it—he also picked up a nice chunk of spending money.”

After Gomez described what she had surmised about the yacht deal, “Fortino asked me, ‘How long did it take you to figure it out?’” Gomez recalls. “That’s when I knew I had him.”

Gomez pressured Fortino, going so far as to bluff him with the idea that he might get paid for helping locate Mullen. Finally, Fortino agreed, promising also to send Gomez a copy of Mullen’s passport. When it arrived, Gomez was more intrigued than ever; the name on the passport was Patrick Peter Mullen, and the date of birth was December 4, 1981. “For a moment I asked myself if that was his real name,” she says, “but then I realized, no, it’s just another of his false identities.” A call to one of her government friends produced the information that a red flag had been placed on that passport number in the US Department of Homeland Security database. Gomez also discovered that the US Secret Service was looking for Mullen.


Gomez brought along a little extra muscle when she left her Texas home in her Ford Edge SUV on June 26 and headed to New Orleans: Joe Mendez was a licensed bodyguard who carried a .40 Glock on his belt. Mullen was somewhere in the metropolitan area, Gomez believed, but it was difficult to say exactly where—especially when Fortino kept changing his story, putting the yacht somewhere near Baton Rouge, then near Lafayette. Then he stopped answering the skip tracer’s phone calls and text messages.

Gomez located Fortino’s own home on a canal in Springfield, just northwest of Lake Pontchartrain, and on the afternoon of June 30 she drove up there, arriving as he was preparing his place for a big July 4th party. She confronted him in the front yard, threatening to go to the authorities with all the information she had learned about him unless he told the truth about Mullen’s whereabouts. Fortino buckled and promised to lead Gomez to the yacht the next day.

At 8 am the following morning, July 1, Gomez and Mendez were waiting in her SUV when Fortino’s pickup pulled into a Hooters parking lot. He led the SUV on an hour-and-a-half drive across the Mississippi on Interstate 10 into Iberville Parish, where he pulled over to the side of the road, climbed out of his truck, and told Gomez she would have to continue without him. He said that both the yacht and Mullen could be found in the private slip of the Alice C Plantation in St. Mary Parish.

Mullen was at that moment preparing Big Ol’ Girl for imminent departure. The series of real estate deals he had organized in Cajun country had collapsed. The closing date on the original contract for the Alice C had come and gone and still no money had shown up. Gary Blum agreed to an extension through June 28, but the real estate brokers he’d hired, who were handling both the Alice C deal and Mullen’s apartment complex purchase in New Iberia, had discovered that the $100,000 deposit for the apartment complex didn’t exist—no lawyer was holding it in escrow as they had been told. Learning this, the broker handling the Alice C deal decided to again verify that Mullen had nearly $1.4 million in two separate accounts in a local bank. When he asked for the bank officer who’d phoned several weeks earlier, the broker was told that no one by that name worked there. And there were no customers of the bank named Ryan Mullen either.

Mullen was securing the decks of his yacht at 11:15 am when Gomez parked her SUV in the driveway of the home next to the Alice C. With Mendez right behind her, Gomez stepped through a screen of sycamore trees into the enormous backyard of the plantation and found herself within 20 feet of a large man with a broad face and a mildly curious expression, standing on the dock beside his big boat. She asked his name. “Ron Muller,” the man replied. Gomez smiled and pulled out her cell phone, while Mendez stood by, one hand on his pistol.

Ten minutes later, Mullen was in handcuffs, surrounded by four police officers, arrested on a warrant from St. Bernard Parish for issuing worthless checks and a warrant from the federal court of the Eastern District of Louisiana for failure to appear on a contempt of court charge. He was taken away along with the five animals on board the yacht: two Chihuahuas, a pair of cats, and a magnificent green parrot. As the bird was carried off in its cage, it amused the growing crowd by repeating again and again the two phrases it knew best: “Kill ’em all!” and “Where’s Mullen?”

Within two hours, the officers had released the yacht to Gomez. On board, she found identification and contracts that placed Mullen at an assortment of addresses, purchase contracts for plantations in Louisiana and Mississippi, and billing and shipping invoices for various luxury automobiles along with a zippered bag filled with keys to more than a dozen vehicles. Gomez also discovered two large locked cases bearing the labels “Mastercheck Keypad and Printer” and “Mastercheck Analogue Interface.”

From St. Mary Parish, Mullen was moved to the jail in Jefferson Parish, where he faced criminal charges. Gomez spent the night babysitting the yacht at the Alice C, waiting for a pilot who had been dispatched by ACS to guide the boat back to Berwick, Louisiana.

As a banker, Blum knew all about the check-making machinery inside those cases, including how dangerous it could be in the hands of a criminal. “An ordinary citizen is not supposed to have that equipment,” Blum says. “It’s highly regulated, because these magnetic ink printers make checks that pass automatically through the proofing machines in banks.” Most citizens are unaware that very few checks are ever physically examined, Blum observes: “It’s all electronic. If you’ve got one of those machines that do the correct type of embossing, the checks will clear, the money will be released, and it can take months before the accounts are reconciled and somebody realizes what’s going on.”

Blum was in some ways even more impressed that Mullen had been able to arrange that call to his brokers from inside the New Orleans bank that verified the existence of two accounts containing almost $1.4 million. “I had heard about software that can manipulate the phone system where a false origin appears on caller ID,” Blum says, “but it never occurred to me that somebody I knew might be using it.”

Gomez’s job was done, but it didn’t feel that way. Her investigation had begun as the pursuit of a man on the run from the FBI for 14 years. Along the way, she not only learned that Ryan Eugene Mullen was a false name but also became convinced that the story of his exploits as a cybercriminal was one Mullen himself had invented, along with several others, to obscure his trail. “All along I assumed that part was real, because the Marshals Service is not going to put out a story like that if it isn’t true,” Gomez says. It was easy to see what Mullen had gotten out of the deception: The Marshals Service, the attorneys at United Leasing, and the assorted bill collectors, bounty hunters, and private investigators who were after him had wasted weeks and months looking for versions of Mullen that didn’t exist anywhere but online. She herself had used up most of a week confirming that Ryan Eugene Mullen was a phantom, Gomez noted, and “I wanted to know how it happened.”


In some instances, she found, a single business transaction effected with a false identity had created the official record of a fictional Mullen. Ryan Eugene Mullen, though, had first appeared on City-Data.com, Gomez determined. Ryan Gino Mullen also seemed to have first appeared on the forum. The original postings, just weeks apart, appeared to have different authors, but there were clear links between the two. She assumed that Mullen himself or someone working for him had posted both bulletins to create confusion for anyone Googling him, but she wasn’t able to learn anything more about the posts.

Gomez was “trying to untangle a deep dark web,” Fortino says. “But Mullen created that web so no one can untangle it.”

The most troubling lesson she learned from Mullen, Gomez says, is how readily misleading information can migrate from a posting on an Internet forum to official status. “In a second, what’s false becomes true,” she observes. “All it takes is for one person to put it on the record.” That seems to be what happened with Mullen’s Most Wanted status. A spokeswoman from the US Marshals Service told WIRED that Deputy Sheasby knew nothing about a $2 million cybertheft by Mullen until he was told by “an investigator,” and that he’d passed on the story only because he felt obliged to make other investigators aware of everything he had heard.

The long process of sorting out Mullen’s crimes and finding all his assets had begun. (Mullen declined repeated requests to be interviewed for this story.) The Secret Service acknowledged that an investigation was under way, but the prosecutor on the case in Jefferson Parish, Jody Fortunato, knew nothing beyond that. The Secret Service declined to speak with WIRED about the case.

A couple of weeks into his incarceration, Mullen placed a call from jail to his friend Guthrie. He claimed he’d been beaten up shortly after arriving. Mullen described a litany of charges against him and pledged to make restitution to his victims. After this claim of contrition, though, he confided that three days before his capture he’d had a feeling something was wrong and thought then about moving the yacht to a new location. “I wish I had listened to myself,” he told Guthrie.

Gomez laughs when she hears this. “Maybe he lost track of which self he was, until I came along to remind him.”

gherkin/gherkin at master · alandipert/gherkin · GitHub


Comments:"gherkin/gherkin at master · alandipert/gherkin · GitHub"

URL:https://github.com/alandipert/gherkin/blob/master/gherkin


#!/usr/bin/env bash
# -*- mode: sh; sh-basic-offset: 2 -*-

usage() {
  echo "$0 [OPTION] FILE..."
  echo
  echo "Options:"
  echo "  -e|--eval STR   Evaluate STR"
  echo "  -l|--load FILE  Load and evaluate FILE"
  echo "  -r|--repl       Start a REPL"
  exit 2
}

if [[ -z "$BASH_VERSION" ]] || (( ${BASH_VERSINFO[0]} < 4 )); then
  echo "bash >= 4.0 required" >&2
  exit 1
fi

DEFAULT_IFS="$IFS"

# function return and error slots ##############################################

r=""
e=""

error() {
  [[ -z "$e" ]] && e="$1" || e="$1\n$e"
  r=$NIL
  return 1
}

# pushback reader ##############################################################

pb_max=100
pb_newline="$(printf '\034')"
pb_star="$(printf '\035')"
pb_get="^$"
pb_unget="^$"
history_flag=1

readline() {
  local IFS=$'\n\b' prompt="> " line
  set -f
  read -e -r -p "$prompt" line || exit 0
  pb_get="${pb_get:0:$((${#pb_get}-1))}${line}${pb_newline}$"
  set +f
  unset IFS
  if [[ "$line" =~ ^[[:space:]]*- ]]; then
    echo "warning: lines starting with - aren't stored in history" >&2
  elif [[ -n "$history_flag" ]]; then
    history -s "$line"
  fi
}

getc() {
  local ch
  if (( ${#pb_get} == 2 )); then
    readline
    getc
  else
    ch="${pb_get:1:1}"
    pb_get="^${pb_get:2}"
    if (( pb_max > 0 )); then
      pb_unget="${pb_unget:0:$((${#pb_unget}-1))}"
      pb_unget="^${ch}${pb_unget:1:$((pb_max-1))}$"
    else
      pb_unget="^${ch}${pb_unget:1}"
    fi
    r="$ch"
  fi
}

ungetc() {
  if [[ "$pb_unget" == "^$" ]]; then
    echo "ungetc: nothing more to unget, \$pb_max=$pb_max" >&2 && return 1
  else
    pb_get="^${pb_unget:1:1}${pb_get:1}"
    pb_unget="^${pb_unget:2}"
  fi
}

has_shebangP() { [[ "$(head -1 $1)" =~ ^#! ]]; }

strmap_file() {
  local f="$1" contents
  if has_shebangP "$f"; then
    contents="$(tail -n+2 "$f" | sed -e 's/^[ \t]*//' | tr -s '\n' $pb_newline)"
  else
    contents="$(cat "$f" | sed -e 's/^[ \t]*//' | tr -s '\n' $pb_newline)"
  fi
  mapped_file="(do $contents nil)"
  mapped_file_ptr=0
}

strmap_getc() {
  r="${mapped_file:$((mapped_file_ptr++)):1}"
}

strmap_ungetc() {
  let --mapped_file_ptr
}

_getc=getc
_ungetc=ungetc

# memory layout & gc ###########################################################

cons_ptr=0
symbol_ptr=0
protected_ptr=0
gensym_counter=0
array_cntr=0

declare -A interned_strings
declare -A car
declare -A cdr
declare -A environments
declare -A recur_frames
declare -A recur_fns
declare -A marks
declare -A global_bindings
declare -a symbols
declare -a protected
declare -a mark_acc

heap_increment=1500
cons_limit=$heap_increment
symbol_limit=$heap_increment

tag_marker="$(printf '\036')"
atag="${tag_marker}003"

declare -A type_tags=([000]=integer
                      [001]=symbol
                      [002]=cons
                      [003]=vector
                      [004]=keyword)

type() {
  if [[ "${1:0:1}" == "$tag_marker" ]]; then
    r="${type_tags[${1:1:3}]}"
  else
    r=string
  fi
}

strip_tag() { r="${1:4}"; }

typeP() {
  local obj="$1" tag="$2"
  type "$obj" && [[ $r == "$tag" ]]
}

make_integer() { r="${tag_marker}000${1}"; }

make_keyword() { r="${tag_marker}004${1:1}"; }

intern_symbol() {
  if [[ -n "${interned_strings[$1]}" ]]; then
    r="${interned_strings[$1]}"
  else
    symbol_ptr="$((symbol_ptr + 1))"
    interned_strings["$1"]="${tag_marker}001${symbol_ptr}"
    symbols["$symbol_ptr"]="$1"
    r="${tag_marker}001${symbol_ptr}"
  fi
}

defprim() {
  intern_symbol "$1" && sym_ptr="$r"
  intern_symbol "$(printf '#<primitive:%s>' "$1")" && prim_ptr="$r"
  global_bindings["$sym_ptr"]=$prim_ptr
  r="$prim_ptr"
}

cons() {
  local the_car="$1" the_cdr="$2"
  mark "$the_car"
  mark "$the_cdr"
  while [[ -n "${marks[${tag_marker}002${cons_ptr}]}" ]]; do
    unset marks["${tag_marker}002$((cons_ptr++))"]
  done
  if [[ $cons_ptr == $cons_limit ]]; then
    gc
  fi
  unset environments["${tag_marker}002${cons_ptr}"]
  unset recur_frames["${tag_marker}002${cons_ptr}"]
  unset recur_fns["${tag_marker}002${cons_ptr}"]
  car["${tag_marker}002${cons_ptr}"]="$the_car"
  cdr["${tag_marker}002${cons_ptr}"]="$the_cdr"
  r="${tag_marker}002${cons_ptr}"
  cons_ptr="$((cons_ptr + 1))"
}

gensym() {
  gensym_counter=$((gensym_counter + 1))
  intern_symbol "G__${gensym_counter}"
}

new_array() {
  r="arr$((array_cntr++))"
  declare -a $r
  r="${atag}${r}"
}

vset() {
  strip_tag "$1"
  eval "${r}[${2}]=\"${3}\""
  r="$1"
}

vget() {
  strip_tag "$1"
  eval "r=\${${r}[${2}]}"
}

count_array() {
  strip_tag "$1"
  eval "r=\${#${r}[@]}"
}

append() {
  local i
  strip_tag "$1"
  eval "i=\${#${r}[@]}"
  eval "${r}[${i}]=\"${2}\""
  r="$1"
}

append_all() {
  strip_tag "$1"
  local a1="$r"
  strip_tag "$2"
  local a2="$r"
  local len1 len2
  eval "len1=\${#${a1}[@]}"
  eval "len2=\${#${a2}[@]}"
  local i=0
  while (( i < len2 )); do
    eval "${a1}[((${i} + ${len1}))]=\"\${${a2}[${i}]}\""
    ((i++))
  done
  r="$1"
}

prepend() {
  local i len
  strip_tag "$2"
  eval "len=\${#${r}[@]}"
  while (( len > 0 )); do
    eval "${r}[${len}]=\"\${${r}[((len - 1))]}\""
    ((len--))
  done
  eval "${r}[0]=\"$1\""
  r="$2"
}

dup() {
  new_array
  local aptr="$r"
  strip_tag "$aptr"
  local narr="$r"
  strip_tag "$1"
  local len
  eval "len=\${#${r}[@]}"
  local i=0
  while (( i < len )); do
    eval "${narr}[${i}]=\"\${${r}[${i}]}\""
    ((i++))
  done
  r="$aptr"
}

concat() {
  dup "$1"
  append_all "$r" "$2"
}

vector() {
  local v="$2"
  if [[ "$EMPTY" == "$v" || -z "$v" || "$NIL" == "$v" ]]; then
    new_array
    v="$r"
  fi
  prepend $1 $v
}

protect() {
  protected_ptr="$((protected_ptr + 1))"
  protected["$protected_ptr"]="$1"
}

unprotect() { protected_ptr="$((protected_ptr - 1))"; }

acc_count=0

mark_seq() {
  local object="$1"
  while typeP "$object" cons && [[ -z "${marks[$object]}" ]]; do
    marks["$object"]=1
    mark_acc[acc_count++]="${car[$object]}"
    object="${cdr[$object]}"
  done
  if typeP "$object" vector; then
    count_array "$object"
    local i sz="$r"
    for (( i=0; i<sz; i++ )); do
      vget "$object" $i
      mark_acc[acc_count++]="$r"
    done
  fi
}

mark() {
  acc_count=0
  mark_seq "$1"
  local i
  for (( i=0; i<${#mark_acc[@]}; i++ )); do
    mark_seq "${mark_acc[$i]}"
  done
  mark_acc=()
}

gc() {
  echo "GC..." >&2
  IFS="$DEFAULT_IFS"
  mark "$current_env"
  for k in "${!environments[@]}"; do mark "${environments[$k]}"; done
  for k in "${!protected[@]}"; do mark "${protected[$k]}"; done
  for k in "${!stack[@]}"; do mark "${stack[$k]}"; done
  for k in "${!global_bindings[@]}"; do mark "${global_bindings[$k]}"; done
  cons_ptr=0
  while [[ -n "${marks[${tag_marker}002${cons_ptr}]}" ]]; do
    unset marks["${tag_marker}002$((cons_ptr++))"]
  done
  if [[ $cons_ptr == $cons_limit ]]; then
    echo "expanding heap..." >&2
    cons_limit=$((cons_limit + heap_increment))
  fi
}

# reader #######################################################################

interpret_token() {
  [[ "$1" =~ ^-?[[:digit:]]+$ ]] \
    && r=integer && return
  [[ "$1" =~ ^:([[:graph:]]|$pb_star)+$ ]] \
    && r=keyword && return
  [[ "$1" =~ ^([[:graph:]]|$pb_star)+$ ]] \
    && r=symbol && return
  return 1
}

read_token() {
  local token=""
  while $_getc; do
    if [[ "$r" =~ ('('|')'|'['|']'|[[:space:]]|$pb_newline|,) ]]; then
      $_ungetc && break
    else
      token="${token}${r}"
    fi
  done
  [ -z "$token" ] && return 1
  if interpret_token "$token"; then
    case "$r" in
      symbol) intern_symbol "$token" && return ;;
      integer) make_integer "$token" && return ;;
      keyword) make_keyword "$token" && return ;;
      *) error "unknown token type: '$r'"
    esac
  else
    error "unknown token: '${token}'"
  fi
}

skip_blanks() {
  $_getc
  while [[ "$r" =~ ([[:space:]]|$pb_newline|,) ]]; do $_getc; done
  $_ungetc
}

skip_comment() {
  $_getc
  while [[ "$r" != "$pb_newline" ]]; do $_getc; done
}

read_list() {
  local ch read1 read2
  if lisp_read; then
    read1="$r"
  else
    $_getc
    r="$NIL"
    return
  fi
  $_getc && ch="$r"
  case "$ch" in
    ".")
      lisp_read && read2="$r"
      skip_blanks
      $_getc
      cons "$read1" "$read2"
      ;;
    ")") cons "$read1" $NIL ;;
    *)
      $_ungetc
      read_list
      cons "$read1" "$r"
  esac
}

read_vector() {
  local ch read1
  if lisp_read; then
    read1="$r"
  else
    getc
    r="$EMPTY"
    return
  fi
  skip_blanks
  getc
  if [[ "$r" == "]" ]]; then
    vector "$read1" "$EMPTY"
  else
    ungetc
    skip_blanks
    read_vector
    vector "$read1" "$r"
  fi
}

read_string() {
  local s=""
  while true; do
    $_getc
    if [[ "$r" == "\\" ]]; then
      $_getc
      if [[ "$r" == "\"" ]]; then
        s="${s}${r}"
      else
        s="${s}\\${r}"
      fi
    elif [[ "$r" == "\"" ]]; then
      break
    else
      s="${s}${r}"
    fi
  done
  r="$(echo "$s" | tr "$pb_star" '*')"
}

lisp_read() {
  local ch read1 read2 read3 read4
  skip_blanks; $_getc; ch="$r"
  case "$ch" in
    "\"")
      read_string
      ;;
    "(")
      read_list
      ;;
    "[")
      read_vector
      ;;
    "'")
      lisp_read && read1="$r"
      cons "$read1" $NIL && read2="$r"
      cons $QUOTE "$read2"
      ;;
    ";")
      skip_comment
      lisp_read
      ;;
    *)
      $_ungetc
      read_token
  esac
}

string_list() {
  local c="$1" ret
  shift
  if [[ "$1" == "" ]]; then
    cons $c $NIL && ret="$r"
  else
    string_list $*
    cons $c $r && ret="$r"
  fi
  r="$ret"
}

# printer ######################################################################

printing=

escape_str() {
  local i c
  r=""
  for (( i=0; i < ${#1}; i++ )); do
    c="${1:$i:1}"
    case "$c" in
      \") r="${r}\\\"" ;;
      \\) r="${r}\\\\" ;;
      *) r="${r}${c}"
    esac
  done
}

str_arr() {
  local ret="["
  count_array "$1"
  local len=$r
  if (( 0 != len )); then
    vget $1 0
    str "$r"
    ret="${ret}${r}"
    for (( i=1; i < $len; i++ )); do
      vget $1 $i
      str "$r"
      ret="${ret} ${r}"
    done
  fi
  r="${ret}]"
}

str_list() {
  local lst="$1"
  local ret
  if [[ "${car[$lst]}" == $FN ]]; then
    strip_tag "$lst" && printf -v r '#<function:%s>' "$r"
  else
    ret="("
    str "${car[$lst]}"
    ret="${ret}${r}"
    lst="${cdr[$lst]}"
    while typeP "$lst" cons; do
      str "${car[$lst]}"
      ret="${ret} ${r}"
      lst="${cdr[$lst]}"
    done
    if [[ "$lst" != $NIL ]]; then
      str "$lst"
      ret="${ret} . ${r}"
    fi
    r="${ret})"
  fi
}

str() {
  type "$1"
  case "$r" in
    integer) strip_tag "$1" && printf -v r '%d' "$r" ;;
    cons) str_list "$1" ;;
    vector) str_arr "$1" ;;
    symbol) strip_tag "$1" && printf -v r '%s' "$(echo "${symbols[$r]}" | tr $pb_star "*")" ;;
    keyword) strip_tag "$1" && printf -v r ':%s' "$r" ;;
    *)
      if [[ -n $printing ]]; then
        escape_str "$1"
        printf -v r '"%s"' "$r"
      else
        printf -v r '%s' "$1"
      fi
      ;;
  esac
}

prn() {
  printing=1
  str "$1"
  printing=
  printf '%s' "$r" && echo
}

# environment & control ########################################################

frame_ptr=0
stack_ptr=0
declare -a stack

intern_symbol '&' && AMP="$r"
intern_symbol 'nil' && NIL="$r"
intern_symbol 't' && T="$r"

global_bindings[$NIL]="$NIL"
global_bindings[$T]="$T"
car[$NIL]="$NIL"
cdr[$NIL]="$NIL"

new_array && EMPTY="$r"

current_env="$NIL"

intern_symbol 'quote' && QUOTE=$r
intern_symbol 'fn' && FN=$r
intern_symbol 'if' && IF=$r
intern_symbol 'set!' && SET_BANG=$r
intern_symbol 'def' && DEF=$r
intern_symbol 'do' && DO=$r
intern_symbol 'recur' && RECUR=$r
intern_symbol 'binding' && BINDING=$r

declare -A specials
specials[$QUOTE]=1
specials[$FN]=1
specials[$IF]=1
specials[$SET_BANG]=1
specials[$DEF]=1
specials[$DO]=1
specials[$RECUR]=1
specials[$BINDING]=1

defprim 'eq?' && EQ=$r
defprim 'nil?' && NILP=$r
defprim 'car' && CAR=$r
defprim 'cdr' && CDR=$r
defprim 'cons' && CONS=$r
defprim 'list' && LIST=$r
defprim 'vector' && VECTOR=$r
defprim 'keyword' && KEYWORD=$r
defprim 'eval' && EVAL=$r
defprim 'apply' && APPLY=$r
defprim 'read' && READ=$r
defprim '+' && ADD=$r
defprim '-' && SUB=$r
defprim "$pb_star" && MUL=$r
defprim '/' && DIV=$r
defprim 'mod' && MOD=$r
defprim '<' && LT=$r
defprim '>' && GT=$r
defprim 'cons?' && CONSP=$r
defprim 'symbol?' && SYMBOLP=$r
defprim 'number?' && NUMBERP=$r
defprim 'string?' && STRINGP=$r
defprim 'fn?' && FNP=$r
defprim 'gensym' && GENSYM=$r
defprim 'random' && RAND=$r
defprim 'exit' && EXIT=$r
defprim 'println' && PRINTLN=$r
defprim 'sh' && SH=$r
defprim 'sh!' && SH_BANG=$r
defprim 'load-file' && LOAD_FILE=$r
defprim 'gc' && GC=$r
defprim 'error' && ERROR=$r
defprim 'type' && TYPE=$r
defprim 'str' && STR=$r
defprim 'split' && SPLIT=$r
defprim 'getenv' && GETENV=$r

eval_args() {
  local args="$1"
  type "$args"
  if [[ "$r" == cons ]]; then
    while [[ "$args" != $NIL ]]; do
      lisp_eval "${car[$args]}"
      stack[$((stack_ptr++))]="$r"
      args="${cdr[$args]}"
    done
  elif [[ "$r" == vector ]]; then
    count_array "$args"
    local i len="$r"
    for (( i=0; i<len; i++ )); do
      vget "$args" "$i"
      lisp_eval "$r"
      stack[$((stack_ptr++))]="$r"
    done
  elif [[ "$1" != "$NIL" ]]; then
    str "$args"
    error "Unknown argument type: $r"
  fi
}

listify_args() {
  local p=$((stack_ptr - 1)) ret=$NIL stop
  [[ -z "$1" ]] && stop=$frame_ptr || stop="$1"
  while (( stop <= p )); do
    cons "${stack[$p]}" "$ret" && ret="$r"
    p=$((p - 1))
  done
  r="$ret"
}

vectify_args() {
  local stop=$((stack_ptr - 1)) ret
  new_array
  ret="$r"
  [[ -z "$1" ]] && p=$frame_ptr || p="$1"
  while (( p <= stop )); do
    append "$ret" "${stack[$((p++))]}"
  done
  r="$ret"
}

acons() {
  local key="$1" datum="$2" a_list="$3"
  cons "$key" "$datum" && cons "$r" "$a_list"
}

aget() {
  local key="$1" a_list="$2"
  while [[ "$a_list" != $NIL ]]; do
    if [[ "${car[${car[$a_list]}]}" == "$key" ]]; then
      r="${cdr[${car[$a_list]}]}" && return 0
    fi
    a_list="${cdr[$a_list]}"
  done
  r=$NIL && return 1
}

analyze() {
  local fn="$1" body="$2" env="$3"
  while [[ "$body" != "$NIL" ]]; do
    type "${car[$body]}"
    if [[ "$r" == cons ]]; then
      case "${car[${car[$body]}]}" in
        $FN) environments["${car[$body]}"]="$env" ;;
        $RECUR)
          recur_fns["${car[$body]}"]="$fn"
          recur_frames["${car[$body]}"]="$frame_ptr"
          ;;
        *) analyze "$fn" "${car[$body]}" "$env" ;;
      esac
    fi
    body="${cdr[$body]}"
  done
}

copy_list() {
  local lst="$1" copy="$NIL" prev="$NIL" curr="$NIL"
  while [[ "$lst" != "$NIL" ]]; do
    cons "${car[$lst]}" "$NIL" && curr="$r"
    if [[ "$copy" == "$NIL" ]]; then
      copy="$curr"
    else
      cdr["$prev"]="$curr"
    fi
    prev="$curr"
    lst="${cdr[$lst]}"
  done
  r="$copy"
}

apply_user() {
  local fn="$1"
  local body="${cdr[${cdr[$fn]}]}"
  local params="${car[${cdr[$fn]}]}"
  local p="$frame_ptr"
  local ret="$NIL"
  local old_env
  [[ -z "${environments[$fn]}" ]] && local env=$NIL || local env="${environments[$fn]}"
  type "$params"
  local ptype="$r"
  if [[ "$ptype" == "cons" ]]; then
    while [[ "$params" != $NIL && "${car[$params]}" != $AMP ]]; do
      acons "${car[$params]}" "${stack[$((p++))]}" "$env" && env="$r"
      params="${cdr[$params]}"
    done
    if [[ "${car[$params]}" == $AMP ]]; then
      listify_args "$p" && local more="$r"
      acons "${car[${cdr[$params]}]}" "$more" "$env" && env="$r"
    fi
  elif [[ "$ptype" == "vector" ]]; then
    local i=1 len
    count_array "$params"
    len="$r"
    vget $params 0
    while (( i <= len )) && [[ "$r" != $AMP ]]; do
      acons "$r" "${stack[$((p++))]}" "$env" && env="$r"
      vget $params $((i++))
    done
    if [[ "$r" == "$AMP" ]]; then
      listify_args "$p" && local more="$r"
      vget $params $i
      acons "$r" "$more" "$env" && env="$r"
    fi
  elif [[ "$params" != $NIL ]]; then
    error "Illegal type (${ptype}) for params in function"
    return 1
  fi
  analyze "$fn" "$body" "$env"
  old_env="$current_env"
  current_env="$env"
  do_ "$body" && ret="$r"
  current_env="$old_env"
  r="$ret"
}

eval_file() {
  strmap_file "$1"
  _getc=strmap_getc
  _ungetc=strmap_ungetc
  lisp_read
  _getc=getc
  _ungetc=ungetc
  protect "$r"
  lisp_eval "$r"
  unprotect
}

check_numbers() {
  while [[ -n "$1" ]]; do
    if ! typeP "$1" integer; then
      str "$1"
      error "'$r' is not a number"
      return 1
    fi
    shift
  done
}

rev_str() {
  local i rev=""
  for (( i=0; i < ${#1}; i++ )); do
    rev="${1:$i:1}${rev}"
  done
  r="$rev"
}

apply_primitive() {
  local primitive="$1"
  local arg0="${stack[$frame_ptr]}"
  local arg1="${stack[$((frame_ptr+1))]}"
  local arg2="${stack[$((frame_ptr+2))]}"
  r=$NIL
  case $primitive in
    $EQ) [[ "$arg0" == "$arg1" ]] && r="$T" ;;
    $NILP) [[ "$arg0" == $NIL ]] && r="$T" ;;
    $CAR) r="${car[$arg0]}" ;;
    $CDR) r="${cdr[$arg0]}" ;;
    $CONS) cons "$arg0" "$arg1" ;;
    $LIST) listify_args ;;
    $VECTOR) vectify_args ;;
    $KEYWORD)
      type "$arg0"
      case $r in
        string) make_keyword "$arg0" ;;
        keyword) r="$arg0" ;;
        *)
          strip_tag "$arg0"
          error "Unable to make keyword from: $r"
          r="$NIL"
      esac
      ;;
    $STR)
      listify_args && strs="$r"
      local ret=""
      while [[ "$strs" != "$NIL" ]]; do
        str "${car[$strs]}"
        ret="${ret}$r"
        strs="${cdr[$strs]}"
      done
      r="$ret"
      ;;
    $SPLIT)
      local i ret="$NIL" last=0
      rev_str "$arg1" && local rev="$r"
      for (( i=0; i < ${#rev}; i++ )); do
        if [[ "${rev:$i:1}" == "$arg0" ]]; then
          rev_str "${rev:$last:$((i - last))}"
          cons "$r" "$ret" && ret="$r"
          last="$((i + 1))"
        fi
      done
      if (( last != 0 )); then
        rev_str "${rev:$last:$((${#rev} - last))}"
        cons "$r" "$ret" && ret="$r"
      fi
      r="$ret"
      ;;
    $GETENV)
      [[ -n "$arg0" ]] && eval "r=\$${arg0}"
      [[ -z "$r" ]] && r=$NIL
      ;;
    $EVAL) lisp_eval "$arg0" ;;
    $READ) lisp_read ;;
    $MOD)
      if check_numbers "$arg0" "$arg1"; then
        strip_tag "$arg0" && local x="$r"
        strip_tag "$arg1" && local y="$r"
        make_integer $((x % y))
      fi
      ;;
    $LT)
      if check_numbers "$arg0" "$arg1"; then
        strip_tag "$arg0" && local x="$r"
        strip_tag "$arg1" && local y="$r"
        (( x < y )) && r=$T || r=$NIL
      fi
      ;;
    $GT)
      if check_numbers "$arg0" "$arg1"; then
        strip_tag "$arg0" && local x="$r"
        strip_tag "$arg1" && local y="$r"
        (( x > y )) && r=$T || r=$NIL
      fi
      ;;
    $CONSP) typeP "$arg0" cons && r=$T ;;
    $SYMBOLP) typeP "$arg0" symbol && r=$T || r=$NIL ;;
    $NUMBERP) typeP "$arg0" integer && r=$T || r=$NIL ;;
    $STRINGP) typeP "$arg0" string && r=$T || r=$NIL ;;
    $FNP) typeP "$arg0" cons && [[ "${car[$arg0]}" == $FN ]] && r=$T ;;
    $GC) gc && r=$NIL ;;
    $GENSYM) gensym ;;
    $ADD)
      if check_numbers "$arg0" "$arg1"; then
        strip_tag "$arg0" && local x="$r"
        strip_tag "$arg1" && local y="$r"
        make_integer $((x + y))
      fi
      ;;
    $SUB)
      if check_numbers "$arg0" "$arg1"; then
        strip_tag "$arg0" && local x="$r"
        strip_tag "$arg1" && local y="$r"
        make_integer $((x - y))
      fi
      ;;
    $APPLY)
      local old_frame_ptr=$frame_ptr
      frame_ptr=$stack_ptr
      type "$arg1"
      case $r in
        cons)
          while typeP "$arg1" cons; do
            stack[$((stack_ptr++))]="${car[$arg1]}"
            arg1="${cdr[$arg1]}"
          done
          [[ $arg1 != $NIL ]] && error "Bad argument to apply: not a proper list"
          ;;
        vector)
          count_array "$arg1"
          local len="$r"
          for (( i=0; i<len; i++ )); do
            vget "$arg1" "$i"
            stack[$((stack_ptr++))]="$r"
          done
          ;;
        *) error "Bad argument to apply: not a list"
      esac
      if [[ -z "$e" ]]; then
        apply "$arg0"
      fi
      stack_ptr=$frame_ptr
      frame_ptr=$old_frame_ptr
      ;;
    $ERROR)
      printf 'lisp error: ' >&2
      prn "$arg0" >&2
      ;;
    $TYPE)
      if [[ "$arg0" == $NIL ]]; then
        r=$NIL
      else
        type "$arg0"
        if [[ "$r" == cons ]] && [[ "${car[$arg0]}" == $FN ]]; then
          intern_symbol "function"
        else
          intern_symbol "$r"
        fi
      fi
      ;;
    $MUL)
      if check_numbers "$arg0" "$arg1"; then
        strip_tag "$arg0" && local x="$r"
        strip_tag "$arg1" && local y="$r"
        make_integer $((x * y))
      fi
      ;;
    $DIV)
      local x y
      if check_numbers "$arg0" "$arg1"; then
        strip_tag $arg0 && x=$r
        strip_tag $arg1 && y=$r
        make_integer $((x / y))
      fi
      ;;
    $RAND)
      if check_numbers "$arg0"; then
        strip_tag $arg0
        make_integer "$((RANDOM % r))"
      fi
      ;;
    $PRINTLN)
      listify_args && local to_print="$r"
      while [[ "$to_print" != "$NIL" ]]; do
        type "${car[$to_print]}"
        case "$r" in
          string)
            echo -e "${car[$to_print]}"
            ;;
          *) prn "${car[$to_print]}"
            ;;
        esac
        to_print="${cdr[$to_print]}"
      done
      r="$NIL"
      ;;
    $SH)
      local ret
      eval "ret=\$(${arg0})"
      IFS=$'\n'
      string_list $(for i in $ret; do echo "$i"; done)
      IFS="$DEFAULT_IFS"
      ;;
    $SH_BANG)
      eval "${arg0}"
      [[ $? == 0 ]] && r=$T || r=$NIL
      ;;
    $LOAD_FILE)
      local f
      if [[ -r ${arg0} ]]; then
        f="${arg0}"
      elif [[ -r "${arg0}.gk" ]]; then
        f="${arg0}.gk"
      fi
      if [[ "$f" != "" ]]; then
        eval_file "$f"
      else
        echo "File not found: ${arg0}" >&2
        r="$NIL"
      fi
      ;;
    $EXIT)
      strip_tag $arg0
      exit "$r"
      ;;
    *) strip_tag "$1" && error "unknown primitive function type: ${symbols[$r]}"
      return 1
  esac
}

apply() {
  if [[ "${car[$1]}" == "$FN" ]]; then
    apply_user "$1"
  else
    apply_primitive "$1"
  fi
}

add_bindings() {
  type "$1"
  if [[ $r == cons ]]; then
    local pairs="$1" val
    while [[ "$pairs" != $NIL && "${cdr[$pairs]}" != $NIL ]]; do
      lisp_eval "${car[${cdr[$pairs]}]}" && val="$r"
      if [[ -n "$e" ]]; then return 1; fi
      acons "${car[$pairs]}" "$val" "$current_env" && current_env="$r"
      pairs="${cdr[${cdr[$pairs]}]}"
    done
    if [[ "$pairs" != $NIL ]]; then
      error "Bad bindings. Must be an even number of binding forms."
      return 1
    fi
  elif [[ "$r" == vector ]]; then
    count_array "$1"
    local i v len="$r"
    if (( len % 2 == 0 )); then
      for (( i=0; i<len; )); do
        vget "$1" $((i++))
        v="$r"
        vget "$1" $((i++))
        lisp_eval "$r"
        if [[ -n "$e" ]]; then return 1; fi
        acons "$v" "$r" "$current_env" && current_env="$r"
      done
    else
      error "Bad bindings. Must be an even number of binding forms."
    fi
  else
    error "bindings not available."
  fi
}

do_() {
  local body="$1" result="$NIL"
  while [[ "$body" != $NIL ]]; do
    lisp_eval "${car[$body]}" && result="$r"
    body="${cdr[$body]}"
  done
  if typeP "$result" cons && [[ "${car[$result]}" == "$FN" ]]; then
    copy_list "$result"
    environments["$r"]="$current_env"
  else
    r="$result"
  fi
}

eval_special() {
  local special="$1"
  local op="${car[$1]}"
  local args="${cdr[$1]}"
  local arg0="${car[$args]}"
  local arg1="${car[${cdr[$args]}]}"
  local arg2="${car[${cdr[${cdr[$args]}]}]}"
  case $op in
    $QUOTE) r="$arg0" ;;
    $DO) do_ $args ;;
    $FN) r=$special ;;
    $IF)
      lisp_eval "$arg0"
      [[ "$r" != "$NIL" ]] && lisp_eval "$arg1" || lisp_eval "$arg2"
      ;;
    $SET_BANG)
      if [[ -n "${global_bindings[$arg0]}" ]]; then
        lisp_eval "$arg1" && global_bindings[$arg0]="$r"
      else
        strip_tag "$arg0" && error "unbound variable: ${symbols[$r]}"
      fi
      ;;
    $RECUR)
      frame_ptr="${recur_frames[$1]}"
      stack_ptr=$frame_ptr
      while [[ "$args" != $NIL ]]; do
        lisp_eval "${car[$args]}"
        stack[$((stack_ptr++))]="$r"
        args="${cdr[$args]}"
      done
      current_env="${environments[$1]}"
      apply_user "${recur_fns[$1]}"
      ;;
    $DEF)
      lisp_eval "$arg1" && global_bindings["$arg0"]=$r
      r="$arg0"
      ;;
    $BINDING)
      local binding_body="${cdr[$args]}"
      local old_env="$current_env"
      add_bindings $arg0
      if [[ -z "$e" ]]; then
        do_ $binding_body
      fi
      current_env="$old_env"
      ;;
    *)
      strip_tag $op
      error "eval_special: unknown form: ${symbols[$r]}"
  esac
}

eval_function() {
  local op="${car[$1]}" eval_op
  local args="${cdr[$1]}"
  local old_frame_ptr=$frame_ptr
  frame_ptr=$stack_ptr
  lisp_eval "$op" && eval_op="$r"
  if [[ -z "$e" ]]; then
    protect "$eval_op"
    eval_args "$args"
    if [[ -z "$e" ]]; then
      apply "$eval_op"
    fi
    unprotect
  fi
  stack_ptr=$frame_ptr
  frame_ptr=$old_frame_ptr
}

lisp_eval() {
  type $1
  case $r in
    symbol)
      [[ "$1" == "$NIL" ]] && r="$NIL" && return
      [[ "$1" == "$T" ]] && r="$T" && return
      aget "$1" "$current_env" && return
      if [[ -n "${global_bindings[$1]}" ]]; then
        r="${global_bindings[$1]}"
      else
        strip_tag "$1" && error "unable to resolve ${symbols[$r]}"
      fi
      ;;
    cons)
      if [[ -n "${specials[${car[$1]}]}" ]]; then
        eval_special "$1"
      else
        eval_function "$1"
      fi
      ;;
    vector)
      local old_frame_ptr=$frame_ptr
      local old_stack_ptr=$stack_ptr
      frame_ptr=$stack_ptr
      eval_args "$1"
      if [[ -z "$e" ]]; then
        vectify_args
      fi
      stack_ptr=$old_stack_ptr
      frame_ptr=$old_frame_ptr
      ;;
    integer) r=$1 ;;
    string) r="$1" ;;
    keyword) r="$1" ;;
    *)
      error "lisp_eval: unrecognized type"
      return 1
      ;;
  esac
}

# repl #########################################################################

init_history() {
  intern_symbol "${pb_star}1" && hist1="$r"
  intern_symbol "${pb_star}2" && hist2="$r"
  intern_symbol "${pb_star}3" && hist3="$r"
  global_bindings["$hist1"]="$NIL"
  global_bindings["$hist2"]="$NIL"
  global_bindings["$hist3"]="$NIL"
}

update_history() {
  global_bindings["$hist3"]="${global_bindings[$hist2]}"
  global_bindings["$hist2"]="${global_bindings[$hist1]}"
  global_bindings["$hist1"]="$r"
}

repl() {
  init_history
  while true; do
    e="" # clear existing error state
    lisp_read
    [[ -n "$e" ]] && printf "read error: $e\n" >&2
    protect "$r"
    lisp_eval "$r"
    update_history
    [[ -n "$e" ]] && printf "eval error: $e\n" >&2
    prn "$r"
    [[ -n "$e" ]] && printf "print error: $e\n" >&2
    unprotect
  done
}

# start ########################################################################

eval_string() {
  local str="$1"
  lisp_read <<<"(do $str)"
  protect "$r"
  lisp_eval "$r"
  unprotect
}

# Start REPL if no arguments
[ -z "$*" ] && repl

# Process parameters
while [ "$*" ]; do
  param=$1; shift; OPTARG=$1
  case $param in
    -e|--eval) eval_string "$OPTARG"; shift
               [[ $r != $NIL ]] && prn $r
               ;;
    -l|--load) eval_file "$OPTARG"; shift
               [[ $r != $NIL ]] && prn $r
               ;;
    -t|--test) ;;
    -r|--repl) repl ;;
    -*) usage ;;
    *) eval_file "$param"
               [[ $r != $NIL ]] && prn $r
               ;;
  esac
done
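
For reference, the usage() function at the top of the script documents the flags it accepts. Assuming bash 4+ and an executable gherkin in the current directory, a session might look like the following (an illustrative transcript, not one taken from the repository):

$ ./gherkin -e '(+ 1 2)'
3
$ ./gherkin -r
> (car '(a b c))
a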

the kimono blog


Comments:"the kimono blog"

URL:http://kimonify.kimonolabs.com/kimload?url=http%3A%2F%2Fwww.kimonolabs.com%2Fwelcome.html


Web scraping. It's something we all love to hate. If you're a developer, you know what we're talking about. You wish the data you needed to power your app, model or visualization was available via API. But, most of the time it's not. So, you decide to build a web scraper. You write a ton of code, employ a laundry list of libraries and techniques, all for something that's by definition unstable, has to be hosted somewhere, and needs to be maintained over time.

We've felt this pain, many times over. So we built kimono to do all this heavy lifting for us. We actually decided to go a step further and make it easy enough for anyone to use, not just developers. If getting access to structured data from around the web is so interesting to us, why wouldn't it be interesting to everyone else, even people who can't code? Our first step toward solving this problem is not just to make building a web scraper easy, but to add a simple app builder feature, letting users see their data in an app vs. raw JSON. In fact, my Mom is using an app she built with kimono right now to check ski conditions near Lake Tahoe.

So, what does a web scraper for anyone really look like? Actually, you're already using it (unless you're on a mobile device, in which case you should totally come back using your computer). Notice that toolbar at the top of the screen? That's the kimono toolbar. It shows information about the data that you're extracting from the page. Go ahead and try kimonifying the table below. Click something and kimono will suggest similar data elements to you. You can add new datatypes by clicking + in the toolbar and preview your data output in JSON or CSV by clicking the icons at the top right.

In addition to handling click-selections on particular HTML elements using CSS selectors, we deal with data that's not well formed using regular expressions. In the list below (from the same site) you can see that each list item has no inherent structure to it. You might however want to select different pieces of the data and save them as different data types. Try using your mouse to select just the movie names (e.g. "The Phantom Menace"), then create a new datatype and then select just the year. You might have to make a couple of selections before kimono fully learns the pattern; a rough sketch of the kind of pattern it has to learn appears after the list.

  • Episode I = The Phantom Menace (1999)
  • Episode II = Attack of the Clones (2002)
  • Episode III = Revenge of the Sith (2005)
  • Episode IV = Star Wars: A New Hope (1977)
  • Episode V = The Empire Strikes Back (1980)
  • Episode VI = Return of the Jedi (1983)
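
For intuition, the structure to be learned from a list like this is roughly what a developer would hand-write as a regular expression. Here is a minimal Python sketch; the regex and field names are my own illustration, not kimono's actual internals:

import re

lines = [
    "Episode I = The Phantom Menace (1999)",
    "Episode IV = Star Wars: A New Hope (1977)",
]

# Two capture groups play the role of two kimono "datatypes":
# the movie name and the four-digit year.
pattern = re.compile(r"^Episode [IVX]+ = (?P<name>.+) \((?P<year>\d{4})\)$")

for line in lines:
    match = pattern.match(line)
    if match:
        print(match.group("name"), match.group("year"))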

Notice how kimono self-organized the data into two different collections for you? We felt it important to keep things straightforward and allow users to just point and click while we figure out the underlying structure automatically. As we worked through the problem we defined a set of design principles like these that would underpin an ideal solution. We touch upon these below (more detail in blog posts to come). A good solution must:

  • Be accessible straight from the browser. We wanted to create something that could be used anytime, anywhere by anyone. This meant no downloads, no installs, no system requirements. The challenge was to create a way for triggering kimono on any page you are looking at in your browser of choice.
  • Recognize relationships amongst data like a human would. Interpreting the HTML tags and DOM hierarchy often doesn't yield the true associations and structure of the data. We therefore needed to build in enough intelligence to figure out what you really want, and how the elements you've chosen are truly associated, without you having to think about a data model or underlying structure.
  • Be flexible enough to work across different sites and data formats. Each web site is different and presents its own unique challenges, from truly malformed HTML to dumps of Microsoft excel / word documents and pages serving up data dynamically with AJAX.
  • Run and be hosted in the cloud. In order to depend on the scraped data, we'd need to be able to store caches of run jobs on cloud hosted servers and send alerts when things go wrong.

Kimono is the first iteration of our solution to the scraping problem. It's still in beta and we want to invite you to check it out. We would love your feedback - for bugs, feature suggestions or anything else. Your input will help us make kimono better, so you never have to write a web scraper again. Hope to see you kimonifying pages on the web soon!

– Pratap and Ryan

Try it out for free

The Productivity Cycle - Alex Sexton


Comments:"The Productivity Cycle - Alex Sexton"

URL:https://alexsexton.com/blog/2014/1/the-productivity-cycle/


People are interesting. We know so little about ourselves compared to what we’d like to think we know. We’re all subtly different even though we’re, on a whole, overwhelmingly predictable. There are copious studies to back up “average” data from people, and plenty of arm-chair anthropologists and psychologists that have very nice theories on how we tick. But most of us aren’t ‘average,’ and perhaps some of us tock more than we tick (mark this as the first of many “stretches” in this post).

I’d like to take the chance, while we’re still mostly clueless, to write some of my non-scientific theories on cognitive ability and “focus” (the noun) in the context of creating and building things (or “shipping,” as it were).

Motive and Privilege

I see a thin thread woven through everything that I do, and everything that I see my peers do.

Legacy.

We’re all creative, and many of us want to “build things.” I hear it often in interviews, and over beers, that it’s just built into the core of many of us to want to “create something from nothing.” It often takes shape as some other pleasant-sounding turn of phrase, but the “need to create” seems to be innate in all but the most blissful of us. In my experience, this can often be traced back to the desire for someone to ‘make their mark,’ or as I previously alluded, the desire to pursue a legacy.

I’m all for this. This feeling is built in to my DNA as well. And while I would hope to have purely selfless reasons for wanting to create things, I’m certain that my ego, and my human-nature drive me to want to live longer than my bones.

I can’t help but lead with “ego” and “legacy” because the entire ability to create something from nothing (programming) and to get disproportionally rewarded for doing so (programming salaries) comes along with more than a touch of blind privilege. It’s a pretty good spot to be in to be hacking your mental focus levels so you can build bigger and better websites. Blindness to our privilege is mostly to be expected as long as we fight to improve our own awareness and adjust our actions appropriately. Read on now, forewarned of my ego, blindness to privilege, and extreme lack of brevity.

Caffeine is a Zero-Sum Game

Ostensibly, this is a graph of my potential “focus” levels during a day.

I read a fascinating blog post several years ago by Arvind Narayanan called The Calculus of Caffeine Consumption. It was pretty eye-opening for me to see my “focus” levels throughout the day graphed out as a sine wave. Naturally, it’s a massive over-simplification, but in my personal experience, not an entirely incorrect approximation of my energy levels in a day. I am tired, perk up, get hungry and eat and dip down, then hit a stride, rinse and repeat. It’s not a sine wave, but it’s a wave alright.

Like my Uncle Ray always says, “Any wave is sinusoidal with a sufficiently low sampling rate!”, though I think eventually it becomes a straight line. (Which I guess is a wave with an amplitude of zero?)

Our wave of “focus”/energy has all the normal properties of a wave: a wavelength, a period, and an amplitude. The premise of the article is that caffeine, a stimulant, is often consumed during the low points in our day. So we drink coffee when we wake up or when we feel tired after lunch in order to boost our ability to concentrate and sometimes even function. The effect of this decision that is lost on us, is that this reduces the total amplitude of our “focus” wave.

In other words, by consuming caffeine to reduce how low we get during the low points, we inherently reduce our high points as well.

Narayanan states that this type of consumption actually works pretty well for people who are trying to keep from falling below a threshold of "focus" or energy. Consider a construction worker, data-entry employee, truck driver, or similar, where the time put into the task is important, and dipping below a certain level of energy would be dangerous/deadly.

Creative workers, however, don’t have the same limitations. They often need “a moment of clarity” to spark their work, or to break out of “writer’s block.” They would want the absolute highest amplitude of focus, regardless of the consequences on the downside, and to be working while at their highs.

Narayanan, a true hacker spirit (read: CS Ph.D. and Assistant Professor at Princeton), attempts to exploit this relationship to his advantage. Could he use caffeine to increase the amplitude of his “focus?” A conclusive answer here would take quite a few more double blind studies, but in my experience, and seemingly in Dr. Narayanan’s as well, the answer is a resounding “yes.”

Drink a latte 30 minutes before a high point, work as hard as you can, and then use the warmth of your laptop to take a nap a few hours later, because you’ll be spent.

Caffeine is a zero-sum game, but you can use that to your advantage. Consuming caffeine in time for it to affect you at the exact peak of your “focus wave” effectively makes the highs higher, and the lows lower. The rich get richer, while the poor get poorer. It’s like the sad state of our socioeconomic classes, except not awful, and for brain power!
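
A toy model of that timing argument, with made-up numbers of my own rather than anything from Narayanan's post: treat baseline focus as a daily sine wave, and caffeine as a fixed boost that is paid back, point for point, a few hours later.

import math

def focus(hour, caffeine_hour=None):
    # Crude baseline: a daily sine wave of focus, peaking around hour 6.
    base = math.sin(2 * math.pi * hour / 24.0)
    if caffeine_hour is None:
        return base
    # Hypothetical zero-sum effect: +0.5 for the ~3 hours after a dose,
    # paid back as -0.5 over the ~3 hours that follow.
    if caffeine_hour <= hour < caffeine_hour + 3:
        return base + 0.5
    if caffeine_hour + 3 <= hour < caffeine_hour + 6:
        return base - 0.5
    return base

# Dosing just before the peak raises the high point; the payback lands
# on hours that were headed downhill anyway.
for h in range(4, 12):
    print(h, round(focus(h), 2), round(focus(h, caffeine_hour=5), 2))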

You Require More Vespene Gas

Many people who have read Daniel Kahneman’s Thinking Fast And Slow will remember the chocolate cake experiment.


The experiment, led by Baba Shiv at Stanford University, is pretty simple. Half the students are asked to remember a 2-digit number, and half are asked to remember a 7-digit number. They walk down a hall, are told they're done, and are asked if they want chocolate cake or fruit salad as a refreshment. The students who were asked to remember the 7-digit number were nearly twice as likely to choose the chocolate cake. Why is this?

Simply put, we have a finite amount of mental energy. The students who spent their energy on remembering 7-digit numbers had no more energy left to spend on avoiding cake.

The prefrontal cortex is primarily responsible for the things that creative people crave, like focus, but also other functions like short-term memory, abstract problem solving, and willpower. The conclusion of the chocolate cake experiment implies that there is a finite amount of resources in the prefrontal cortex, and that one system’s use of those resources could directly affect the available resources of another function.

In the context of our sine-wave, I’m pretty sure I could make a good reference to calculus, because this concept has a lot to do with the area under the curve, but I’ll get it wrong, and I won’t hear the end of it in the comments.

This plays into our first theory quite well. If we use more mental energy in a quick burst (because we have a higher amplitude), we’ll need deeper rest in order to recharge this energy. During our rest periods, the troughs in our sine-wave, we have to refill the energy that we spent during our peaks.

I can’t safely postulate much about how to best do this, except for that studies show that naps are increasingly good for doing such things. Since you’re probably not going to have your big break during a lull in your cognitive ability, why not speed up the process of getting to another high point? Nap away! Refuel the exact part of your brain that will allow you to get in the zone again.

Some folks will recommend a caffeine nap. I don’t have a ton of intentional experience with caffeine naps, but the idea is that if you consume caffeine just prior to taking a nap, you’ll sleep until it kicks in (usually takes at least 30min for caffeine to metabolize), and then it’ll allow you to skip some of the groggier[1] steps on your way back to productivity. It probably works.

[1] Apparently, I can’t write the word ‘grog’ without thinking about Guybrush Threepwood.

Moderation

I would add, finally, that Dr. Narayanan found that the body adjusts to regular caffeine intake in as little as 2 to 3 weeks. That means that if you’re a long-time coffee-drinker, you really do need that cup of coffee to get going in the morning in order to get up to your pre-caffeine addiction baseline.

He notes that it takes 5 days to reach adenosine normality (good) if you’re not consuming caffeine. He retrospectively adds that he initially did not place enough value in these ‘quitting cycles’ (hinting that perhaps you should not repeat his mistakes).

Practical Application

Most of this should be obvious at this point, so I won’t drag on.

  • Plan your high points, work during them
  • Refuel during low points instead of stretching them out with forced work
  • Slingshot your amplitude with caffeine; 3 weeks on and 1 week off
  • Avoid non-intentional caffeine, only drink it on schedule
  • Don’t ignore the people you love

The Nap Month

I have nothing but my own personal experience to back this up, but I find that my motivation and passion levels work in a similar way on a yearly cycle as they do on a daily cycle. For parts of the year, I’m excited by work, and go out of my way to build things in my free time, and other parts of the year, I just want to come home and binge watch a season of a TV show on Netflix.

For this reason I have to wonder if some of the same energy principles apply. Can I increase the intensity and duration of the productive months? If so, at what cost?

I really struggle (in a not very serious kind of way) during the month or two when I feel like I can’t get anything done, but I think we can use a similar trick to help our productivity pick up again. Specifically, we need to A) simulate caffeine on a macro scale, and B) simulate naps on a macro scale.

Macrocaffeine

I find that an interesting, new passion project gets my creative energy flowing much better than jumping into old work. If I really want to get in the mood to program, I’ll hop off my projects with deadlines and build something that I know probably won’t ever even get finished, but that I’m just excited to build.

Caffeine blocks adenosine (a sleep chemical) receptors in our brain, causing us to avoid sleep longer. If we substitute the current list of things we have to do, with a temporarily more engaging list, it may give us the same slingshot effect that caffeine does for our energy on a micro level.

Macronaps

This one seems a little bit more straightforward to me.

Just stop working so much.

Take a nap from your work. I understand that this is not a viable solution for businesses that intend to make money, but I think there can be some good compromises here. Namely, most hip tech companies have very generous vacation allowances already. Use your vacation during a low point, and in perfect cliché form, “recharge.”

Additionally, companies with large enough teams can have two modes of employment that employees could ideally opt into. “Passion-mode” and “coast-mode.” Someone who is on an upswing should get put on a big project that’s going to take a lot of energy. Someone who is burnt out from the last big project should be given work that will allow them to show up a little late, and leave a little early.

There’s lots of work like this. The support team at your company would probably love it if developers frequently did 6-hour customer support stints. In no way do I imply that the support team doesn’t regularly break its back working very demanding hours/problems and doesn’t deserve their own down-time. There’s also plenty of documentation that I’m happy to churn out during my down month.

The point is to allow employees to be maximally lazy while still maintaining their minimum required value. The more quickly I’m able to get through the less motivated time, the more quickly I’ll be able to jump back into a difficult and challenging project and do it well.

A Final Moment of Clarity

I have very few projects and accomplishments that haven’t come to me in a “moment of clarity.” Naturally, I want to maximize the amount of these moments, and increase the odds that I’ll be working on something that I love when they occur. I have no idea if hacking your body is a good long-term strategy for making this happen, but I find that researching all of this sleep stuff is an excellent tool for procrastinating during my focus droughts.

I can’t guarantee that any of this will resonate with you, or work for you if you try it. But I do think that everyone goes through the motivational recessions, and we should be actively attempting to eliminate or reduce them. What is Quantitative Easing for the Soul?

I simply want my hard work to be spent most efficiently.

Special thanks to Michelle Bu for reading this ahead of time.

Hate Parking Tickets? Fixed Fights Them In Court For You | TechCrunch


Comments:"Hate Parking Tickets? Fixed Fights Them In Court For You | TechCrunch"

URL:http://techcrunch.com/2014/01/15/fight-parking-ticket-fixed/?utm_campaign=fb&ncid=fb


Up to 50 percent of parking tickets are dismissed when fought in court, but it takes knowledge and time to do it. New app Fixed will do it for you. Take a photo of your ticket, Fixed contests it, and if it’s dismissed, you pay Fixed 25 percent of the ticket price. If Fixed loses, you pay it nothing, so there’s nothing to lose. Fixed just launched in San Francisco, but wants to fight tickets nationwide.
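
Taking the article's numbers at face value (a 50 percent dismissal rate and a 25 percent success fee), the expected cost of contesting a ticket through Fixed works out in your favor; the snippet below is just that back-of-the-envelope arithmetic, not anything from Fixed itself:

ticket = 64.0       # the San Francisco street-cleaning fine cited below
p_dismissed = 0.50  # "up to 50 percent" of contested tickets are dismissed
fee_rate = 0.25     # Fixed's cut of the ticket price, charged only on a win

expected = p_dismissed * (fee_rate * ticket) + (1 - p_dismissed) * ticket
print("Pay outright:       ${0:.2f}".format(ticket))    # $64.00
print("Expected via Fixed: ${0:.2f}".format(expected))  # $40.00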

David Hegarty started Fixed after paying four parking tickets one morning only to come to his car and find two more. “The tickets were complete bullshit, and I knew they had been erroneously issued,” he tells me.

Sure, you could say he should have been more careful/not parked like an idiot. But in many cities, and especially San Francisco, parking rules are very complicated. Even if you manage to follow them all, the police and meter maids screw up sometimes too and wrongly give you a ticket.

So Hegarty did the research, figured out how to contest parking tickets, and submitted appeals on his two new tickets. He got both dismissed, so he started contesting all his tickets and frequently won. Soon he realized he wasn’t the only one sick of parking tickets, so he created Fixed with David Sanghera and DJ Burdick.

Here’s how it works:

  • Sign up for Fixed (currently in a small beta trial in San Francisco) and enter your credit card details.
  • Come back to your car, find a parking ticket you think was unfairly issued, take a photo of it with Fixed, and type in the violation number.
  • Fixed looks up the violation type, tells you the probability of getting that type dismissed, and prompts you to take photos as evidence. If the ticket is for street cleaning, you might be prompted to photograph a missing street-cleaning times sign; if it's a "red curb" violation, you might be asked to photograph the faded curb paint.
  • Fixed supplements the evidence with data like when the curb was supposed to be repainted, or whether the street was actually steep enough to warrant a "wheels not curbed" ticket.
  • Fixed prepares a "contest letter" to fight the ticket, has you digitally sign it, mails it on your behalf, and takes care of all correspondence with the court.
  • If Fixed gets your ticket dismissed, you pay it 25 percent of what the ticket would have cost. If it loses, you pay the ticket like normal but pay Fixed nothing.

The idea was so popular that Fixed filled up its early beta group in SF almost as soon as it launched its site, but you can sign up for the waiting list now.

For now, Fixed is bootstrapped, but it may need to raise money to expand its team to match demand for the service. It says San Francisco alone issues about $100 million worth of parking tickets a year, and estimates the US as a whole doles out over $3 billion in tickets. That’s a lucrative market that could help it raise venture capital to bring its ticket-fighting app across the country. And one day, it hopes to expand into contesting traffic tickets and moving violations.

In the meantime, it will have to compete with clumsier web-based services Parkingticket.com and ParkingTicketGuys. Scaling will be a serious challenge, and the company could run into trouble dealing with city governments. "They've seen parking fines as a cash cow that they milked from motorists," Hegarty says. "If we start helping the motorist fight back, we don't know how they'll react." Hopefully, local governments won't just nuke Fixed with some law like "only you and your lawyer may contest tickets on your behalf."

Now, there’s an argument to be made that fighting parking tickets just takes money from the community. Ticket revenue can go to pay for important local infrastructure, and a lot of tickets are designed to prevent people from unsafely parking, obstructing other cars, or endlessly squatting on spots.

But still, I agree with Hegarty that it sometimes feels like city governments are unfairly sucking blood from people who can’t afford garages or private car services like Uber. $64 tickets (in SF) for not re-parking your car at 6 a.m. every other day seems a bit outrageous. If cities want to hammer people with expensive tickets, they should have to make parking rules clear and enforce them fairly. If they don’t, Hegarty says Fixed is “here to restore a little bit of justice to your day.”

Here are some more startups that can make your life easier:

Wash.io – On-Demand, Door-To-Door Laundry And Dry Cleaning

Homejoy – On-Demand House Cleaning

Super successful companies - Sam Altman

$
0
0

Comments:" Super successful companies - Sam Altman "

URL:http://blog.samaltman.com/super-successful-companies


I spent some time recently thinking about what companies that grow up to be extremely successful do when they are very young. I came up with the following list. It’s from personal experience and I’m sure there are plenty of exceptions. While plenty of non-successful startups do some of these things too, I think there is value in trying to match the patterns. 

*They are obsessed with the quality of the product/experience. Almost a little too obsessed—they spend a lot of time on details that at first glance wouldn’t seem to be really important.  The founders of these companies react as if they feel physical pain when something isn’t quite right with the product or a user has a bad customer support experience. Although they believe in launching early and iterating, they generally won't release something crappy. (This is not an excuse to launch slowly.  You're probably taking too long to launch.)

As part of this, they don't put anyone between the founders and the users.  The founders of these companies do things like sales and customer support themselves.

 

*They are obsessed with talent. The founders take great pride in the quality of their team and do whatever it takes to get the best people to join them.  Everyone says they only want to hire the best people, but the best founders don't compromise on this point.  If they do make a hiring mistake, they fix it very quickly.

And they hire very slowly.  They don't get any thrill out of having employees for its own sake, and they do the dirty work themselves at the beginning.

As part of this, they really focus on getting the culture of the company right.

*They can explain the vision for the company in a few clear words. This is most striking in contrast to companies that require multiple complicated sentences to explain, which never seem to do really well.  Also, they can articulate why they're going to succeed even if others going after the problem have failed, and they have a clear insight about why their market is a great one.

More generally, they communicate very well.

*They generate revenue very early on in their lives.  Often as soon as they get their first user.

*They are tough and calm.  Founders of great companies are always tough and unflappable.  Every startup seems like it's going to die--sometimes multiple times in a single day--and founders of really successful companies just seem to pull out a gun and shoot the villain without losing their train of thought.

Formidableness can be developed; I've seen weak-seeming founders grow into it fast.

*They keep expenses low.  In addition to hiring slowly, they start off very frugal. Interestingly, the companies that don't do this (and usually fail) often justify it by saying "we're thinking really big".  After everything is working really well, they will sometimes ramp up expenses a lot but manage to still only spend where it matters.

*They make something a small number of users really love. Paul Buchheit was the first person I ever heard point this out, but it's really true.  Successful startups nearly always start with an initial core of super happy users that become very dependent on their product, and then expand from there.  The strategy of starting with something a huge number of people sort of like empirically does not work as well.

*They grow organically.  And they are generally skeptical of inorganic strategies like big partnership deals and to a lesser extent PR.  They certainly don't have huge press events to launch their startup.  Mediocre founders focus on big PR launches to answer their growth prayers.

 

*They are focused on growth. The founders always know their user and revenue numbers.  There’s never any hesitation when you ask them.  They have targets they are trying to hit for the next week, month, and year.

 

*They balance a focus on growth with strategic thinking about the future.   They have clear plans and strong opinions about what they're going to build that no one can talk them out of.  But they focus more on execution in the moment than building out multi-year strategic plans.

Another way this trait shows itself is "right-sized" first projects.  You can't go from zero to huge; you have to find something not too big and not too small to build first.  They seem to have an innate talent for figuring out right-sized projects.

*They do things that don't scale.  Paul Graham has written about this.  The best founders take it unusually far.

*They have a whatever-it-takes attitude. There are some things about running a startup that are not fun.  Mediocre founders try to hire people for the parts that they don't like.  Great founders just do whatever they think is in the best interest of the company, even if they're not "passionate" about that part of the business.

*They prioritize well.  In any given day there are 100 reasonable things that you could work on.  It's easy to get pulled into a fire on number 7, or even to spend time at a networking event or something like that that probably ranks in the mid-90s.  The founders that are really successful are relentless about making sure they get to their top two or three priorities each day (as part of this, they figure out what the right priorities are), and ignoring other items.

*The founders are nice.  I'm sure this doesn't always apply, but the most successful founders I know are nicer than average.  They're tough, they're very competitive, and they are ruthless, but they are fundamentally nice people.

*They don't get excited about pretending to run a startup.  They care about being successful, not going through the motions to look successful.  They get no thrill from having a 'real' company; they don't spend a lot of time interviewing lawyers and accountants or going to network events or anything like that.  They want to win and don't care much about how they look doing so.

One reason that this is super important is that they are willing to work on things that seem trivial, like a website that lets you stay on an air mattress in someone's house.  Most of the best ideas seem like bad ideas when they start, and if you're more into appearance than substance, you won't want people laughing at you.  You are far better off starting a company that people laugh at but keeps growing relentlessly than a company with a beautiful office that seems serious but is always two quarters away from starting its growth ramp.

*They get stuff done. Mediocre founders spend a lot of time talking about grand plans; the best founders may be working on things that seem small but get them done extraordinarily quickly.  Every time you talk to them, they've gotten a few new things done.  Even if they're working on big projects, they get small chunks done incrementally and have demonstrable progress--they never disappear for a year and jump from nothing to a huge project being completed.  And they're reliable--if they tell you they'll do something, it happens.

*They move fast. They make quick decisions on everything.  They respond to emails quickly.  This is one of the most striking differences between great and mediocre founders.  Great founders are execution machines.


Coming soon: Stripe CTF3

$
0
0

Comments:"Coming soon: Stripe CTF3"

URL:https://stripe.com/blog/ctf3-coming-soon


Greg Brockman, January 15, 2014

It's been over a year since our last Capture the Flag competition, and in the meanwhile we've fielded dozens of inquiries about when the next one's coming. The wait is almost over: CTF3 will be here a week from today.

Next Wednesday, starting at 11am San Francisco time (2pm Boston, 7pm London, 6am Melbourne), we'll be launching CTF3.

This time around, we're trying something a bit different. Rather than being about security, CTF3 will focus on distributed systems engineering. You'll learn how to build fault-tolerant, performant software while playing around with a bunch of cool cutting-edge technologies. Like with previous CTFs, our goal is to give you hands-on exposure to interesting engineering problems that you normally only get to read about. If you've been wanting to really grok things like Paxos/Raft, DDoS prevention, distributed search, or Bitcoin (and maybe even bit twiddling), now's your chance.

As always, we're building this CTF for programmers of all skill levels and backgrounds. Even if the above sounds daunting, you should still give it a shot — you might be surprised by what you can accomplish. For those looking for some competition, there will be leaderboards where you can compete against your friends or the CTF community at large.

Want to meet and hack alongside your fellow CTFers in person? While the main gathering will once again be based on IRC (check back for details next week), we'll also be hosting kickoff events in San Francisco, Boston, and London.

Over the next week, the CTF team (Siddarth Chandrasekaran, Andy Brody, Christian Anderson, and I) will be putting the finishing touches on CTF3. See you on the other side!

Making Remote Work Work: An Adventure in Time and Space | MongoHQ Blog

$
0
0

Comments:" Making Remote Work Work: An Adventure in Time and Space | MongoHQ Blog"

URL:http://blog.mongohq.com/making-remote-work-work-an-adventure-in-time-and-space/


MongoHQ is a distributed company, with 21 employees scattered across California, Alabama, Utah, Windsor Ontario, Quebec, and London. This isn’t rare, by any means, and is becoming more common as technology corporations evolve … particularly when they realize that the intersection of the entire population of good hackers and the area that’s reasonably accessible to their office looks something like this:

Most posts like this talk a lot about the tools, which is understandable because they’re so much fun to futz around with. At their best, though, tools that help with distributed teams are an extension of a remote friendly workplace, and getting to that is *hard*. We’re probably 80% of the way there, and with enough diligence will be 82% of the way there by the end of this year.

Working well remotely takes practice

We’ve started asking about remote work experience and skills very early in a promising interview cycle. We expect a lot from people, and the “remote requirement” just adds another dimension of difficulty to their jobs. The normal response, particularly from people who’ve been physically present at offices all their lives, is a bit nonchalant. They assume that “remote work” means “I can spend a lot of time skiing and work from the lodge”.

So that’s true, kind of, as evidenced by Steve up there. He does go skiing a lot and has been known to work from the lodge. He also, when not skiing, starts his days roughly 6 hours later than a normal Londoner to make sure he’s around to talk to people on the superior American time zones (except Pacific time, Pacific time is awful).

What they don’t always think about, though, is the inherent firewall a commute creates between “work” and “personal life”. Working out of a home office opens up an entire world of surprisingly difficult-to-handle distractions, particularly for those of us with families. It’s easy to avoid a guitar wielding toddler when the office is 5 miles away and he has no driver’s license. It’s harder when the wall between the living room and the office makes a delightful banging noise when struck with a guitar.

Everyone works remotely, even from an office

If you did the math, you noticed we have about four times as many people as we do locations. MongoHQ has offices in both San Mateo, CA (about 15 miles south of San Francisco) and Birmingham, AL. Having centralized offices can wreck a budding remote friendly culture. Working in a way that’s inclusive of people who aren’t physically (or even temporally) present is not entirely natural, and excluding remote employees from important interactions is a quick path to agony.

We’ve experienced some variant of this problem every time a new hire has joined in one of our offices, and are now very explicit about the “work as if you’re not here” standard. We expect everyone to work with the remote collaboration tools, be available via the same channels, and produce written artifacts of interactions that are important to share. This is a tough requirement to stick to. We’ve encountered people who function very well in an office environment, but can’t seem to function remotely.

Hypersensitivity to “the feels”

Like most deceptively simple ideas, remote work is easy and amazing when everything is going well. If your company is killing it, all the hockey sticks point up, and everyone gets along amazingly well, the remote work culture is probably good. There is danger, however, when things go south. We’ve experienced the emotional troughs as a company, and had individuals fight their own personal battles.

The reality of a remote workplace is that the connections are largely artificial constructs. People can be very, very isolated. A person’s default behavior when they go into a funk is to avoid seeking out interactions, which is effectively the same as actively withdrawing in a remote work environment. It takes a tremendous effort to get on video chats, use our text based communication tools, or even call someone during a dark time.

We’re learning how to interpret the emotional state of people we only rarely see, and started working to deliberately draw people out of isolation with company wide video events. We’ve even tried recording bits of these events (a recent one had a series of lightning talks) and sharing them around afterward in an effort to get face time, even if just one way. This is a hard problem, though, and there’s really very little we can do about it other than notice that it’s happening.

The practical (and the tools!)

In general, a distributed team is a series of communication problems. We tackle this in two general ways. First, we favor async communication when possible:

  • We have a central “work tool” (it’s home grown, we call it Compose) we use to record what we’re producing each day. Anyone who wants to know what someone is up to, or what’s been accomplished on project X, can find out here … assuming everyone’s doing their job.
  • Most day to day communication happens in Hipchat, with some work happening in project specific rooms. It’s reasonably easy to catch up on a day or two of activity most of the time.
  • When we do have meetings (this is a whole post by itself), we like to have pre-reads available on a Wiki. These get updated with useful information as the meeting progresses, with the goal of making the actual meetings optional. We use Hackpad for this, mostly, but have also used Confluence and Github’s wikis.
  • Email is a bit of a beast, but we do have “open” mailing lists and try to include those on most email communications. The caveat is “try”, though, we haven’t been completely successful in building good company email habits.

We also encourage face time:

  • Sqwiggle is really good for this, and there are anywhere from 3 to 10 people on our Sqwiggle setup at any given time. Sqwiggle takes a picture of whoever’s on every so often, and gets bonus points for helping us record peoples’ embarrassing poses for posterity:
  • Google Hangouts come in handy for scheduled events. We used to have a perpetual Hangout on wall mounted televisions in both offices, but that created a sort of audience / presenter vibe that was a little odd.
  • We have a Double Robotics robot in each office. Remote employees can “drop in”, drive around, and even pull off low level pranks.
  • Does real estate count as a tool? We have an apartment near our San Mateo office that anyone can use when they’re in town. People tend to travel to San Mateo about once every 6 weeks on average.

It’s a process

If you are trying to build a remote friendly culture, the best advice we have for you is “just keep iterating”. It’s not an easy thing to do, but it’s an amazing capability for a company to have. Our continued attempts to do it well have resulted in some phenomenal people joining us that we never would have met otherwise. We’ve mostly been able to get this far by powering through the tough stuff, committing to unnatural but valuable habits, and learning from other companies who are trying to do similar things.

Jan 15, 2014, by Kurt Mackey


The Next Phase of Node.js

$
0
0

Comments:"The Next Phase of Node.js"

URL:http://blog.nodejs.org/2014/01/15/the-next-phase-of-node-js/index.html


Wed, 15 Jan 2014 17:00:00 UTC - Isaac Z. Schlueter

Node's growth has continued and accelerated immensely over the last few years. More people are developing and sharing more code with Node and npm than I would have ever imagined. Countless companies are using Node, and npm along with it.

Over the last year, TJ Fontaine has become absolutely essential to the Node.js project. He's been building releases, managing the test bots, fixing nasty bugs and making decisions for the project with constant focus on the needs of our users. He was responsible for an update to MDB to support running ::findjsobjects on Linux core dumps, and is working on a shim layer that will provide a stable C interface for Node binary addons. In partnership with Joyent and The Node Firm, he's helped to create a path forward for scalable issue triaging. He's become the primary point of contact keeping us all driving the project forward together.

Anyone who's been close to the core project knows that he's been effectively leading the project for a while now, so we're making it official. Effective immediately, TJ Fontaine is the Node.js project lead. I will remain a Node core committer, and expect to continue to contribute to the project in that role. My primary focus, however, will be npm.

At this point, npm needs work, and I am eager to deliver what the Node community needs from its package manager. I am starting a company, npm, Inc., to deliver new products and services related to npm. I'll be sharing many more details soon about exactly how this is going to work, and what we'll be offering. For now, suffice it to say that everything currently free will remain free, and everything currently flaky will get less flaky. Pursuing new revenue is how we can keep providing the npm registry service in a long-term sustainable way, and it has to be done very carefully so that we don't damage what we've all built together.

npm is what I'm most passionate about, and I am now in a position to give it my full attention. I've done more than I could have hoped to accomplish in running Node core, and it's well past time to hand the control of the project off to its next gatekeeper.

TJ is exactly the leader who can help us take Node.js to 1.0 and beyond. He brings professionalism, rigor, and a continued focus on inclusive community values and culture. In the coming days, TJ will spell out his plans in greater detail. I look forward to the places that Node will go with his guidance.

Please join me in welcoming him to this new role :)

Please post feedback and comments on the Node.JS user mailing list.
Please post bugs and feature requests on the Node.JS github repository.


How to Hire Outsourced Developers in Four Steps

$
0
0

Comments:"How to Hire Outsourced Developers in Four Steps"

URL:http://www.donnfelker.com/how-to-hire-outsourced-developers/


This is Part 2 of a 2 part series on hiring developers. Read Part 1, How to Hire Programmers, if you’re hiring for full time positions. Read this article if you’re outsourcing.

I recently wrote about how to hire programmers which focuses on hiring full time employees. This post is tailored to help you learn how to hire outsourced developers. You’ll see that the two methods share a lot of similarities except hiring a contractor is a bit less involved as they are not going to be joining your team full time.

A little background … I’ve hired almost 10 remote, outsourced developers, all overseas, using the method I explain below. While it’s not foolproof, it’s the best method I’ve encountered, and I feel it has the best ROI with the lowest time commitment. Over the years I’ve used contract employees for my business and I’ve been able to create this process as a by-product of my experiences. Sometimes it went well, other times it didn’t, and sometimes it was an absolute disaster. After refining my hiring practices many times over I was able to create the process below that always returned top notch talent for outsourced developers. I hope the process below does the same for you.

The Process

There are 4 simple steps and if you cut out the job posting step then you actually only have three steps to go through. The goal of this process is to save you time and money while maximizing the chance of getting the best candidate you can find. Please note, during this process you will probably need to spend anywhere from $50-$250 depending on the contractors that you put through this review process – more on that later. You will be posting a job, reviewing the candidates and selecting the top X (usually at least the top 10). At that point you’ll hire them to do a simple programming task. Once they complete it (if they complete it) you’ll be able to review the code and determine who has the best implementation and then you usually will hire that person. The steps are listed below.

4 Steps to Hiring a Good Outsourced Developer

1. Post a Job Description on a Job Site
2. Prelim Review: Hire the Top Candidates (minimum 10) for 1 Hour
3. The Programming Task
4. Keep the Top Candidate(s)

Step 1: Post a Job Description on a Job Site

Goal: To create an interesting job posting that will attract quality applicants

Note: I prefer to use oDesk.com as the site where I hire my contract programmers, so this entire article is based around that site.

Log into oDesk.com and post a job. Make sure the title accurately explains what you want the person to do. Do not post things like “JavaScript Ninja” or “Rails Badass”. Titles such as “Senior Rails Developer” or “Wordpress Developer” seem to fare a bit better than abstract titles. The description of the job should describe what you need in detail. Do not leave this as a simple one liner such as “Write good code and deliver on time.” Inform the candidate what you need from them. If it’s updating an existing code base, inform them of what they’re getting into. If you’re a business person who does not understand code, explain what you want the product to do or what your product already does and why you need help. Don’t spend a ton of time on this. Maybe 10-15 minutes, max.

When hiring in oDesk you can provide various options about the job. For each of these sections I tend to use the following settings:

  • Skills Required
    • This is self explanatory. If you have a Rails site, put “Ruby on Rails” and probably “HTML” and “JavaScript”, etc.
    • If it’s another technology, put those technologies here.
  • How would you like to pay
    • Hourly when doing open ended development such as new feature work.
    • Fixed when the exact scope is defined. Such as: Install WordPress and configure theme to look like X.
  • Estimated workload
    • This varies person to person, but I usually have them on for Part Time
  • Desired Experience Level
    • Always choose Intermediate or Expert. Choosing beginner will get you some shoddy work that will eventually cost you 2x as much to replace later.
  • Marketplace Visibility
    • Anyone can find this job (UNLESS … I’m hiring a specific person)
  • Preferred Qualifications
    • Freelancer Type: No preference
    • Minimum Feedback Score: No Preference
      • The reason for this is because there are TONS of people who are experts who have never completed a job on oDesk before. Also, sometimes people get a bad score from the only job they did … not because it was bad work but because the person managing the product was terrible at their job. Don’t count them out.
    • Hours billed on oDesk: No preference
      • Again, you could have someone who is an expert who just joined. Don’t count them out.
    • Location
      • This one can be tricky. I personally do not care where they are. But, this is up to you. I usually put No Preference. However, if you are testing the various locations of the world to find what works best for you then choose that location.
    • English Level: 5 out of 5
      • This is a requirement. A lot of contractors on oDesk will say they’re a 4/5, but when you converse with them via email, IM or Skype/Google Hangout you’ll realize they’re actually more like a 2 or a 3. Being able to communicate effectively is key to success. Without it, you’re fighting an uphill battle.
  • Screening Questions
    • I personally don’t add these as programming jobs are quite technical and either you know how to program well or you don’t.

Now, post the job.

Wait about 48 hours and then move onto step 2. During this time you’ll start having people apply for the position.

Step 2: Prelim Review – Hire the Top Candidates (at least 10)

Goal: To hire the top candidates who seem qualified in order to take them to the next step.

You’re probably going to get upwards of 25-60 applicants for your job posting from candidates all over the world. The first thing you will need to do is trim the list down. Here’s how you do it.

  • Cut anyone whose English (or whatever your chosen 5/5 language is) is terrible and very difficult to read. This is an immediate red flag. If you can’t communicate now, how do you expect to communicate later?
  • Cut anyone who does not have the skills to do the work. e.g. – If you’re hiring for a Rails developer and the candidate does not list Rails as one of their skills, cut them from the list. You want the highest quality you can afford.
  • Cut anyone that doesn’t seem to fit the job description. e.g. – If you want someone to develop a Game for you, a line of business web developer probably isn’t the best fit for you.
  • Cut anyone who is way out of your budget. Sure, that guy who’s charging $155 an hour looks great but maybe your budget is $25/hr. No point in entertaining the thought of getting a $155/hr candidate on the roster. You’ll waste their time and yours.

After 48-72 hours you’ll want to trim your list down to a minimum of 10 candidates.

You’re at the point where you are now going to spend real money. If the average wage of your applicants is between $5 and $25 an hour, you’ll spend roughly $50-250 to find the right person (at most): 10 candidates, each paid for one hour at $5-$25. You’ll want to hire all 10 candidates. Yes, I’m serious. However, when you hire them you will inform them that you’re going to hire them to complete a simple task that should take no longer than 1 hour. You will send them the link to the Programming Task (Step 3) so they can complete the task. This one hour is PAID to the contractor when they do the work. This shows good faith to the contractor so they do not feel like they’re having to do work for free. If you ask outsourcers/freelancers/contractors to do work for free you’ll lose ALL of the good/best developers immediately. When you pay someone a full hour of their rate they take you much more seriously.

Something interesting happens at this point in the process. An average of 50% of the candidates never continue past this step. The reason is not exactly known but I suspect it is because they simply do not know how to perform the task that is given to them. That is exactly what Step 3 is for, weeding out the candidates who can do the work from those who cannot.

Step 3: The Programming Task

End Goals: Identify if the candidate has the necessary ability to solve programming problems with code. Gain insight into the quality of code that is produced by this candidate.

You’ll notice that this step is very similar to Step 3 in the How to Hire A Programmer article with a few alterations sprinkled about. While the articles share similarities there are key differences in the process so please continue reading if you’ve already read the previous article on How to Hire a Programmer.

During the Programming Task* the candidate will be tasked with solving a real problem using the language that I specify. The coding task should not take an experienced programmer more than 1 hour to complete at most. I only pay for one hour of the contractor’s time. They can take as long as they want to get it done, but I’ll only pay for one hour. The goal is simple: to determine if the candidate can produce the code that is needed.

The programming challenge is the great equalizer. Not because the problem is necessarily difficult (it’s not; it’s easy), but because it allows you to gain insight into the quality of code that the person writes. It also answers a vast array of other questions as well:

  • Does the candidate communicate effectively?
  • Does the candidate know how to follow directions?
  • Does the candidate know how to use Git and GitHub?
  • Does the candidate follow coding best practices?
  • Does the candidate know how to code?
  • Does the candidate follow through and succeed?
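
To make the shape of such a task concrete, here is a minimal hypothetical example in Python (a sketch for illustration only; this is not one of my actual challenges). The candidate would receive a short spec plus a stub like this, and would submit a pull request that makes the checks at the bottom pass:

# task.py: hypothetical one-hour challenge (illustration only).
# Spec: given web-log lines like "2014-01-15 GET /home 200",
# return a dict mapping HTTP status code -> request count.
# Malformed lines should be skipped rather than crash the program.

def parse_log(lines):
    """Count requests per HTTP status code."""
    counts = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed lines, per the spec
        status = parts[3]
        counts[status] = counts.get(status, 0) + 1
    return counts

if __name__ == "__main__":
    sample = [
        "2014-01-15 GET /home 200",
        "2014-01-15 GET /missing 404",
        "2014-01-15 POST /login 200",
        "garbage line",
    ]
    assert parse_log(sample) == {"200": 2, "404": 1}
    print("All checks pass.")

A task of this size exercises everything on the list above: reading a spec, handling an edge case, using Git and pull requests, and shipping within the hour.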

I usually inform the candidate that the solution to the problem is due within 24 hours. The programming challenge is posted online on GitHub/BitBucket in a Git repo that is shared across all candidates. The same repo is used for everyone. Yes, that means that others can see the same answers as other applicants (via pull request). That’s fine because if someone wants to cheat it is easy to detect. If 3 people’s repos look exactly the same then the person who checked in the code first is most likely the person who wrote it. No two people write the exact same code every single time.
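
Checking for copied submissions is easy to automate. Below is a minimal sketch (my own illustration, using only Python’s standard library; the file names are hypothetical placeholders) that flags pairs of submissions whose source files are nearly identical, at which point the earliest commit usually points to the real author:

# similarity_check.py: sketch for flagging near-identical submissions.
# Assumes each candidate's solution has been saved locally as a text
# file; the file names under __main__ are hypothetical placeholders.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(path_a, path_b):
    """Return a 0..1 ratio of how similar two source files are."""
    with open(path_a) as fa, open(path_b) as fb:
        return SequenceMatcher(None, fa.read(), fb.read()).ratio()

def flag_duplicates(paths, threshold=0.9):
    """Yield pairs of files that look suspiciously alike."""
    for a, b in combinations(paths, 2):
        if similarity(a, b) >= threshold:
            yield a, b

if __name__ == "__main__":
    submissions = ["alice.py", "bob.py", "carol.py"]  # hypothetical
    for a, b in flag_duplicates(submissions):
        print("Suspiciously similar: %s and %s" % (a, b))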

The repo contains all the necessary information about the task. The candidate is to follow the instructions and email me when they’re done. Here are a few examples of the programming challenges I’ve created:

Feel free to fork these and save them for yourself. If you visited any of the sites above you’ll see that each of these examples had the candidate perform a series of steps. If the candidate is not sure of what to do, they could ask me for clarification or simply Google for the answer. When the candidate is done they’re to submit a pull request to me so I can review it.

This challenge is genius because it answers so many questions about the candidate. Namely, can they code and get something done on time. It also gives me the chance to review the code to see if it is a complete hack or if it’s something that is quite good. What I’ve found is that on average over 60% of candidates will never finish the problem – they simply don’t know how. Of the 40% that finish, only about 20% actually do it correctly, and usually at that point you’re left with either 1 or 2 candidates that are looking good. If you have more to choose from then that’s a good problem to have!

It is also important to note that oDesk requires that the candidates install monitoring software that takes periodic screen shots of the work they’re doing so that you can review the work. This allows me to view their progress and semi-evaluate their process of solving the problem.

A common concern when letting someone do this task is: What happens if they cheat? What if they call their friend or have someone else do it for them? That is a possibility, but I see this as a real world problem. Sometimes programmers don’t know how to solve a problem so they have to call for help. They call friends, IM them, look stuff up on the internet, or just find an open source project that does what they want and take the code and modify it. That’s the world programmers live in. If I do that at an office or at home, who cares. If the candidate is truly a hack, I’ll know fairly quickly. If they do get hired, I’m going to sniff out a problem very quickly, and at that point you’ll need to make the decision to keep the person or not. If you’re a business person who can’t sniff this out, you may notice other things slipping such as timeframes, budgets, etc. Any red flags are exactly that, red flags – so evaluate each one with caution.

If the candidate fails to complete the programming challenge, that’s an immediate red flag. The task should be easy enough for an experienced programmer to finish in an hour. If the candidate cannot finish within the time allotted, either the programming task is way too hard or the candidate is not suited for the position. If you’re using my Git repos from the links above (or something very similar) then this should not be the case. At this point I advise you to stop the interview process for those who cannot complete the task and move on to the next candidate.

For those that do solve the problem correctly I like to review the code and if everything is satisfactory I’ll put them on the list of potential hires.

If you’re someone who cannot code, but you want to follow this process, I highly suggest you find a friend, friend of the family, or coworker who is a programmer and ask them to help you. Offer to pay for 2 hours of their time so you can get their opinion of the candidates. Explain to them the process you’re going through so they know what their role is – a code reviewer. If you use this approach you will want to give the code reviewer instructions to give you a scale of “No/Maybe/Yes” for each code review. No means No – do not hire. Maybe means – it’s ok, not perfect, but workable. Yes means Yes – hire this person, this is good work. I include Maybe because sometimes code looks good, but it’s not perfect. That’s OK. Sometimes you need to get an MVP of a product out the door ASAP and it does not have to be perfect, it just has to work. The “Maybe” candidates are suitable for this because you’re testing a market/etc. and you’re trying to get things done as fast as possible. However, you want to aim for Yes candidates at all times if possible.

If you don’t find anyone who can solve the problem, repost the job posting to get some new leads. I’ve had to post a job a couple of times to get the correct person for the job. Yes, this costs more money, but it is much cheaper to pay this up front than months down the road.

Step 4: Keep the Top Candidate(s)

Goal: To hire the proper person/people for the position

In the previous step you identified who you can hire through code review and through initial filtering based upon communication through the application process. At this point you should be ready to hire your contractor. You may have had 2-3 people make it this far.

If you’re hiring for one position, hire the top candidate and let the other two know that you decided to go with someone else but you may be in contact with them shortly because you were impressed and you may need to grow your team. This leaves the door open for future conversation with those candidates. If you need to hire two people, hire the top two. You get the point here.

If you don’t end up with enough contractors, then repost the position.

Wrap Up

Hiring a contractor can be a challenging task, but if you follow the steps outlined above you’ll find that you can lock in a good outsourced developer and eliminate the cruft in a few easy steps. I hope this helps you land a great outsourced developer.

* I’d love to take full credit for creating the programming challenge on my own, but I cannot. This method of programming challenge was inspired by a company by the name of Integrum (who no longer does programming).
