Channel: Hacker News 50

Burglars Who Took On F.B.I. Abandon Shadows - NYTimes.com

Comments:"Burglars Who Took On F.B.I. Abandon Shadows - NYTimes.com"

URL:http://www.nytimes.com/2014/01/07/us/burglars-who-took-on-fbi-abandon-shadows.html?ref=markmazzetti&_r=1&


By Retro Report

Stealing J. Edgar Hoover’s Secrets: One night in 1971, files were stolen from an F.B.I. office near Philadelphia. They proved that the bureau was spying on thousands of Americans. The case was unsolved, until now.

PHILADELPHIA — The perfect crime is far easier to pull off when nobody is watching.

So on a night nearly 43 years ago, while Muhammad Ali and Joe Frazier bludgeoned each other over 15 rounds in a televised title bout viewed by millions around the world, burglars took a lock pick and a crowbar and broke into a Federal Bureau of Investigation office in a suburb of Philadelphia, making off with nearly every document inside.

They were never caught, and the stolen documents that they mailed anonymously to newspaper reporters were the first trickle of what would become a flood of revelations about extensive spying and dirty-tricks operations by the F.B.I. against dissident groups.

The burglary in Media, Pa., on March 8, 1971, is a historical echo today, as disclosures by the former National Security Agency contractor Edward J. Snowden have cast another unflattering light on government spying and opened a national debate about the proper limits of government surveillance. The burglars had, until now, maintained a vow of silence about their roles in the operation. They were content in knowing that their actions had dealt the first significant blow to an institution that had amassed enormous power and prestige during J. Edgar Hoover’s lengthy tenure as director.

“When you talked to people outside the movement about what the F.B.I. was doing, nobody wanted to believe it,” said one of the burglars, Keith Forsyth, who is finally going public about his involvement. “There was only one way to convince people that it was true, and that was to get it in their handwriting.”

Mr. Forsyth, now 63, and other members of the group can no longer be prosecuted for what happened that night, and they agreed to be interviewed before the release this week of a book written by one of the first journalists to receive the stolen documents. The author, Betty Medsger, a former reporter for The Washington Post, spent years sifting through the F.B.I.’s voluminous case file on the episode and persuaded five of the eight men and women who participated in the break-in to end their silence.

Unlike Mr. Snowden, who downloaded hundreds of thousands of digital N.S.A. files onto computer hard drives, the Media burglars did their work the 20th-century way: they cased the F.B.I. office for months, wore gloves as they packed the papers into suitcases, and loaded the suitcases into getaway cars. When the operation was over, they dispersed. Some remained committed to antiwar causes, while others, like John and Bonnie Raines, decided that the risky burglary would be their final act of protest against the Vietnam War and other government actions before they moved on with their lives.

“We didn’t need attention, because we had done what needed to be done,” said Mr. Raines, 80, who had, with his wife, arranged for family members to raise the couple’s three children if they were sent to prison. “The ’60s were over. We didn’t have to hold on to what we did back then.”

A Meticulous Plan

The burglary was the idea of William C. Davidon, a professor of physics at Haverford College and a fixture of antiwar protests in Philadelphia, a city that by the early 1970s had become a white-hot center of the peace movement. Mr. Davidon was frustrated that years of organized demonstrations seemed to have had little impact.

In the summer of 1970, months after President Richard M. Nixon announced the United States’ invasion of Cambodia, Mr. Davidon began assembling a team from a group of activists whose commitment and discretion he had come to trust.

The group — originally nine, before one member dropped out — concluded that it would be too risky to try to break into the F.B.I. office in downtown Philadelphia, where security was tight. They soon settled on the bureau’s satellite office in Media, in an apartment building across the street from the county courthouse.

That decision carried its own risks: Nobody could be certain whether the satellite office would have any documents about the F.B.I.’s surveillance of war protesters, or whether a security alarm would trip as soon as the burglars opened the door.

The group spent months casing the building, driving past it at all times of the night and memorizing the routines of its residents.

“We knew when people came home from work, when their lights went out, when they went to bed, when they woke up in the morning,” said Mr. Raines, who was a professor of religion at Temple University at the time. “We were quite certain that we understood the nightly activities in and around that building.”

But it wasn’t until Ms. Raines got inside the office that the group grew confident that it did not have a security system. Weeks before the burglary, she visited the office posing as a Swarthmore College student researching job opportunities for women at the F.B.I.

The burglary itself went off largely without a hitch, except for when Mr. Forsyth, the designated lock-picker, had to break into a different entrance than planned when he discovered that the F.B.I. had installed a lock on the main door that he could not pick. He used a crowbar to break the second lock, a deadbolt above the doorknob.

The Retro Report video with this article is the 24th in a documentary series presented by The New York Times. The video project was started with a grant from Christopher Buck. Retro Report has a staff of 13 journalists and 10 contributors led by Kyra Darnton, a former “60 Minutes” producer. It is a nonprofit video news organization that aims to provide a thoughtful counterweight to today’s 24/7 news cycle.

Links 2013

Comments:"Links 2013"

URL:http://worrydream.com/Links2013/


Carver Mead describes a physical theory in which atoms exchange energy by resonating with each other. Before the energy transaction can happen, the two atoms must be phase-matched, oscillating in almost perfect synchrony with each other.

I sometimes think about resonant transactions as a metaphor for getting something out of a piece of writing. Before the material can resonate, before energy can be exchanged between the author and reader, the reader must already have available a mode of vibration at the author's frequency. (This doesn't mean that the reader is already thinking the author's thought; it means the reader is capable of thinking it.)

People often describe written communication in terms of transmission (the author explained the concept well, or poorly) and/or absorption (the reader does or doesn't have the background or skill to understand the concept). But I think of it more like a transaction -- the author and the reader must be matched with each other. The author and reader must share a close-enough worldview, viewpoint, vocabulary, set of mental models, sense of aesthetics, and set of goals. For any particular concept in the material, if not enough of these are sufficiently matched, no resonance will occur and no energy will be exchanged.

Perhaps, as a reader, one way to get more out of more material is to collect and cultivate a diverse set of resonators, to increase the probability of a phase-match.

Why Some Civil War Soldiers Glowed in the Dark | Mental Floss

Comments:"Why Some Civil War Soldiers Glowed in the Dark | Mental Floss"

URL:http://mentalfloss.com/article/30380/why-some-civil-war-soldiers-glowed-dark


By the spring of 1862, a year into the American Civil War, Major General Ulysses S. Grant had pushed deep into Confederate territory along the Tennessee River. In early April, he was camped at Pittsburg Landing, near Shiloh, Tennessee, waiting for Maj. Gen. Don Carlos Buell’s army to meet up with him.

On the morning of April 6, Confederate troops based out of nearby Corinth, Mississippi, launched a surprise offensive against Grant’s troops, hoping to defeat them before the second army arrived. Grant’s men, augmented by the first arrivals from the Ohio, managed to hold some ground, though, and establish a battle line anchored with artillery. Fighting continued until after dark, and by the next morning, the full force of the Ohio had arrived and the Union outnumbered the Confederates by more than 10,000.

The Union troops began forcing the Confederates back, and while a counterattack stopped their advance it did not break their line. Eventually, the Southern commanders realized they could not win and fell back to Corinth until another offensive in August (for a more detailed explanation of the battle, see this animated history).

All told, the fighting at the Battle of Shiloh left more than 16,000 soldiers wounded and more than 3,000 dead, and neither Federal nor Confederate medics were prepared for the carnage.

The bullet and bayonet wounds were bad enough on their own, but soldiers of the era were also prone to infections. Wounds contaminated by shrapnel or dirt became warm, moist refuges for bacteria, which could feast on a buffet of damaged tissue. After months marching and eating field rations on the battlefront, many soldiers’ immune systems were weakened and couldn’t fight off infection on their own. Even the army doctors couldn’t do much; microorganisms weren’t well understood and the germ theory of disease and antibiotics were still a few years away. Many soldiers died from infections that modern medicine would be able to nip in the bud.

A Bright Spot

Some of the Shiloh soldiers sat in the mud for two rainy days and nights waiting for the medics to get around to them. As dusk fell the first night, some of them noticed something very strange: their wounds were glowing, casting a faint light into the darkness of the battlefield. Even stranger, when the troops were eventually moved to field hospitals, those whose wounds glowed had a better survival rate and had their wounds heal more quickly and cleanly than their unilluminated brothers-in-arms. The seemingly protective effect of the mysterious light earned it the nickname “Angel’s Glow.”

In 2001, almost one hundred and forty years after the battle, seventeen-year-old Bill Martin was visiting the Shiloh battlefield with his family. When he heard about the glowing wounds, he asked his mom - a microbiologist at the USDA Agricultural Research Service who had studied luminescent bacteria that lived in soil - about it.

“So you know, he comes home and, 'Mom, you're working with a glowing bacteria. Could that have caused the glowing wounds?’” Martin told Science Netlinks. “And so, being a scientist, of course I said, ‘Well, you can do an experiment to find out.’”

And that’s just what Bill did.

He and his friend, Jon Curtis, did some research on both the bacteria and the conditions during the Battle of Shiloh. They learned that Photorhabdus luminescens, the bacteria that Bill’s mom studied and the one he thought might have something to do with the glowing wounds, live in the guts of parasitic worms called nematodes, and the two share a strange lifecycle. Nematodes hunt down insect larvae in the soil or on plant surfaces, burrow into their bodies, and take up residence in their blood vessels. There, they puke up the P. luminescens bacteria living inside them. Upon their release, the bacteria, which are bioluminescent and glow a soft blue, begin producing a number of chemicals that kill the insect host and suppress and kill all the other microorganisms already inside it. This leaves P. luminescens and their nematode partner to feed, grow and multiply without interruptions.

As the worms and the bacteria eat and eat and the insect corpse is more or less hollowed out, the nematode eats the bacteria. This isn’t a double cross, but part of the move to greener pastures. The bacteria re-colonize the nematode’s guts so they can hitch a ride as it bursts forth from the corpse in search of a new host.

The next meal shouldn’t be hard to find either, since P. luminescens already sent them an invitation to the party. Just before they got back in their nematode taxi, P. luminescens were at critical mass in the insect corpse, and scientists think that that many glowing bacteria attract other insects to the body and make the nematode’s transition to a new host much easier.

A Good Light

Looking at historical records of the battle, Bill and Jon figured out that the weather and soil conditions were right for both P. luminescens and their nematode partners. Their lab experiments with the bacteria, however, showed that they couldn’t live at human body temperature, making the soldiers’ wounds an inhospitable environment. Then they realized what some country music fans already knew: Tennessee in the spring is green and cool. Nighttime temperatures in early April would have been low enough for the soldiers who were out there in the rain for two days to get hypothermia, lowering their body temperature and giving P. luminescens a good home.

Based on the evidence for P. luminescens’s presence at Shiloh and the reports of the strange glow, the boys concluded that the bacteria, along with the nematodes, got into the soldiers’ wounds from the soil. This not only turned their wounds into night lights, but may have saved their lives. The chemical cocktail that P. luminescens uses to clear out its competition probably helped kill off other pathogens that might have infected the soldiers’ wounds. Since neither P. luminescens nor its associated nematode species are very infectious to humans, they would have soon been cleaned out by the immune system themselves (which is not to say you should be self-medicating with bacteria; P. luminescens infections can occur, and can result in some nasty ulcers). The soldiers shouldn’t have been thanking the angels so much as the microorganisms.

As for Bill and Jon, their study earned them first place in team competition at the 2001 Intel International Science and Engineering Fair.

Official Dolphin Emulator Website - An Old Problem Meets Its Timely Demise

Comments:"Official Dolphin Emulator Website - An Old Problem Meets Its Timely Demise"

URL:https://dolphin-emu.org/blog/2014/01/06/old-problem-meets-its-timely-demise/


The Legend of Zelda: The Wind Waker is one of the most popular Gamecube games, if not Nintendo games, in existence. Its mixture of an open world, sharp dungeons, and an inventive art style turned heads more than ten years ago when it was released. Dolphin has had its share of problems with Wind Waker, but none could be so frustrating as its mishandling of the heat distortion.

The Issues

This issue actually crops up in two ways throughout Wind Waker. The more common way is that all of the flame effects result in doubling rather than distortion. The "copy" ends up down and to the right quite a bit, making it awkward at best.

More troublesome is the way extreme heat in Dragon Roost Island is rendered. It results in severe screen tearing and detracts from the strong atmosphere presented by the game.

Desperate Measures

For years, it appeared this issue would never be fixed. No one knew what was wrong, and no one was focusing on fixing it for the longest time. Users, unlike developers, often care about the end product more than the why. That allows them to develop sometimes genius patches, codes, and work-arounds that developers would never condone or sometimes even consider! Using the free camera, action replay codes (shown below) and even texture replacement, users of Dolphin found their way to at least make the game tolerable.

The Payoff

As of 4.0-593, this issue is no more! The best way to explain how it was fixed would be to take the words from the one who fixed it. delroth left a very entertaining and well-thought-out commit message that really dives into the error, why it existed, and how it was fixed.

The following is an excerpt from delroth's commit message:

Let's talk a bit about this bug. [The Wind Waker Heat Distortion Issue] is the 12th oldest remaining glitch in Dolphin. It was a lot of fun to debug and it kept me busy for a while :) Shoutout to Nintendo for framework.map, without which this could have taken a lot longer.

Basic debugging using apitrace shows that the heat effect is rendered in an interesting way: An EFB copy texture is created, using the hardware scaler to divide the texture resolution by two and that way create the blur effect. This texture is then warped using indirect texturing: a deformation map is used to "move" the texture coordinates used to sample the framebuffer copy. Pixel shader: http://pastie.org/private/25oe1pqn6s0h5yieks1jfw

Interestingly, when looking at apitrace, the deformation texture was only 4x4 pixels... weird. It also does not have any feature that you would expect from a deformation map. Seeing how the heat effect glitches, this deformation texture being wrong looks like a good candidate for the problem. Let's see how it's loaded!

By NOPing random calls to GXSetTevIndirect, we find a call that when removed breaks the effect completely. The parameters used for this call come from the results of methods of JPAExTexShapeArc objects. 3 different objects go through this code path, by breaking each one we can notice that the one "controlling" the heat effect is the one at 0x81575b98.

Following the path of this object a bit more, we can see that it has a method called "getIndTexId". When this is called, the returned texture ID is used to index a map and get a JPATextureArc object stored at 0x81577bec. Nice feature of JPATextureArc: they have a getName method. For this object, it returns "AK_kagerouInd01". We can probably use that to see how this texture should look like, by loading it "manually" from the Wind Waker DVD. Unfortunately I don't know how to do that. Fortunately @Abahbob (https://twitter.com/Abahbob) got me the texture I wanted in less than 10min after I asked him on Twitter.

AK_kagerouInd01 is a 32x32 texture that really looks like a deformation map: http://i.imgur.com/0TfZEVj.png. Fun fact: "kagerou" means "heat haze" in JP.

So apparently we're not using the right texture object when rendering! The GXTexObj that maps to the JPATextureArc is at offset 0x81577bf0 and points to data at 0x80ed0460, but we're loading texture data from 0x0039d860 instead. I started to suspect the BP write that loads the texture parameters "did not work" somehow. Logged that and yes: nothing gets loaded to texture stage 1! ... but it turns out this is normal, the deformation map is loaded to texture stage 5 (hardcoded in the DOL). Wait, why is the TextureCache trying to load from texture stage 1 then?! Because someone sucked at hex.

— delroth

Broken Code

u32 getTexCoord(int i) { return (hex>>(6*i+3))&3; }
u32 getTexMap(int i) { return (hex>>(6*i))&3; }

Working Code

u32 getTexCoord(int i) { return (hex>>(6*i+3))&7; }
u32 getTexMap(int i) { return (hex>>(6*i))&7; }
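
To spell out the punchline: the texture map and coordinate indices are three-bit fields, and the deformation map sits at texture stage 5 (binary 101). Masking with 3 (binary 011) instead of 7 (binary 111) drops the high bit, which is exactly why the TextureCache went looking at stage 1. A tiny illustration of the arithmetic (written in Go here purely for demonstration, not Dolphin code):

package main

import "fmt"

func main() {
    stage := 5                    // the stage the game actually uses (binary 101)
    fmt.Println(stage&3, stage&7) // prints "1 5": &3 truncates to stage 1, &7 keeps stage 5
}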

Oh yeah, this also fixes videos in some EA games.

New York Times Redesign

High-end CNC machines can't be moved without manufacturers' permission - Boing Boing

Comments:"High-end CNC machines can't be moved without manufacturers' permission - Boing Boing"

URL:http://boingboing.net/2014/01/06/high-end-cnc-machines-cant-b.html



On Practical Machinist, there's a fascinating thread about the manufacturer's lockdown on a high-priced, high-end Mori Seiki NV5000 A/40 CNC mill. The person who started the thread owns the machine outright, but has discovered that if he moves it at all, a GPS and gyro sensor package in the machine automatically shuts it down and will not allow it to restart until he receives a manufacturer's unlock code.

Effectively, this means that machinists' shops can't rearrange their very expensive, very large tools to improve their workflow from job to job without getting permission from the manufacturer (which can take a month!), even if they own the gear.

According to posts in the thread, many manufacturers have introduced this lockdown feature because their goods have found their way into Iran, violating the embargo. So now these machines can't be moved at all without the manufacturer's knowledge and consent, a situation that the manufacturers have turned into a business opportunity by using the technology to assist in repossessing machines from delinquent lease-payers -- and requiring permission for the privilege of deciding where to place their key capital assets.

I'm interested in the security implications of this. Malware like Stuxnet attacked embedded systems on computerized machines, causing them to malfunction in subtle ways. A subtly weakened or defective part from a big mill like the NV5000 might find its way into a vehicle or a high-speed machine, with disastrous consequences.

And since the mills are designed to be opaque to their owners, and to actively prevent their owners from reverse-engineering them (lest they disable the gyro/GPS), an infection would be nearly impossible to detect. Criminals and saboteurs are a lot less worried about voiding the warranty on your $100K business-asset than you are, and that asymmetry, combined with the mandate for opacity in the operations, presents a serious risk to machine shops and their customers (and their customers' users -- that is, everyone).

Thread: Mori/Ellison gyroscope unlocking

SPACEX SUCCESSFULLY LAUNCHES THAICOM 6 SATELLITE TO GEOSTATIONARY TRANSFER ORBIT | SpaceX

Comments:"SPACEX SUCCESSFULLY LAUNCHES THAICOM 6 SATELLITE TO GEOSTATIONARY TRANSFER ORBIT | SpaceX"

URL:http://www.spacex.com/press/2014/01/06/spacex-successfully-launches-thaicom-6-satellite-geostationary-transfer-orbit


THAICOM 6 mission marks second successful GTO flight for the upgraded Falcon 9 launch vehicle

Cape Canaveral Air Force Station, Florida – Today, Space Exploration Technologies (SpaceX) successfully launched the THAICOM 6 satellite for leading Asian satellite operator THAICOM.  Falcon 9 delivered THAICOM 6 to its targeted 295 x 90,000 km geosynchronous transfer orbit at 22.5 degrees inclination.  The Falcon 9 launch vehicle performed as expected, meeting 100% of mission objectives.

Falcon 9 lifted off from Space Launch Complex 40 (SLC-40) at 5:06 PM Eastern Time.  Approximately 184 seconds into flight, Falcon 9’s second stage’s single Merlin vacuum engine ignited to begin a five minute, 35 second burn that delivered the THAICOM 6 satellite into its parking orbit. Eighteen minutes after injection into the parking orbit, the second stage engine relit for just over one minute to carry the THAICOM 6 satellite to its final geostationary transfer orbit.  The restart of the Falcon 9 second stage is a requirement for all geostationary transfer missions.

“Today’s successful launch of the THAICOM 6 satellite marks the eighth successful flight in a row for Falcon 9,” said Gwynne Shotwell, President of SpaceX. “SpaceX greatly appreciates THAICOM’s support throughout this campaign and we look forward to a busy launch schedule in 2014.”   

The THAICOM 6 mission marks Falcon 9’s second flight to a geosynchronous transfer orbit and begins a regular cadence of launches planned for SpaceX in 2014. SpaceX has nearly 50 launches on manifest, of which over 60% are for commercial customers. 

This launch also marks the third of three qualification flights needed to certify the Falcon 9 to fly missions under the Evolved Expendable Launch Vehicle (EELV) program. Once Falcon 9 is certified, SpaceX will be eligible to compete to launch national security satellites for the U.S. Air Force.

 

About SpaceX

SpaceX designs, manufactures, and launches the world's most advanced rockets and spacecraft. The company was founded in 2002 by Elon Musk to revolutionize space transportation, with the ultimate goal of enabling people to live on other planets. Today, SpaceX is advancing the boundaries of space technology through its Falcon launch vehicles and Dragon spacecraft. SpaceX is a private company owned by management and employees, with minority investments from Founders Fund, Draper Fisher Jurvetson, and Valor Equity Partners. The company has more than 3,000 employees in California, Texas, Washington, D.C., and Florida. For more information, visit www.spacex.com

Clang 3.4 Release Notes — Clang 3.4 documentation

Comments:"Clang 3.4 Release Notes — Clang 3.4 documentation"

URL:http://llvm.org/releases/3.4/tools/clang/docs/ReleaseNotes.html


This document contains the release notes for the Clang C/C++/Objective-C frontend, part of the LLVM Compiler Infrastructure, release 3.4. Here we describe the status of Clang in some detail, including major improvements from the previous release and new feature work. For the general LLVM release notes, see the LLVM documentation. All LLVM releases may be downloaded from the LLVM releases web site.

For more information about Clang or LLVM, including information about the latest release, please check out the main Clang Web Site or the LLVM Web Site.

Note that if you are reading this file from a Subversion checkout or the main Clang web page, this document applies to the next release, not the current one. To see the release notes for a specific release, please see the releases page.

Some of the major new features and improvements to Clang are listed here. Generic improvements to Clang as a whole or to its underlying infrastructure are described first, followed by language-specific sections with improvements to Clang’s support for those languages.

This is expected to be the last release of Clang which compiles using a C++98 toolchain. We expect to start using some C++11 features in Clang starting after this release. That said, we are committed to supporting a reasonable set of modern C++ toolchains as the host compiler on all of the platforms. This will at least include Visual Studio 2012 on Windows, and Clang 3.1 or GCC 4.7.x on Mac and Linux. The final set of compilers (and the C++11 features they support) is not set in stone, but we wanted users of Clang to have a heads up that the next release will involve a substantial change in the host toolchain requirements.

Note that this change is part of a change for the entire LLVM project, not just Clang.

Improvements to Clang’s diagnostics

Clang’s diagnostics are constantly being improved to catch more issues, explain them more clearly, and provide more accurate source information about them. The improvements since the 3.3 release include:

  • -Wheader-guard warns on mismatches between the #ifndef and #define lines in a header guard.

    #ifndef multiple
    #define multi
    #endif

    returns warning: ‘multiple’ is used as a header guard here, followed by #define of a different macro [-Wheader-guard]

  • -Wlogical-not-parentheses warns when a logical not (‘!’) only applies to the left-hand side of a comparison. This warning is part of -Wparentheses.

    int i1 = 0, i2 = 1;
    bool ret;
    ret = !i1 == i2;

    returns warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]

  • Boolean increment, a deprecated feature, has its own warning flag -Wdeprecated-increment-bool, and is still part of -Wdeprecated.

  • Clang errors on builtin enum increments and decrements in C++.

    enum A { A1, A2 };
    void test() { A a; a++; }

    returns error: cannot increment expression of enum type ‘A’

  • -Wloop-analysis now warns on for-loops which have the same increment or decrement in the loop header as the last statement in the loop.

    void foo(char *a, char *b, unsigned c) {
      for (unsigned i = 0; i < c; ++i) {
        a[i] = b[i];
        ++i;
      }
    }

    returns warning: variable ‘i’ is incremented both in the loop header and in the loop body [-Wloop-analysis]

  • -Wuninitialized now performs checking across field initializers to detect when one field is used uninitialized in another field's initialization.

    class A {
      int x;
      int y;
      A() : x(y) {}
    };

    returns warning: field ‘y’ is uninitialized when used here [-Wuninitialized]

  • Clang can detect initializer list use inside a macro and suggest parentheses if possible to fix.

  • Many improvements to Clang’s typo correction facilities, such as:

    • Adding global namespace qualifiers so that corrections can refer to shadowed or otherwise ambiguous or unreachable namespaces.
    • Including accessible class members in the set of typo correction candidates, so that corrections requiring a class name in the name specifier are now possible.
    • Allowing typo corrections that involve removing a name specifier.
    • In some situations, correcting function names when a function was given the wrong number of arguments, including situations where the original function name was correct but was shadowed by a lexically closer function with the same name yet took a different number of arguments.
    • Offering typo suggestions for ‘using’ declarations.
    • Providing better diagnostics and fixit suggestions in more situations when a ‘->’ was used instead of ‘.’ or vice versa.
    • Providing more relevant suggestions for typos followed by ‘.’ or ‘=’.
    • Various performance improvements when searching for typo correction candidates.
  • LeakSanitizer is an experimental memory leak detector which can be combined with AddressSanitizer.

  • Clang no longer special cases -O4 to enable lto. Explicitly pass -flto to enable it.
  • Clang no longer fails on >= -O5. These flags are mapped to -O3 instead.
  • Command line “clang -O3 -flto a.c -c” and “clang -emit-llvm a.c -c” are no longer equivalent.
  • Clang now errors on unknown -m flags (-munknown-to-clang), unknown -f flags (-funknown-to-clang) and unknown options (-what-is-this).
  • Added new checked arithmetic builtins for security critical applications.
  • Fixed an ABI regression, introduced in Clang 3.2, which affected member offsets for classes inheriting from certain classes with tail padding. See PR16537.
  • Clang 3.4 supports the 2013-08-28 draft of the ISO WG21 SG10 feature test macro recommendations. These aim to provide a portable method to determine whether a compiler supports a language feature, much like Clang’s __has_feature macro.

C++1y Feature Support

Clang 3.4 supports all the features in the current working draft of the upcoming C++ standard, provisionally named C++1y. Support for the following major new features has been added since Clang 3.3:

  • Generic lambdas and initialized lambda captures.
  • Deduced function return types (auto f() { return 0; }).
  • Generalized constexpr support (variable mutation and loops).
  • Variable templates and static data member templates.
  • Use of ' as a digit separator in numeric literals.
  • Support for sized ::operator delete functions.

In addition, [[deprecated]] is now accepted as a synonym for Clang’s existing deprecated attribute.

Use -std=c++1y to enable C++1y mode.

  • OpenCL C “long” now always has a size of 64 bits, and all OpenCL C types are aligned as specified in the OpenCL C standard. Also, “char” is now always signed.

These are major API changes that have happened since the 3.3 release of Clang. If upgrading an external codebase that uses Clang as a library, this section should help get you past the largest hurdles of upgrading.

Wide Character Types

The ASTContext class now keeps track of two different types for wide character types: WCharTy and WideCharTy. WCharTy represents the built-in wchar_t type available in C++. WideCharTy is the type used for wide character literals; in C++ it is the same as WCharTy, but in C99, where wchar_t is a typedef, it is an integer type.

The static analyzer has been greatly improved. This impacts the overall analyzer quality and reduces a number of false positives. In particular, this release provides enhanced C++ support, reasoning about initializer lists, zeroing constructors, noreturn destructors and modeling of destructor calls on calls to delete.

Clang now includes a new tool clang-format which can be used to automatically format C, C++ and Objective-C source code. clang-format automatically chooses linebreaks and indentation and can be easily integrated into editors, IDEs and version control systems. It supports several pre-defined styles as well as precise style control using a multitude of formatting options. clang-format itself is just a thin wrapper around a library which can also be used directly from code refactoring and code translation tools. More information can be found on Clang Format’s site.

  • clang-cl provides a new driver mode that is designed for compatibility with Visual Studio’s compiler, cl.exe. This driver mode makes Clang accept the same kind of command-line options as cl.exe. The installer will attempt to expose clang-cl in any Visual Studio installations on the system as a Platform Toolset, e.g. “LLVM-vs2012”. clang-cl targets the Microsoft ABI by default. Please note that this driver mode and compatibility with the MS ABI is highly experimental.

The following methods have been added:

A wide variety of additional information is available on the Clang web page. The web page contains versions of the API documentation which are up-to-date with the Subversion revision of the source code. You can access versions of these documents specific to this release by going into the “clang/docs/” directory in the Clang tree.

If you have any questions or comments about Clang, please feel free to contact us via the mailing list.


GitHub System Status

London's first pay-per-minute cafe: will the idea catch on? | Travel | theguardian.com

Comments:" London's first pay-per-minute cafe: will the idea catch on? | Travel | theguardian.com "

URL:http://www.theguardian.com/travel/2014/jan/08/pay-per-minute-cafe-ziferblat-london-russia


Ziferblat, London – where drinks are free, but you pay 3p-per-minute to be there

Ever felt you've overstayed your welcome in a cafe, by reading, working or surfing the web while hugging the latte you bought two hours ago? Pay-per-minute cafes could be the answer. Ziferblat, the first UK branch of a Russian chain, has just opened in London (388 Old Street), where "everything is free inside except the time you spend there". The fee: 3p a minute.

Ziferblat means clock face in Russian and German (Zifferblatt). The idea is guests take an alarm clock from the cupboard on arrival and note the time, then keep it with them, before, quite literally, clocking out at the end. There's no minimum time. Guests can also get stuck into the complimentary snacks (biscuits, fruit, vegetables), or prepare their own food in the kitchen; they can help themselves to coffee from the professional machine, or have it made for them. There's even a piano – an idea that could seem brilliant or terrible, depending on who takes the seat.

Pick one of the clocks from the cupboard and take a seat

Ziferblat has opened 10 branches in Russia in the past two years and now wants to take the idea worldwide. With hostels, hotels and cafes around the world often filled with people either working remotely or enjoying some downtime online, the market for expansion is certainly there. The "coffice", we're told, is the way of the future.

Owner Ivan Mitin says during the first month of the UK opening, they have already drawn in some regulars. "Londoners are more prepared for such a concept; they understand the idea instantly. It's funny to see people queueing here to wash their dishes. It's not obligatory, but it's appreciated. They even wash each other's dishes. It's very social. We think of our guests as micro tenants, all sharing the same space."

Eight days into 2014, Time Out has already declared Ziferblat "a contender for best opening of the year". But what do you think? Does the idea appeal? Does £1.80 an hour sound like good value? Would you feel more relaxed, or more under pressure with a clock by your side? Let us know in the comments below.

BPS Research Digest: Childhood amnesia kicks in around age 7

Comments:"BPS Research Digest: Childhood amnesia kicks in around age 7"

URL:http://bps-research-digest.blogspot.com/2014/01/childhood-amnesia-kicks-in-around-age-7.html


You could travel the world with an infant aged under 3 and it's almost guaranteed that when they get older they won't remember a single boat trip, plane ride or sunset. This is thanks to a phenomenon, known as childhood or infantile amnesia, that means most of us lose all our earliest autobiographical memories. It's a psychological conundrum because when they are 3 or younger, kids are able to discuss autobiographical events from their past. So it's not that memories from before age 3 never existed, it's that they are subsequently forgotten.

Most of the research in this area has involved adults and children reminiscing about their earliest memories. For a new study Patricia Bauer and Marina Larkina have taken a different approach. They recorded mothers talking to their 3-year-olds about six past events, such as zoo visits or first day at pre-school. The researchers then re-established contact with the same families at different points in the future. Some of the children were quizzed again by a researcher when aged 5, others at age 6 or 7, 8 or 9. This way the researchers were able to chart differences in amounts of forgetting through childhood.

Bauer and Larkina uncovered a paradox - at ages 5 to 7, the children remembered over 60 per cent of the events they'd chatted about at age 3. However, their recall for these events was immature in the sense of containing few evaluative comments and few mentions of time and place. In contrast, children aged 8 and 9 recalled fewer than 40 per cent of the events they'd discussed at age 3, but those memories they did recall were more adult-like in their content. Bauer and Larkina said this suggests that adult-like remembering and forgetting develops at around age 7 or soon after. They also speculated that the immature form of recall seen at ages 5 to 7 could actually contribute to the forgetting of autobiographical memories - a process known as "retrieval-induced forgetting".

Another important finding was that the style mothers used when chatting with their 3-year-olds was associated with the level of remembering by those children later on. Specifically, mothers who used more "deflections", such as "Tell me more" and "What happened?" tended to have children who subsequently recalled more details of their earlier memories.

The researchers said their work "provides compelling evidence that accounts of childhood amnesia that focus only on changes in remembering cannot explain the phenomenon. The complementary processes involved in forgetting are also part of the explanation."

_________________________________

Bauer PJ and Larkina M (2013). The onset of childhood amnesia in childhood: A prospective investigation of the course and determinants of forgetting of early-life events. Memory (Hove, England) PMID: 24236647

Further reading
Tapping into people's earliest memories
Where did all the memories go?

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Manage multiple accounts

Comments:"Manage multiple accounts"

URL:https://stripe.com/blog/manage-multiple-accounts


Alex Sexton, January 8, 2014

We’ve noticed that many of you manage more than one Stripe account. Now, instead of creating multiple user accounts with different email addresses, you can add new or existing accounts to one profile and quickly switch between them in the Dashboard (and forget having to remember any additional email and password combos).

You can merge existing Stripe accounts, too. Simply invite your other accounts to your team (at any permission level) and you'll be able to seamlessly switch between them. We’ve also updated teams so that you can invite users who already have a Stripe account to your team. Users can belong to multiple teams, and you can quickly switch between these shared accounts as well.

Hopefully this makes managing multiple accounts less annoying. We have lots more de-annoying planned for 2014.

One of Many Worlds: Another go at Go ... failed!

Comments:"One of Many Worlds: Another go at Go ... failed!"

URL:http://oneofmanyworlds.blogspot.com/2014/01/another-go-at-go-failed.html


After a considerable gap, I gave Go another go!

The Problem

As part of a consulting engagement, I accepted a project to develop some statistical inference models in the area of drug (medicine) repositioning. Input data comprises three sets of associations: (i) between drugs and adverse effects, (ii) between drugs and diseases, and (iii) between drugs and targets (proteins). Using drugs as hub elements, associations are inferred between the other three kinds of elements, pair-wise.

The actual statistics computed vary from simple measures such as sensitivity (e.g. how sensitive is a given drug to a set of query targets?) and Matthews Correlation Coefficient, to construction of rather complex confusion matrices and generalised profile vectors for drugs, diseases, etc. Accordingly, the computational intensity varies considerably across parts of the models.

For the size of the test subset of input data, the in-memory graph of direct and transitive associations currently has about 15,000 vertices and over 1,400,000 edges. This is expected to grow by two orders of magnitude (or more) when the full data set is used for input.

Programming Language

I had some temptation initially to prototype the first model (or two) in a language like Ruby. Giving the volume of data its due weight though, I decided to use Ruby for ad hoc validation of parts of the computations, with coding proper happening in a faster, compiled language. I have been using Java for most of my work (both open source as well as for clients). However, considering the fact that statistics instances are write-only, I hoped that Go could help me make the computations parallel easily[1].

My choice of Go caused some discomfort on the part of my client's programmers, since they have to maintain the code down the road. No serious objections were raised nevertheless. So, I went ahead and developed the first three models in Go.

Practical Issues With Go

The Internet is abuzz with success stories involving Go; there isn't an additional perspective that I can add! The following are factors, in no particular order, that inhibited my productivity as I worked on the project.

No Set in the Language

Through (almost) every hour of this project, I found myself needing an efficient implementation of a set data structure. Go does not have a built-in set; it has arrays, slices and maps (hash tables). And, Go lacks generics. Consequently, whichever generic data structure is not provided by the compiler cannot be implemented in a library. I ended up using maps as sets. Everyone who does that realises the pain involved, sooner rather than later. Maps provide uniqueness of keys, but I needed sets for their set-like properties: being able to do minus, union, intersection, etc. I had to code those in-line every time. I have seen several people argue vehemently (even arrogantly) in golang-nuts that it costs just a few lines each time, and that it makes the code clearer. Nothing could be further from truth. In-lining those operations has only reduced readability and obscured my intent. I had to consciously train my eyes to recognise those blocks to mean union, intersection, etc. They also were very inconvenient when trying different sequences of computations for better efficiency, since a quick glance never sufficed!
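
To make the pain concrete, here is a minimal sketch (illustrative only, not this project's code) of the maps-as-sets idiom in Go, with an intersection written in-line the way the paragraph above describes; nothing in the code itself announces that it is an intersection:

package main

import "fmt"

func main() {
    // Sets emulated as maps with empty-struct values.
    a := map[string]struct{}{"d1": {}, "d2": {}, "d3": {}}
    b := map[string]struct{}{"d2": {}, "d3": {}, "d4": {}}

    // In-lined intersection: the intent has to be inferred from the shape of the loop.
    inter := make(map[string]struct{})
    for k := range a {
        if _, ok := b[k]; ok {
            inter[k] = struct{}{}
        }
    }
    fmt.Println(len(inter)) // 2
}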

Also, I found the performance of Go maps wanting. Profiling showed that get operations were consuming a good percentage of the total running time. Of course, several of those get operations are actually to check for the presence of a key.

No BitSet in the Standard Library

Since the performance of maps was dragging the computations back, I investigated the possibility of changing the algorithms to work with bit sets. However, there is no BitSet or BitArray in Go's standard library. I found two packages in the community: one on code.google.com and the other on github.com. I selected the former both because it performed better and provided a convenient iteration through only the bits set to true. Mind you, the data is mostly sparse, and hence both these were desirable characteristics.

Incidentally, both the bit set packages have varying performance. I could not determine the sources of those variations, since I could not easily construct test data to reproduce them on a small scale. A well-tested, high performance bit set in the standard library would have helped greatly.
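
For readers unfamiliar with the data structure, here is a rough, hand-rolled sketch of a bit set in Go (deliberately not either of the community packages mentioned above): membership tests become a shift and a mask over a slice of uint64 words, which is what makes it attractive for sparse, lookup-heavy workloads.

package main

import "fmt"

// BitSet is a minimal fixed-capacity bit set backed by []uint64 words.
type BitSet struct {
    words []uint64
}

func NewBitSet(n uint) *BitSet {
    return &BitSet{words: make([]uint64, (n+63)/64)}
}

func (s *BitSet) Set(i uint)       { s.words[i/64] |= 1 << (i % 64) }
func (s *BitSet) Test(i uint) bool { return s.words[i/64]&(1<<(i%64)) != 0 }

func main() {
    s := NewBitSet(128)
    s.Set(5)
    s.Set(70)
    fmt.Println(s.Test(5), s.Test(6), s.Test(70)) // true false true
}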

Generics, or Their Absence

The general attitude in the Go community towards generics seems to have degenerated into one consisting of a mix of disgust and condescension, unfortunately. Well-made cases that illustrate problems best served by generics are being dismissed with such impudence and temerity as to cause repulsion. That Russ Cox's original formulation of the now-famous tri-lemma is incomplete at best has not sunk in despite four years of discussions. Enough said!

In my particular case, I have six sets of computations that differ in:

  • types of input data elements held in the containers, and upon which the computations are performed (a unique combination of three types for each pair, to be precise),
  • user-specified values for various algorithmic parameters for a given combination of element types,
  • minor computational steps and
  • types (and instances) of containers into which the results aggregate.

These differences meant that I could not write common template code that could be used to generate six versions using extra-language tools (as inconvenient as that already is). The amount of boiler-plate needed externally to handle the differences very quickly became both too much and too confusing. Eventually, I resorted to six fully-specialised versions each of data holders, algorithms and results containers, just for manageability of the code.

This had an undesirable side effect, though: now, each change to any of the core containers or computations had to be manually propagated to all the corresponding remaining versions. It soon led to a disinclination on my part to quickly iterate through alternative model formulations, since the overhead of trying new formulations was non-trivial.

Poor Performance

This was simply unexpected! With fully-specialised versions of graph nodes, edges, computations and results containers, I was expecting very good performance. Initially, it was not very good. In single-threaded mode, a complete run of three models on the test set of data took about 9 minutes 25 seconds. I re-examined various computations. I eliminated redundant checks in some paths, combined two passes into one at the expense of more memory, pre-identified query sets so that the full sets need not be iterated over, etc. At the end of all that, in single-threaded mode, a complete run of three models on the test set of data took about 2 minutes 40 seconds. For a while, I thought that I had squeezed it to the maximum extent. And so thought my client, too! More on that later.

Enhancement Requests

At that point, my client requested for three enhancements, two of which affected all the six + six versions of the models. I ploughed through the first change and propagated it through the other eleven specialised versions. I had a full taste of what was to come, though, when I was hit with the realisation that I was yet working on Phase 1 of the project, which had seven proposed phases in all!

Back to Java!

I took a break of one full day, and did a hard review of the code (and my situation, of course). I quickly identified three major areas where generics and (inheritance-based) polymorphism would have presented a much more pleasant solution. I had already spent 11 weeks on the project, the bulk of that going into developing and evaluating the statistical models. With the models now ready, I estimated that a re-write in Java would cost me about 10 working days. I decided to take the plunge.

The full re-write in Java took 8 working days. The ease with which I could model the generic data containers and results containers was quite expected. Java's BitSet class was of tremendous help. I had some trepidation about the algorithmic parts. However, they turned out to be easier than I anticipated! I made the computations themselves parts of formally-typed abstract classes, with the concrete parts such as substitution of actual types, the user-specified parameters and minor variations implemented by the subclasses. Conceptually, it was clear and clean: the base computations were easy to follow in the abstract classes. The overrides were clearly marked so, and were quite pointed.

Naturally, I expected a reduction in the size of the code base; I was not sure by how much, though. The actual reduction was by about 40%. This was nice, since it came with the benefit of more manageable code.

The most unexpected outcome concerned performance: a complete run of the three models on the test set of data now took about 30 seconds! My first suspicion was that something went so wrong as to cause a premature (but legal) exit somewhere. However, the output matched what was produced by the Go version (thanks Ruby), so that could not have been true. I re-ran the program several times, since it sounded too good to be true. Each time, the run completed in about 30 seconds.

I was left scratching my head. My puzzlement continued for a while, before I noticed something: the CPU utilisation reported by /usr/bin/time was around 370-380%! I was now totally stumped. conky showed that all processor cores were indeed being used. How could that be? The program was very much single-threaded.

After some thought and Googling, I saw a few factors that potentially enabled a utilisation of multiple cores.

  • All the input data classes were final.
  • All the results classes were final, with all of their members being final too.
  • All algorithm subclasses were final.
  • All data containers (masters), the multi-mode graph itself, and all results containers had only insert and look-up operations performed on them. None had a delete operation.

Effectively, almost all of the code involved only final classes. And, all operations were append-only. The compiler may have noticed those; the run-time must have noticed those. I still do not know what is going on inside the JRE as the program runs, but I am truly amazed by its capabilities! Needless to say, I am quite happy with the outcome, too!

Conclusions

  • If your problem domain involves patterns that benefit from type parameterisation or[2] polymorphism that is easily achievable through inheritance, Go is a poor choice.
  • If you find your Go code evolving into having few interfaces but many higher-order functions (or methods) that resort to frequent type assertions, Go is a poor choice.
  • Go runtime can learn a trick or two from JRE 7 as regards performance.

These may seem obvious to more informed people; but to me, it was some enlightenment!

[1] I tried Haskell and Elixir as candidates, but nested data holders with multiply circular references appear to be problematic to deal with in functional languages. Immutable data presents interesting challenges when it comes to cyclic graphs! The solutions suggested by the respective communities involved considerable boiler-plate. More importantly, the resulting code lost direct correspondence with the problem's structural elements. Eventually, I abandoned that approach.

[2] Not an exclusive or.

Stop Writing JavaScript Compilers! Make Macros Instead

Comments:"Stop Writing JavaScript Compilers! Make Macros Instead"

URL:http://jlongster.com/Stop-Writing-JavaScript-Compilers--Make-Macros-Instead


January 07 2014

The past several years have been kind to JavaScript. What was once a mediocre language plagued with political stagnation is now thriving with an incredible platform, a massive and passionate community, and a working standardization process that moves quickly. The web is the main reason for this, but node.js certainly has played its part.

ES6, or Harmony, is the next batch of improvements to JavaScript. It is near finalization, meaning that all interested parties have mostly agreed on what is accepted. It's more than just a new standard; Chrome and Firefox have already implemented a lot of ES6 like generators, let declarations, and more. It really is happening, and the process that ES6 has gone through will pave the way for quicker, smaller improvements to JavaScript in the future.

There is much to be excited about in ES6. But the thing I am most excited about is not in ES6 at all. It is a humble little library called sweet.js.

Sweet.js implements macros for JavaScript. Stay with me here. Macros are widely abused or badly implemented so many of you may be in shock right now. Is this really a good idea?

Yes, it is, and I hope this post explains why.

Macros Done Right

There are lots of different notions of "macros" so let's get that out of the way first. When I say macro I mean the ability to define small things that can syntactically parse and transform code around them.

C calls these strange things that look like #define foo 5 macros, but they really aren't macros like we want. It's a bastardized system that essentially opens up a text file, does a search-and-replace, and saves. It completely ignores the actual structure of the code so they are pointless except for a few trivial things. Many languages copy this feature and claim to have "macros" but they are extremely difficult and limiting to work with.

Real macros were born from Lisp in the 1980's with defmacro. It's shocking how often good ideas have roots back into papers from the 70s and 80s, and even specifically from Lisp itself. It was a natural step for Lisp because Lisp code has exactly the same syntax as its data structures. This means it's easy to throw data and code around and change its meaning.

Lisp went on to prove that macros fundamentally change the ecosystem of the language, and it's no surprise that newer languages have worked hard to include them.

However, it's a whole lot harder to do that kind of stuff in other languages that have a lot more syntax (like JavaScript). The naive approach would make a function that takes an AST, but ASTs are really cumbersome to work with, and at that point you might as well just write a compiler. Luckily, a lot of research recently has solved this problem and real Lisp-style macros have been included in newer languages like julia and rust.

And now, JavaScript.

A Quick Tour of Sweet.js

This post is not a tutorial on JavaScript macros. This post intends to explain how they could radically improve JavaScript's evolution. But I think I need to provide a little meat first for people who have never seen macros before.

Macros for languages that have a lot of special syntax take advantage of pattern matching. The idea is that you define a macro with a name and a list of patterns. Whenever that name is invoked, at compile-time the code is matched and expanded.

macro define {
  rule { $x } => {
    var $x
  }
  rule { $x = $expr } => {
    var $x = $expr
  }
}

define y;
define y = 5;

The above code expands to:

var y;
var y = 5;

when run through the sweet.js compiler.

When the compiler hits define, it invokes the macro and runs each rule against the code after it. When a pattern is matched, it returns the code within the rule. You can bind identifiers & expressions within the matching pattern and use them within the code (typically prefixed with $) and sweet.js will replace them with whatever was matched in the original pattern.

We could have written a lot more code within the rule for more advanced macros. However, you start to see a problem when you actually use this: if you introduce new variables in the expanded code, it's easy to clobber existing ones. For example:

macro swap {
  rule { ($x, $y) } => {
    var tmp = $x;
    $x = $y;
    $y = tmp;
  }
}

var foo = 5;
var tmp = 6;
swap(foo, tmp);

swap looks like a function call but note how the macro actually matches on the parentheses and 2 arguments. It might be expanded into this:

var foo = 5;
var tmp = 6;
var tmp = foo;
foo = tmp;
tmp = tmp;

The tmp created from the macro collides with my local tmp. This is a serious problem, but macros solve this by implementing hygiene. Basically they track the scope of variables during expansion and rename them to maintain the correct scope. Sweet.js fully implements hygiene so it never generates the code you see above. It would actually generate this:

var foo = 5;
var tmp$1 = 6;
var tmp$2 = foo;
foo = tmp$1;
tmp$1 = tmp$2;

It looks a little ugly, but notice how two different tmp variables are created. Hygiene is what makes it possible to write complex macros elegantly, without worrying about name collisions.

But what if you want to intentionally break hygiene? Or what if you want to process certain forms of code that are too difficult for pattern matching? This is rare, but you can do it with something called case macros. With these macros, actual JavaScript code is run at expand time, and you can do anything you want.

macro rand {
  case { _ $x } => {
    var r = Math.random();
    letstx $r = [makeValue(r)];
    return #{ var $x = $r }
  }
}
rand x;

The above would expand to:

var x$246 = 0.8367501533161177;

Of course, it would expand to a different random number every time. With case macros, you use case instead of rule; the code within the case is run at expand time, and you use #{} to create "templates" that construct code just like the body of a rule. I'm not going to go deeper into this now, but I will be posting tutorials in the future, so follow my blog if you want to hear more about how to write these.

These examples are trivial but hopefully show that you can hook into the compilation phase easily and do really powerful things.
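To give a slightly more useful flavor of this, here's a sketch of an unless macro built with the same rule syntax from above. This isn't from the sweet.js docs, and the names (unless, user.loggedIn, redirectToLogin) are just illustrative, but it shows how a few lines of pattern matching add what feels like a new control-flow keyword:

macro unless {
  rule { ($cond:expr) { $body ... } } => {
    // invert the condition; $body ... splices the matched statements back in
    if (!($cond)) {
      $body ...
    }
  }
}

unless (user.loggedIn) {     // user.loggedIn and redirectToLogin are stand-ins
  redirectToLogin();
}

which should expand to roughly:

if (!(user.loggedIn)) {
  redirectToLogin();
}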

Macros are modular, Compilers are not!

One thing I like about the JavaScript community is that they aren't afraid of compilers. There are a wealth of libraries for parsing, inspecting, and transforming JavaScript, and people are doing awesome things with them.

Except that doesn't really work for extending JavaScript.

Here's why: it splits the community. If project A implements an extension to JavaScript and project B implements a different extension, I have to choose between them. If I use project A's compiler to try to parse code from project B, it will error.

Additionally, each project will have a completely different build process, and having to learn a new one every time I want to try out a new extension is terrible (the result is that fewer people try out cool projects, and fewer cool projects get written). I use Grunt, so every damn time I have to write a grunt task for the project if one doesn't already exist.

Note: Maybe you are somebody that doesn't like build steps at all. I understand that, but I would encourage you to get over that fear. Tools like Grunt make it easy to automatically build on change, and you gain a lot by doing so.

For example, traceur is a really cool project that compiles a lot of ES6 features into simple ES5. However, it only has limited support for generators. Let's say I wanted to use regenerator instead, since it's much more awesome at compiling yield expressions.

I can't reliably do that because traceur might implement ES6 features that regenerator's compiler doesn't know about.

Now, for ES6 features we kind of get lucky because it is a standard, and parsers like esprima have added support for the new syntax, so lots of projects will recognize it. But passing code through multiple compilers is just not a good idea. Not only is it slower, it's not reliable, and the toolchain is incredibly complicated.

The process looks like this: your source goes into compiler A (say traceur), its output goes into compiler B (say regenerator), and so on until you finally get runnable JavaScript, with each compiler re-parsing the previous one's output.

I don't think anyone is actually doing this because it doesn't compose. The result is that we have big monolithic compilers and we're forced to choose between them.

Using macros, it would look more like this: your source plus a set of macro modules goes into sweet.js, which expands everything in a single pass and outputs runnable JavaScript.

There's only one build step, and we tell sweet.js which modules to load and in what order. sweet.js registers all of the loaded macros and expands your code with all of them.

You can set up an ideal workflow for your project. This is my current setup: I configure grunt to run sweet.js on all my server-side and client-side js (see my gruntfile). I run grunt watch whenever I want to develop, and whenever a change is made grunt compiles that file automatically with sourcemaps. If I see a cool new macro somebody wrote, I just npm install it and tell sweet.js to load it in my gruntfile, and it's available. Note that good sourcemaps are generated for all macros, so debugging works naturally.
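For the curious, a Gruntfile for that kind of setup might look roughly like the sketch below. The sweetjs task name and its options (modules, sourceMap) are assumptions here, not the exact API of any particular plugin, so check whichever sweet.js plugin you install for the real option names; the point is the shape of the workflow: load macro modules from npm, compile src/ to build/, and recompile on change.

// Gruntfile.js (sketch; task and option names for the sweet.js step are illustrative)
module.exports = function (grunt) {
  grunt.initConfig({
    sweetjs: {                           // hypothetical sweet.js compile task
      options: {
        modules: ['es6-macros'],         // macro modules to load, in order (assumed option name)
        sourceMap: true                  // keep sourcemaps so debugging works naturally (assumed option name)
      },
      all: {
        expand: true,
        cwd: 'src/',
        src: ['**/*.js'],
        dest: 'build/'
      }
    },
    watch: {                             // grunt-contrib-watch: recompile whenever a source file changes
      js: {
        files: ['src/**/*.js'],
        tasks: ['sweetjs']
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  // plus whichever sweet.js plugin provides the task above

  grunt.registerTask('default', ['sweetjs', 'watch']);
};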

Think about that. Imagine being able to pull in extensions to JavaScript that easily. What if TC39 actually introduced future JavaScript enhancements as macros? What if JavaScript actually adopted macros natively so that we never even had to run the build step?

This could loosen the shackles that tie JavaScript to legacy codebases and a slow standardization process. If you can opt in to language features piecemeal, you give the community a lot of power to be part of the conversation, since they can build those features themselves.

Speaking of which, ES6 is a great place to start. Features like destructuring and classes are purely syntactic improvements, but they are far from widely implemented. I am working on an es6-macros project which implements a lot of ES6 features as macros. You can pick and choose which features you want and start using ES6 today, as well as any other macros like Nate Faubion's excellent pattern matching library.
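To give a taste of how a feature like that can be built, here's a rough sketch of a tiny array-destructuring macro using the rule syntax from earlier. This is not the actual es6-macros implementation (destruct and getPoint are made-up names); the real thing has to handle far more cases, but the mechanism is the same: a pattern match plus a hygienic expansion.

macro destruct {
  rule { [$a, $b] = $arr:expr } => {
    var tmp = $arr;      // hygiene renames this tmp so it can't clobber user variables
    var $a = tmp[0];
    var $b = tmp[1];
  }
}

destruct [x, y] = getPoint();   // getPoint is just a stand-in

which expands to something like:

var tmp$1 = getPoint();
var x = tmp$1[0];
var y = tmp$1[1];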

Note: sweet.js does not support ES6 modules yet, but you can give the compiler a list of macro files to load. In the future, you will be able to use the ES6 module syntax in the files to load specific modules.

A good example of this is Clojure's core.async library, which offers a few operators that are actually macros. When a go block is hit, a macro is invoked that completely transforms the code into a state machine. Because of macros, they were able to implement something similar to generators, which let you pause and resume code, as a library; the core language doesn't know anything about it.

Of course, not everything can be a macro. The ECMA standardization process will always be needed and certain things require native implementations to expose complex functionality. But I would argue that a large part of improvements to JavaScript that people want could easily be implemented as macros.

That's why I'm excited about sweet.js. Keep in mind that it is still in its early stages, but it is being actively worked on. I will teach you how to write macros in the next few blog posts, so please follow my blog if you are interested.

[1401.1219] Consciousness as a State of Matter


Chris Granger - Light Table is open source


Comments:"Chris Granger - Light Table is open source"

URL:http://www.chris-granger.com/2014/01/07/light-table-is-open-source/


07 Jan 2014

Today Light Table is taking a huge step forward - every bit of its code is now on GitHub, and alongside that we're releasing Light Table 0.6.0, which includes all the infrastructure to write and use plugins. If you haven't been following the 0.5.* releases, this latest update also brings a tremendous amount of stability, performance, and cleanup to the party. All of this together means that Light Table is now the open source developer tool platform that we've been working towards. Go download it, and if you're new, give our tutorial a shot!

There's been a ton of work since the initial 0.5.0 release (about 200 items in the full changelog), so here are just a few of the highlights:

Plugins

The biggest thing to be released in 0.6.0 is the plugin infrastructure. Given the BOT architecture of Light Table, though, "plugin" is a bit of a misnomer - plugins are capable of fundamentally redefining or adding anything to Light Table.

Realistically, the only distinction between the core code and plugins is which things we ship by default. This gives us an enormous opportunity to redefine what development is. To see what some simple plugins look like, check out the declassifier and CSS plugins. We also added a plugin manager that hooks into a central list of plugins, so there's no need to go hunting all over GitHub.

Inline docs and doc search

This was one of the big things from the original Light Table prototype and video. You can now search docs and get documentation for what's under your cursor, right inline.

Clojure(Script) nrepl, auto-complete, jump to definition, paredit...

Clojure saw a lot of love in this release, from standard editor features like auto-complete and paredit, to a reworked back end that allows for remote nrepl sessions (connect to your server and watch things happen in real time!), all the way to interesting new things like custom watches and eval.

Performance, stability, and polish

Because we wanted to go open source, there was a big push to clean things up and get Light Table ready for its big unveiling. To that end we spent a ton of time trying to make everything smoother, faster, and leaner. In many cases we improved performance by orders of magnitude - auto-complete is now wickedly fast, behaviors load faster, and the command and navigate panes now scroll buttery smooth.

We made lots of changes and little improvements that also help LT feel like it does what you'd expect. You can now drop files/folders into the workspace tree, for example, or open the current file in a browser with a command. Along with that, we put some time into making the default skin for Light Table more professional, less obtuse, and a whole lot more versatile.

On the road again

Getting both plugins and the open source release into a single version was a big undertaking, but it gets us closer to the community-supported platform we've been working towards over the past year and a half. From here on out, anyone can play the game; what that will result in is hard to tell, but it will certainly be interesting. By getting the platform out, we can now focus a bit more on rethinking the state of the art. And we have some very interesting (and crazy!) ideas for what we think we can do to programming as an industry. I hope you'll join us in reimagining what it means to program.

Links

Katee/quietnet · GitHub


Comments:"Katee/quietnet · GitHub"

URL:https://github.com/Katee/quietnet


Quietnet

Simple chat program using near ultrasonic frequencies. Works without Wifi or Bluetooth and won't show up in a pcap.

Note: If you can clearly hear the send script working then your speakers may not be high quality enough to produce sounds in the near ultrasonic range.

Usage

run python send.py in one terminal window and python listen.py in another. Text you input into the send.py window should appear (after a delay) in the listen.py window.

Warning: May annoy some animals.

Installation

Quietnet depends on pyaudio and Numpy.

The Open-Office Trap : The New Yorker


Comments:"The Open-Office Trap : The New Yorker"

URL:http://www.newyorker.com/online/blogs/currency/2014/01/the-open-office-trap.html


In 1973, my high school, Acton-Boxborough Regional, in Acton, Massachusetts, moved to a sprawling brick building at the foot of a hill. Inspired by architectural trends of the preceding decade, the classrooms in one of its wings didn’t have doors. The rooms opened up directly onto the hallway, and tidbits about the French Revolution, say, or Benjamin Franklin’s breakfast, would drift from one classroom to another. Distracting at best and frustrating at worst, wide-open classrooms went, for the most part, the way of other ill-considered architectural fads of the time, like concrete domes. (Following an eighty-million-dollar renovation and expansion, in 2005, none of the new wings at A.B.R.H.S. have open classrooms.) Yet the workplace counterpart of the open classroom, the open office, flourishes: some seventy per cent of all offices now have an open floor plan.

The open office was originally conceived by a team from Hamburg, Germany, in the nineteen-fifties, to facilitate communication and idea flow. But a growing body of evidence suggests that the open office undermines the very things that it was designed to achieve. In June, 1997, a large oil and gas company in western Canada asked a group of psychologists at the University of Calgary to monitor workers as they transitioned from a traditional office arrangement to an open one. The psychologists assessed the employees’ satisfaction with their surroundings, as well as their stress level, job performance, and interpersonal relationships before the transition, four weeks after the transition, and, finally, six months afterward. The employees suffered according to every measure: the new space was disruptive, stressful, and cumbersome, and, instead of feeling closer, coworkers felt distant, dissatisfied, and resentful. Productivity fell.

In 2011, the organizational psychologist Matthew Davis reviewed more than a hundred studies about office environments. He found that, though open offices often fostered a symbolic sense of organizational mission, making employees feel like part of a more laid-back, innovative enterprise, they were damaging to the workers’ attention spans, productivity, creative thinking, and satisfaction. Compared with standard offices, employees experienced more uncontrolled interactions, higher levels of stress, and lower levels of concentration and motivation. When David Craig surveyed some thirty-eight thousand workers, he found that interruptions by colleagues were detrimental to productivity, and that the more senior the employee, the worse she fared.

Psychologically, the repercussions of open offices are relatively straightforward. Physical barriers have been closely linked to psychological privacy, and a sense of privacy boosts job performance. Open offices also remove an element of control, which can lead to feelings of helplessness. In a 2005 study that looked at organizations ranging from a Midwest auto supplier to a Southwest telecom firm, researchers found that the ability to control the environment had a significant effect on team cohesion and satisfaction. When workers couldn’t change the way that things looked, adjust the lighting and temperature, or choose how to conduct meetings, spirits plummeted.

An open environment may even have a negative impact on our health. In a recent study of more than twenty-four hundred employees in Denmark, Jan Pejtersen and his colleagues found that as the number of people working in a single room went up, the number of employees who took sick leave increased apace. Workers in two-person offices took an average of fifty per cent more sick leave than those in single offices, while those who worked in fully open offices were out an average of sixty-two per cent more.

But the most problematic aspect of the open office may be physical rather than psychological: simple noise. In laboratory settings, noise has been repeatedly tied to reduced cognitive performance. The psychologist Nick Perham, who studies the effect of sound on how we think, has found that office commotion impairs workers’ ability to recall information, and even to do basic arithmetic. Listening to music to block out the office intrusion doesn’t help: even that, Perham found, impairs our mental acuity. Exposure to noise in an office may also take a toll on the health of employees. In a study by the Cornell University psychologists Gary Evans and Dana Johnson, clerical workers who were exposed to open-office noise for three hours had increased levels of epinephrine—a hormone that we often call adrenaline, associated with the so-called fight-or-flight response. What’s more, Evans and Johnson discovered that people in noisy environments made fewer ergonomic adjustments than they would in private, causing increased physical strain. The subjects subsequently attempted to solve fewer puzzles than they had after working in a quiet environment; in other words, they became less motivated and less creative.

Open offices may seem better suited to younger workers, many of whom have been multitasking for the majority of their short careers. When, in 2012, Heidi Rasila and Peggie Rothe looked at how employees of a Finnish telecommunications company born after 1982 reacted to the negative effects of open-office plans, they noted that young employees found certain types of noises, such as conversations and laughter, just as distracting as their older counterparts did. The younger workers also disparaged their lack of privacy and an inability to control their environment. But they believed that the trade-offs were ultimately worth it, because the open space resulted in a sense of camaraderie; they valued the time spent socializing with coworkers, whom they often saw as friends.

That increased satisfaction, however, may merely mask the fact that younger workers also suffer in open offices. In a 2005 study, the psychologists Alena Maher and Courtney von Hippel found that the better you are at screening out distractions, the more effectively you work in an open office. Unfortunately, it seems that the more frantically you multitask, the worse you become at blocking out distractions. Moreover, according to the Stanford University cognitive neuroscientist Anthony Wagner, heavy multitaskers are not only “more susceptible to interference from irrelevant environmental stimuli” but also worse at switching between unrelated tasks. In other words, if habitual multitaskers are interrupted by a colleague, it takes them longer to settle back into what they were doing. Regardless of age, when we’re exposed to too many inputs at once—a computer screen, music, a colleague’s conversation, the ping of an instant message—our senses become overloaded, and it requires more work to achieve a given result.

Though multitasking millennials seem to be more open to distraction as a workplace norm, the wholehearted embrace of open offices may be ingraining a cycle of underperformance in their generation: they enjoy, build, and proselytize for open offices, but may also suffer the most from them in the long run.

Maria Konnikova is the author of “Mastermind: How to Think Like Sherlock Holmes.”

Photograph: View Pictures/UIG via Getty.

Untitled


Comments:"Untitled"

URL:http://www.scribd.com/vacuum?url=http://conferences.sigcomm.org/co-next/2013/program/p303.pdf


 

Towards a SPDY’ier Mobile Web?

Jeffrey Erman, Vijay Gopalakrishnan, Rittwik Jana, K.K. Ramakrishnan

AT&T Labs – Research, One AT&T Way, Bedminster, NJ 07921
{erman,gvijay,rjana,kkrama}@research.att.com

ABSTRACT

Despite its widespread adoption and popularity, the Hypertext Transfer Protocol (HTTP) suffers from fundamental performance limitations. SPDY, a recently proposed alternative to HTTP, tries to address many of the limitations of HTTP (e.g., multiple connections, setup latency). With cellular networks fast becoming the communication channel of choice, we perform a detailed measurement study to understand the benefits of using SPDY over cellular networks. Through careful measurements conducted over four months, we provide a detailed analysis of the performance of HTTP and SPDY, how they interact with the various layers, and their implications on web design. Our results show that, unlike in wired and 802.11 networks, SPDY does not clearly outperform HTTP over cellular networks. We identify, as the underlying cause, a lack of harmony between how TCP and cellular networks interact. In particular, the performance of most TCP implementations is impacted by their implicit assumption that the network round-trip latency does not change after an idle period, which is typically not the case in cellular networks. This causes spurious retransmissions and degraded throughput for both HTTP and SPDY. We conclude that a viable solution has to account for these unique cross-layer dependencies to achieve improved performance over cellular networks.

Categories and Subject Descriptors

C.2.2 [Computer-Communication Networks]: Network Protocols—Applications; C.4 [Performance of Systems]: Measurement techniques

Keywords

SPDY, Cellular Networks, Mobile Web

1. INTRODUCTION

As the speed and availability of cellular networks grows, they are rapidly becoming the access network of choice. Despite the plethora of 'apps', web access remains one of the most important uses of the mobile internet. It is therefore critical that the performance of the cellular data network be tuned optimally for mobile web access.

The Hypertext Transfer Protocol (HTTP) is the key building block of the web. Its simplicity and widespread support have catapulted it into being adopted as the nearly 'universal' application protocol, such that it is being considered the narrow waist of the future internet [11]. Yet, despite its success, HTTP suffers from fundamental limitations, many of which arise from the use of TCP as its transport layer protocol. It is well established that TCP works best if a session is long lived and/or exchanges a lot of data, because TCP gradually ramps up the load and takes time to adjust to the available network capacity. Since HTTP connections are typically short and exchange small objects, TCP does not have sufficient time to utilize the full network capacity. This is particularly exacerbated in cellular networks, where high latencies (hundreds of milliseconds are not unheard of [18]) and packet loss in the radio access network are common. These are widely known to be factors that impair TCP's performance.

SPDY [7] is a recently proposed protocol aimed at addressing many of the inefficiencies of HTTP. SPDY uses fewer TCP connections by opening one connection per domain; multiple data streams are multiplexed over this single TCP connection for efficiency. SPDY supports multiple outstanding requests from the client over a single connection, and SPDY servers transfer higher-priority resources faster than lower-priority ones. Finally, by using header compression, SPDY reduces the amount of redundant header information sent each time a new page is requested. Experiments show that SPDY reduces page load time by as much as 64% on wired networks, with an estimated improvement of as much as 23% on cellular networks (based on an emulation using Dummynet) [7].

In this paper, we perform a detailed and systematic measurement study on real-world production cellular networks to understand the benefits of using SPDY. Since most websites do not support SPDY – only about 0.9% of all websites use SPDY [15] – we deployed a SPDY proxy that functions as an intermediary between the mobile devices and web servers. We ran detailed field measurements using 20 popular web pages, performed across a four-month span to account for the variability in the production cellular network. Each of the measurements was instrumented and set up to account for and minimize factors that can bias the results (e.g., cellular handoffs).

CoNEXT'13, December 9–12, 2013, Santa Barbara, California, USA. Copyright 2013 ACM 978-1-4503-2101-3/13/12. http://dx.doi.org/10.1145/2535372.2535399


The Year in Kickstarter 2013


Comments:"The Year in Kickstarter 2013"

URL:http://www.kickstarter.com/year/2013


The Year in Kickstarter

In 2013
3 million people pledged
$480 million to Kickstarter projects

That works out to
$1,315,520 pledged a day
or $913 a minute

The 3 million people who backed a project
came from 214 countries and territories
and all seven continents (even Antarctica)

807,733 people backed more than one project
81,090 backed 10 or more projects
and 975 people backed more than 100

19,911 projects were successfully funded in 2013
and thousands more came to life

Pebble arrived

Oculus Rift changed how we play

Ouya powered up

Goldieblox inspired new engineers

An emoji translation of Moby Dick
entered the Library of Congress

A Delorean hovercraft
cruised the San Francisco Bay

A human-powered helicopter took flight

Students built classrooms
from shipping containers

Grandma Pearl made Happy Canes

Photo Credit Rhonda Karsch

A skatepark opened in Philly

A photo exhibit opened on the Berlin Wall

Photo Credit Kai Wiedenhoefer

Kickstarter was on Jeopardy!

Kickstarter was in Cards Against Humanity

Blue Ruin won at Cannes

Inocente won an Oscar

Werner Herzog narrated
a project video about salt

Photo Credit Erinç Salor

Backers saved independent movie theaters

Backers resurrected Veronica Mars

Backers brought rappers to North Korea

Backers rescued a photographer from obscurity

Photo Credit © Vivian Maier/Maloof Collection

Backers launched satellites into space

2013 reminded us
that people are amazing

That ideas are exhilarating

That we’re all capable of
creating incredible things

Thank you for a wonderful year
