Channel: Hacker News 50
Viewing all 9433 articles

AMD launches Kaveri processors, aimed at starting a computing revolution | GamesBeat | Games | by Dean Takahashi


URL:http://venturebeat.com/2014/01/14/amd-launches-kaveri-processors-aimed-at-starting-a-computing-revolution/


Advanced Micro Devices is launching its code-named Kaveri processors today, which represent one of the biggest technical advances that the company has made in some time. Kaveri chips are meant for games and other high-performance applications.

Kaveri has 2.4 billion transistors. (Image: AMD)

The new chips show that AMD is moving in a very different direction from Intel, which at last week’s 2014 International CES put a lot of emphasis on “perceptual computing,” or using gestures and other new kinds of interfaces to control computers. Instead of interfaces, AMD is focusing on powerful graphics capabilities. AMD says Kaveri has 2.4 billion transistors (the basic building blocks of computer electronics), and 47 percent of them are aimed at better, high-end graphics.
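As a quick sanity check on those figures, the share of the die devoted to graphics works out as follows (simple arithmetic on the numbers AMD quotes above):

```python
# Rough arithmetic on AMD's stated Kaveri figures (from the article):
# 2.4 billion transistors total, with 47 percent devoted to graphics.
total_transistors = 2_400_000_000
graphics_share = 0.47

graphics_transistors = total_transistors * graphics_share
print(f"Transistors dedicated to graphics: ~{graphics_transistors / 1e9:.2f} billion")
# → Transistors dedicated to graphics: ~1.13 billion
```

That is more transistors for the GPU side alone than many whole CPUs of the previous decade contained.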

Although the code name is Kaveri, the new chips will officially be called the A-Series Accelerated Processing Units (APUs). Like most AMD processors, they combine both graphics and central processing unit functions on the same chip. AMD’s chips will include up to four CPUs and eight graphics processing units (GPUs) on a single piece of silicon.

Nine out of 10 PCs are now shipping with CPUs and GPUs on the same chip, according to Jon Peddie Research.

The Kaveri line is the first series of chips to use a new approach to computing dubbed the Heterogeneous System Architecture (HSA), which makes it easier to get around bottlenecks inside a PC and speed the whole system up.

Kaveri chips also include Graphics Core Next (GCN), an architecture designed for next-generation games.

AMD claims that its GPUs are much more powerful than Intel’s. For example, AMD said that its A10-7850K chip delivers 24 percent better system performance than the higher-priced Intel Core i5-4670K. It says its graphics performance is 87 percent better than Intel’s, and its compute performance is 63 percent better.

AMD says the new chips will also use Mantle, an applications programming interface that makes it easier for developers to write high-performance games for AMD chips — it’s kind of like AMD’s own version of Microsoft’s DirectX technology. The chips will also have AMD TrueAudio technology, a 32-channel surround audio technology. The A-series chips will support screen resolutions up to 4K, or UltraHD, which puts four times as many pixels on a screen as 1080p high-definition TV.

AMD Kaveri benchmarks. (Image: AMD)

“AMD maintains our technology leadership with the 2014 AMD A-Series APUs, a revolutionary next generation APU that marks a new era of computing,” said Bernd Lienhard, corporate vice president and general manager of the client business unit at AMD. “With world-class graphics and compute technology on a single chip, the AMD A-Series APU is an effective and efficient solution for our customers and enable industry-leading computing experiences.”

HSA is pretty arcane technical material for consumers, but if it takes off, AMD says it will lead to faster and more power-efficient personal computers, tablets, smartphones and cloud servers. It goes hand-in-hand with hUMA, a new way for processors to access the memory inside an Accelerated Processing Unit, or a single chip that combines both a microprocessor and graphics.

The problem is that it isn’t easy for programmers to harness the power of the GPU, or graphics processing unit, inside an APU. The HSA has been designed to fix this problem, making graphics an equal partner with the CPU (central processing unit) and other processors, such as a digital signal processor, inside a computing system.

All of these functions used to be part of separate chips. But now they can be packaged inside the same system-on-chip, or SoC, on the same piece of silicon. The three different kinds of processors access data in different ways, but AMD wants to change and simplify that.

AMD APUs target a lot of applications. (Image: AMD)

GPUs can be used for non-graphics computing tasks, but it often takes too long to route requests for data through a CPU. Most developers don’t want to deal with the difficulty of optimizing their code for this kind of work. But a new technique, dubbed “heterogeneous queuing,” allows applications to communicate directly with the GPU, treating it as an equal partner alongside the CPU when it comes to accessing data quickly. That means an application won’t have to wait for the CPU when what it really needs is to access the GPU.

With HSA and heterogeneous queuing, the GPU doesn’t have to wait for the CPU to feed it data. It can spawn tasks on its own.
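Conceptually, heterogeneous queuing means the CPU and GPU consume work from shared queues, and either side can enqueue follow-up work without a round trip through the other. A toy single-threaded simulation in Python, purely illustrative (all names here are invented; real HSA queues are hardware-level constructs, not Python objects):

```python
from queue import Queue

# Toy model of heterogeneous queuing: "CPU" and "GPU" tasks share one
# queue, and a GPU task may enqueue its own follow-up work instead of
# returning control to the CPU to ask for more.
task_queue = Queue()
results = []

def gpu_task(n):
    results.append(("gpu", n))
    if n > 0:
        # The GPU spawns a follow-up task itself, with no CPU involvement.
        task_queue.put(("gpu", n - 1))

def cpu_task(n):
    results.append(("cpu", n))

# The CPU seeds the queue once...
task_queue.put(("cpu", 0))
task_queue.put(("gpu", 2))

# ...then any processor can drain it.
while not task_queue.empty():
    kind, n = task_queue.get()
    (gpu_task if kind == "gpu" else cpu_task)(n)

print(results)
# → [('cpu', 0), ('gpu', 2), ('gpu', 1), ('gpu', 0)]
```

The point of the sketch: the three "gpu" entries run back to back with no CPU task interleaved, which is the waiting the article says HSA eliminates.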

Nathan Brookwood, an analyst at Insight 64, calls this change the “same kind of conceptual breakthrough that the introduction of the virtual memory wrought in the 1970s,” when engineers figured out a better way to manage memory in a computer.

AMD also said that Mantle will make it easy for developers to access new features in graphics chips. It allows developers to write games “closer to the metal,” getting rid of some of the overhead associated with running a PC and letting them get more access to the hardware’s real firepower.
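The performance argument behind a thin, "closer to the metal" API like Mantle mostly comes down to fixed per-draw-call software overhead. A toy cost model, with invented numbers purely for illustration (none of these figures come from AMD):

```python
# Toy cost model for draw-call overhead: each draw call pays a fixed
# software cost on top of the actual GPU work. A thin API shrinks the
# fixed cost, so scenes with many draw calls benefit the most.
# All numbers below are invented for illustration.

def frame_time_ms(draw_calls, overhead_per_call_ms, work_per_call_ms=0.01):
    return draw_calls * (overhead_per_call_ms + work_per_call_ms)

calls = 10_000  # e.g. thousands of ships in a Star Swarm-style scene
thick_api = frame_time_ms(calls, overhead_per_call_ms=0.05)
thin_api = frame_time_ms(calls, overhead_per_call_ms=0.005)

print(f"thick API: {thick_api:.0f} ms/frame, thin API: {thin_api:.0f} ms/frame")
# → thick API: 600 ms/frame, thin API: 150 ms/frame
```

With few draw calls the two APIs are nearly indistinguishable in this model; the gap opens up only as the draw-call count grows, which is why Mantle demos emphasize scenes with huge object counts.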

In a demo called Star Swarm, Oxide Games created a Mantle-based science fiction scene with a gigantic space battle involving thousands of spaceships.

Patrick Moorhead, analyst at Moor Insights & Strategy, said, “Kaveri is the most interesting chip AMD has launched in years, and I really like what I have seen so far with third-party benchmarks on next generation workloads. It really is the culmination of the seven years since AMD acquired ATI.”

He added, “Kaveri’s market success will be directly proportional to the speed at which [it] can enable more and more software to become HSA-aware, where it can much more easily tap into the gigaflops of the GPU’s performance. AMD now needs either a Google or Microsoft to commit to optimizing their operating system for HSA to seal the deal, as it will make software that much easier to write.”

The new A-Series chips will range in price from $119 to $173, while the power consumption will range from 45 watts to 95 watts. CPU frequency ranges from 3.1 gigahertz to 4.0 gigahertz. The A-Series APUs are available today.

AMD HSA speeds graphics and CPU processing. (Image: AMD)


Value is created by doing - Sam Altman


URL:http://blog.samaltman.com/value-is-created-by-doing


Value is created by doing.

It’s easy to forget this.  A lot of stuff that feels like work—commenting on HN, tweeting, reading about other companies’ funding rounds, grabbing coffee, etc. [1]—is not actually work.  (If you count that as work, think really hard about the value you’re creating in your job.)  These activities can be worthwhile in small doses—it’s important to network and meet interesting people to stay in the flow of ideas—but they are not by themselves how new wealth gets created.

Value gets created when a company does things like build widgets and sell them to customers.  As a rough guideline, it’s good to stay in roles where you’re close to the doing.

Of course you have to do the right things.  Writing software no one wants does not create value—that’s called a class project.  So it’s critical to figure out the right thing to work on, and strategy is far more valuable than a lot of pivot-happy companies would have you believe.  But strategy alone has no value—value gets captured by execution.

It’s easier to sit around and talk about building a startup than it is to actually start a startup.  And it’s fun to talk about.  But over time, the difference between fun and fulfilling becomes clear.  Doing things is really hard—it’s why, for example, you can generally tell people what you’re working on without NDAs, and most patents never matter.  The value, and the difficulty, comes from execution.

There are good tricks for keeping yourself honest here.  When I was running a company, I used to make a list of everything I got done at the end of the day.  It was remarkable how I could feel like I had a really busy day and realize that night I got nothing done.  Similarly, I could have a day that felt only somewhat busy, but accomplish 3 or 4 major things. 

Err on the side of doing too much of the sort of work that matters and blowing off all the rest, or as Machiavelli said:

Make mistakes of ambition and not mistakes of sloth.  Develop the strength to do bold things, not the strength to suffer.

You build what you measure—if you measure your productivity by the number of meetings you have in a day, you will have a lot of meetings.  If you measure yourself by revenue growth or number of investments closed or something like that, you will probably have fewer meetings.

Another example of not-quite-work: every night in San Francisco, there are dinner parties where people get together and talk about the future.  It’s always fun and usually not very contentious—most people agree we need to go to space, for example.  But at the end of it, everyone goes home and works on something else.

If you believe that going to space is the most important project for humanity, then work on it.  If you can’t figure out how to raise hundreds of millions of dollars, go work for SpaceX (joining a great company is a much better plan than starting a mediocre one).  If enterprise software is what you really love, then work on that. [2]

If you’re reading this and feeling unproductive, there’s a silver lining.  You can just close the browser window.  The good news is that it’s easy to course-correct, and it feels great.

[1] I count blogging as a marginal use of time, but the reason I started is because I realized it was important to be good at writing, I was bad at it, and the only way I was going to improve was with lots of practice.  And sometimes I meet really interesting founders because of something I wrote.

[2] This isn’t meant as any sort of relative value judgment; if what you want to do is build an enterprise software company, then you should do that.  The problem comes when what you really want to do is build rockets.  A lot of people feel like they first should do something to make money and then do what they care about (or first work at a company for awhile before starting a company they really want to start).  While you of course should take care of your family before anything else, you should try to work on what you really care about.  You can usually find a way.  The danger is that life is short and you only get to work on a small number of companies over the course of a career—it’s worth trying to make them count.

Payments startup WePay pockets $15M, kills direct-to-consumer offering | VentureBeat | Deals | by Eric Blattberg


URL:http://venturebeat.com/2014/01/16/wepay-15-million-series-c/


WePay has outgrown its roots.

Originally designed to help friends collect money from each other online, the payments startup is shutting down its direct-to-consumer offering. It’s now focused squarely on its payments API, which powers online payments and fraud detection for crowdfunding sites, merchants, and small businesses.

Although WePay is much smaller than the eBay-owned Braintree or Stripe, two competing payments companies, it demonstrated strong growth last year, particularly in the crowdfunding market. In October, the company announced it was processing an average of 648 percent more crowdfunding volume every month compared to 2012 — or up to $1.5 million daily.

And WePay has raised $15 million to continue growing throughout 2014, the company announced Thursday morning.

The Palo Alto-based company will use the new capital to continue hiring: CEO Bill Clerico expects his 50-strong team to grow to 75 by the end of 2014, with a focus on engineering talent. But its primary use will be international expansion.

“The hardest part of going international for us is learning how to underwrite merchants in all these different markets,” WePay CEO Bill Clerico told VentureBeat. “We’re not only facilitating payment platforms, but taking on all the fraud risk.”

Beyond the language barriers, tax and credit checks vary wildly across countries, and “it requires a lot of work” to adapt, noted Clerico. WePay uses social media data to underwrite some of it, but that requires plenty of research, too. While Yelp has good reputation data in the U.S., for example, it’s less reliable in some other countries.

WePay has been thinking about the global market for at least nine months and is on the cusp of making some “exciting announcements” regarding international expansion, said Clerico.

Phil Purcell of Continental Investors led the $15 million funding round in WePay. Before heading Continental, Purcell cofounded the Discover Card and served as CEO of Morgan Stanley. He’ll be joining WePay’s board.

“It’s really rare that you get someone who has deep payments expertise but also has great experience in late stage and public investments,” said Clerico.

Other investors include Max Levchin, former chief technology officer of PayPal; Maynard Webb, former chief operating officer of eBay; and angel investor Raymond Tonsing. All existing institutional investors — Highland Capital Partners, August Capital, and Ignition Partners — also participated in the round, which brings WePay’s total financing to $35 million.

NSA collects millions of text messages daily in 'untargeted' global sweep | World news | theguardian.com


URL:http://www.theguardian.com/world/2014/jan/16/nsa-collects-millions-text-messages-daily-untargeted-global-sweep


The National Security Agency has collected almost 200 million text messages a day from across the globe, using them to extract data including location, contact networks and credit card details, according to top-secret documents.

The untargeted collection and storage of SMS messages – including their contacts – is revealed in a joint investigation between the Guardian and the UK’s Channel 4 News based on material provided by NSA whistleblower Edward Snowden.

The documents also reveal the UK spy agency GCHQ has made use of the NSA database to search the metadata of “untargeted and unwarranted” communications belonging to people in the UK.

The NSA program, codenamed Dishfire, collects “pretty much everything it can”, according to GCHQ documents, rather than merely storing the communications of existing surveillance targets.

The NSA has made extensive use of its vast text message database to extract information on people’s travel plans, contact books, financial transactions and more – including of individuals under no suspicion of illegal activity.

An agency presentation from 2011 – subtitled “SMS Text Messages: A Goldmine to Exploit” – reveals the program collected an average of 194 million text messages a day in April of that year. In addition to storing the messages themselves, a further program known as “Prefer” conducted automated analysis on the untargeted communications.

An NSA presentation from 2011 on the agency's Dishfire program to collect millions of text messages daily. Photograph: Guardian

The Prefer program uses automated text messages such as missed call alerts or texts sent with international roaming charges to extract information, which the agency describes as “content-derived metadata”, and explains that “such gems are not in current metadata stores and would enhance current analytics”.

On average, each day the NSA was able to extract:

• More than 5 million missed-call alerts, for use in contact-chaining analysis (working out someone’s social network from who they contact and when)

• Details of 1.6 million border crossings a day, from network roaming alerts

• More than 110,000 names, from electronic business cards, which also included the ability to extract and save images.

• Over 800,000 financial transactions, either through text-to-text payments or linking credit cards to phone users

The agency was also able to extract geolocation data from more than 76,000 text messages a day, including from “requests by people for route info” and “setting up meetings”. Other travel information was obtained from itinerary texts sent by travel companies, even including cancellations and delays to travel plans.
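Contact-chaining, mentioned above in connection with the missed-call alerts, amounts to treating contact events as edges in a graph and walking outward from a starting number. A minimal sketch of the idea, with made-up data (nothing here reflects real NSA tooling):

```python
from collections import defaultdict

# Minimal contact-chaining sketch: missed-call alerts yield
# (caller, callee) pairs; chaining walks the resulting graph out to a
# fixed number of hops. All identifiers here are made up.
events = [("A", "B"), ("B", "C"), ("C", "D"), ("X", "Y")]

graph = defaultdict(set)
for caller, callee in events:
    graph[caller].add(callee)
    graph[callee].add(caller)  # treat any contact as bidirectional

def chain(start, hops):
    """Return everyone within `hops` contacts of `start`."""
    seen = {start}
    frontier = {start}
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph[node]} - seen
        seen |= frontier
    return seen - {start}

print(sorted(chain("A", 2)))
# → ['B', 'C']
```

Each additional hop can multiply the set of people swept in, which is why even "targeted" chaining from a single number quickly touches large numbers of uninvolved contacts.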

A slide on the Dishfire program describes the 'analytic gems' of collected metadata. Photograph: Guardian

Communications from US phone numbers, the documents suggest, were removed (or “minimized”) from the database – but those of other countries, including the UK, were retained.

The revelation the NSA is collecting and extracting personal information from hundreds of millions of global text messages a day is likely to intensify international pressure on US president Barack Obama, who on Friday is set to give his response to the report of his NSA review panel.

While US attention has focused on whether the NSA’s controversial phone metadata program will be discontinued, the panel also suggested US spy agencies should pay more consideration to the privacy rights of foreigners, and reconsider spying efforts against allied heads of state and diplomats.

In a statement to the Guardian, a spokeswoman for the NSA said any implication that the agency’s collection was “arbitrary and unconstrained is false”. The agency’s capabilities were directed only against “valid foreign intelligence targets” and were subject to stringent legal safeguards, she said.

The ways in which the UK spy agency GCHQ has made use of the NSA Dishfire database also seems likely to raise questions on the scope of its powers.

While GCHQ is not allowed to search through the content of messages without a warrant – though the contents are stored rather than deleted or “minimized” from the database – the agency’s lawyers decided analysts were able to see who UK phone numbers had been texting, and search for them in the database.

The GCHQ memo sets out in clear terms what the agency’s access to Dishfire allows it to do, before addressing how UK communications should be treated. The unique property of Dishfire, it states, is how much untargeted or unselected information it stores.

“In contrast to [most] GCHQ equivalents, DISHFIRE contains a large volume of unselected SMS traffic,” it states (emphasis original). “This makes it particularly useful for the development of new targets, since it is possible to examine the content of messages sent months or even years before the target was known to be of interest.”

It later explains in plain terms how useful this capability can be. Comparing Dishfire favourably to a GCHQ counterpart which only collects against phone numbers that have specifically been targeted, it states “Dishfire collects pretty much everything it can, so you can see SMS from a selector which is not targeted”.

The document also states the database allows for broad, bulk searches of keywords which could result in a high number of hits, rather than just narrow searches against particular phone numbers: “It is also possible to search against the content in bulk (e.g. for a name or home telephone number) if the target’s mobile phone number is not known.”

Analysts are warned to be careful when searching content for terms relating to UK citizens or people currently residing in the UK, as these searches could be successful but would not be legal without a warrant or similar targeting authority.

However, a note from GCHQ’s operational legalities team, dated May 2008, states agents can search Dishfire for “events” data relating to UK numbers – who is contacting who, and when.

“You may run a search of UK numbers in DISHFIRE in order to retrieve only events data,” the note states, before setting out how an analyst can prevent himself seeing the content of messages when he searches – by toggling a single setting on the search tool.

Once this is done, the document continues, “this will now enable you to run a search without displaying the content of the SMS, especially useful for untargeted and unwarranted UK numbers.”

A separate document gives a sense of how large-scale each Dishfire search can be, asking analysts to restrain their searches to no more than 1,800 phone numbers at a time.
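The two operational constraints the documents describe, an events-only toggle that suppresses message content and a cap of 1,800 numbers per search, can be sketched as follows. The interface, field names, and records below are entirely invented for illustration and do not reflect any real tool:

```python
# Illustrative sketch of the two constraints described in the documents:
# a toggle that strips message content (returning only events metadata)
# and a cap of 1,800 phone numbers per query. Everything here is invented.
MAX_NUMBERS_PER_SEARCH = 1_800

def search(database, numbers, events_only=True):
    if len(numbers) > MAX_NUMBERS_PER_SEARCH:
        raise ValueError("restrict searches to at most 1,800 numbers")
    wanted = set(numbers)
    hits = [rec for rec in database if rec["number"] in wanted]
    if events_only:
        # Strip content, keeping only who-contacted-whom-and-when metadata.
        return [{k: v for k, v in rec.items() if k != "content"} for rec in hits]
    return hits

db = [{"number": "+44700", "peer": "+44701", "when": "2011-04-01", "content": "hi"}]
print(search(db, ["+44700"]))
# → [{'number': '+44700', 'peer': '+44701', 'when': '2011-04-01'}]
```

Note how fragile the safeguard is in this shape: the content is stored either way, and lawful behaviour depends entirely on the analyst remembering to leave the toggle set, which is exactly the risk the GCHQ note warns about.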

An NSA slide on the 'Prefer' program reveals the program collected an average of 194 million text messages a day in April 2011. Photograph: Guardian

The note warns analysts they must be careful to make sure they use the form’s toggle before searching, as otherwise the database will return the content of the UK messages – which would, without a warrant, cause the analyst to “unlawfully be seeing the content of the SMS”.

The note also adds that the NSA automatically removes all “US-related SMS” from the database, so it is not available for searching.

A GCHQ spokesman refused to comment on any particular matters, but said all its intelligence activities were in compliance with UK law and oversight.

But Vodafone, one of the world’s largest mobile phone companies with operations in 25 countries including Britain, greeted the latest revelations with shock. 

“It’s the first we’ve heard about it and naturally we’re shocked and surprised,” the group’s privacy officer and head of legal for privacy, security and content standards told Channel 4 News.

“What you’re describing sounds concerning to us because the regime that we are required to comply with is very clear and we will only disclose information to governments where we are legally compelled to do so, won’t go beyond the law and comply with due process.

“But what you’re describing is something that sounds as if that’s been circumvented. And for us as a business this is anathema because our whole business is founded on protecting privacy as a fundamental imperative.”

He said the company would be challenging the UK government over this. “From our perspective, the law is there to protect our customers and it doesn’t sound as if that is what is necessarily happening.”

The NSA’s access to, and storage of, the content of communications of UK citizens may also be contentious in the light of earlier Guardian revelations that the agency was drafting policies to facilitate spying on the citizens of its allies, including the UK and Australia, which would – if enacted – enable the agency to search its databases for UK citizens without informing GCHQ or UK politicians.

The documents seen by the Guardian were from an internal Wikipedia-style guide to the NSA program provided for GCHQ analysts, and noted the Dishfire program was “operational” at the time the site was accessed, in 2012.

The documents do not, however, state whether any rules were subsequently changed, or give estimates of how many UK text messages are collected or stored in the Dishfire system, or from where they are being intercepted.

In the statement, the NSA spokeswoman said: “As we have previously stated, the implication that NSA's collection is arbitrary and unconstrained is false.

“NSA's activities are focused and specifically deployed against – and only against – valid foreign intelligence targets in response to intelligence requirements.

“Dishfire is a system that processes and stores lawfully collected SMS data. Because some SMS data of US persons may at times be incidentally collected in NSA’s lawful foreign intelligence mission, privacy protections for US persons exist across the entire process concerning the use, handling, retention, and dissemination of SMS data in Dishfire.

“In addition, NSA actively works to remove extraneous data, to include that of innocent foreign citizens, as early as possible in the process.”

The agency draws a distinction between the bulk collection of communications and the use of that data to monitor or find specific targets.

A spokesman for GCHQ refused to respond to any specific queries regarding Dishfire, but said the agency complied with UK law and regulators.

“It is a longstanding policy that we do not comment on intelligence matters,” he said. “Furthermore, all of GCHQ's work is carried out in accordance with a strict legal and policy framework which ensures that our activities are authorised, necessary and proportionate, and that there is rigorous oversight, including from the Secretary of State, the Interception and Intelligence Services Commissioners and the Parliamentary Intelligence and Security Committee.”

GCHQ also directed the Guardian towards a statement made to the House of Commons in June 2013 by foreign secretary William Hague, in response to revelations of the agency’s use of the Prism program.

“Any data obtained by us from the US involving UK nationals is subject to proper UK statutory controls and safeguards, including the relevant sections of the Intelligence Services Act, the Human Rights Act and the Regulation of Investigatory Powers Act,” Hague told MPs.

Schneier on Security: Today I Briefed Congress on the NSA


URL:https://www.schneier.com/blog/archives/2014/01/today_i_briefed.html


 


January 16, 2014

Today I Briefed Congress on the NSA

This morning I spent an hour in a closed room with six Members of Congress: Rep. Lofgren, Rep. Sensenbrenner, Rep. Scott, Rep. Goodlatte, Rep. Thompson, and Rep. Amash. No staffers, no public: just them. Lofgren asked me to brief her and a few Representatives on the NSA. She said that the NSA wasn't forthcoming about their activities, and they wanted me -- as someone with access to the Snowden documents -- to explain to them what the NSA was doing. Of course I'm not going to give details on the meeting, except to say that it was candid and interesting. And that it's extremely freaky that Congress has such a difficult time getting information out of the NSA that they have to ask me. I really want oversight to work better in this country.

Surreal part of setting up this meeting: I suggested that we hold this meeting in a SCIF, because they wanted me to talk about top secret documents that had not been made public. The problem is that I, as someone without a clearance, would not be allowed into the SCIF. So we had to have the meeting in a regular room.

EDITED TO ADD: This really was an extraordinary thing.

Tags: accountability, Edward Snowden, laws, NSA, privacy, Schneier news, secrecy, surveillance

Posted on January 16, 2014 at 12:27 PM | 57 Comments


Wonderful that it happened, but saddening that so few Reps were present.

Thank you for doing that. And thank those representatives for seeking out information and listening, instead of buying the admin's continuing lies about the programs.

Thank you!

You probably don't want to answer this question, but I'm curious anyway: How did the folks from congress react, can you already see a goal or path that they intend to follow on this issue?

"Wonderful that it happened, but saddening that so few Reps were present."

Rep. Lofgren purposely kept it small.

I've seen news articles that basically say that Obama isn't planning substantial change to anything the NSA is doing. My comment has been that this is Congress' job. Glad to hear that some of them are trying to do it!

It is VERY disturbing that Congress does not have information about the NSA's activities, but a very good thing that Bruce was there to set them straight.

It's worth pointing out that all of these representatives (assuming you mean Bennie Thompson of Mississippi or Glenn Thompson of Pennsylvania, and Bobby Scott of Virginia or Austin Scott of Georgia—there are three Rep. Thompsons and three Rep. Scotts) are cosponsors of Sensenbrenner's "USA Freedom Act". In other words, they are already on record as supporters of restricting the NSA's surveillance powers. It's good news that these members are interested enough in writing further legislation to listen to you, and particularly that they are interested in restraining the NSA's monitoring of the internet—something existing proposals in Congress and the administration haven't addressed so far to an even remotely adequate degree. But this meeting can't be taken as a sign that Congress as a whole is showing a sufficient interest in restoring constitutional limits on surveillance or conducting effective oversight on intelligence agencies.

re: the SCIF
When I worked in a classified job there was a joke that we shouldn't read Pravda because the Russians might publish something that we didn't have clearance to know.

Translation: Today I informed members of the department of government responsible for total government oversight what the government is doing, as someone with literally no internal knowledge or clearance.

If America is really a democracy, what does it say that the executive branch of government consists of people who have no qualifications to work in government? This isn't anti-Obama either... G.W. Bush could barely do math or speak English and drove a car through a wall of a building while high on cocaine.

Does legal provision exist for creating an empty SCIF? That is, a SCIF designed for people to bring in TS/SCI material in a secure way, share it with others within the SCIF, and then remove the materials in a secure way, so the SCIF is again empty when the meeting is over?

While I'm sure it's physically possible to do this, I suspect a SCIF is defined by the secure information contained within it. Therefore, anyone entering must be cleared for that information. Which leads to situations like yours...

@hjh54gbjhgb

Before you disparage someone as being ignorant or uneducated, perhaps you should know a bit more about them.

Bush attended the Harvard Business School, where he earned a Master of Business Administration. He is the only U.S. President to have earned an M.B.A.

His manner of speech has led many to believe he was uneducated, and he wasn't a stellar student, but I doubt you have an MBA.

I've never heard of a SCIF before now. What is the reasoning behind not allowing someone without clearance into a SCIF?

I can think of two. Perhaps you might learn things about how a SCIF is built that they want to keep secret. Or alternatively, they don't want anything discussed inside the SCIF to come out of the SCIF in the head of someone without clearance.

Of course, explanation two is particularly perverse in this situation, as you are the one bringing in the forbidden knowledge in the first place. And not allowing you in results in the discussion taking place at a Starbucks instead.

@Matt:

Without disagreeing that "this is Congress' job", I'd say that that job is so difficult and complex that the problem won't be fixed soon, if ever, and in the meantime the rumors about the administration's cosmetic reform plans still constitute catastrophic bad news.

The thing is, the policies of the Federal bureaucracy are influenced and set in proportion to the staff and budget of the various Departments and Agencies advocating those policies. This is a problem with respect to privacy and civil rights because almost all the institutions at the table with a say on such matters are the securocracies --- Justice, the various law-enforcement agencies, the intelligence bureaucracy, Defense, the NSA. All of these very powerful institutions advocate non-stop for greater technical and legal surveillance power and for more powerful tools for legal compulsion to extract information and protect secrecy.

Perhaps this is as it should be -- you could hardly expect the CIA to be tasked with making a case for protecting citizen rights. Some other institution at the table with a mandate to protect those rights should be doing that, so that the policies that emerge reflect some kind of balance. The trouble is, there is almost nobody in the Federal government who has both the mandate and the bureaucratic heft to swing that role. That's the reason that the securocracies have been hampered only by technical limitations and whatever fig-leaf of self-policing they deign to institute.

"Almost nobody": there is one major institution within the government that has both the mandate to protect citizen rights and has the power to balance the securocrats: the Office of the President. It is the President's job to make sure that those rights are correctly balanced, that the intelligence and law-enforcement types don't get everything they dream up. The President, as the only elected Constitutional officer in the Executive, is charged with protecting the rights established under the Constitution. That's his job. You can tell he's doing it if you see that the heads of the law-enforcement and intelligence bureaucracies are unhappy about limitations to their powers that they hate and complain about. Which is to say, at the moment, the President is not doing his job in this regard.

You would think that the first law professor elected President since Woodrow Wilson would understand this critical role, the nonfeasance of which is so damaging to our civil society. But in the case of Barack Obama, you would be wrong. The man's inability to conceive of an actionable belief not arrived at by averaging the beliefs of those around him makes him eat out of the palms of the securocrats he should be frustrating, who are the only people he talks to about secrets and spying. It's starting to look as if those people wrote his "reform" plan for him. I suppose that should have been expected, given his craven record in these matters.

"I've never heard of a SCIF before now. What is the reasoning behind not allowing someone without clearance into a SCIF?"

My guess is that because I am untrusted, I might plant a bug in the SCIF. So it's less that I'm not allowed in, and more that if I was allowed in they would have to re-SCIF the place.

"It's worth pointing out that all of these representatives (assuming you mean Bennie Thompson of Mississippi or Glenn Thompson of Pennsylvania, and Bobby Scott of Virginia or Austin Scott of Georgia—there are three Rep. Thompsons and three Rep. Scotts) are cosponsors of Sensenbrenner's "USA Freedom Act". In other words, they are already on record as supporters of restricting the NSA's surveillance powers. "

Bobby Scott and Mike Thompson.

And, yes, they are all people who want to rein in the NSA. I was speaking to allies.

@ Dilbert

You quoted Wikipedia but left off all the important things that came before it:

"Bush attended Yale University... was a cheerleader and a member of the Delta Kappa Epsilon, being elected the fraternity's president during his senior year. Bush also became a member of the Skull and Bones society as a senior... was a rugby union player... characterized himself as an average student. His average during his first three years at Yale was 77... similar average under a nonnumeric rating system in his final year."

So, he was an average student, ran a frat, was in a secretive pro-elitism club, was a cheerleader, and played rough in sports. Each of these traits was clear in his presidency. The good financial, management, and strategic skills of an MBA? Not so much.

@Piper: I can think of a third - he might plant a bug.

@ Bruce

Good idea suggesting a SCIF whether they agreed or not.

Huh? If you have access to classified/stolen documents, you could be charged with several crimes.

I find your comment about access to a SCIF rather interesting. I tried for several years to gain employment with a company that would sponsor me for an SCI clearance, but it never panned out. But the security and intelligence subculture has always interested me. That said, I saw a recent episode of The Good Wife, where the defense attorneys requested a session with the judge in a SCIF, because the prosecutors had earlier had a session in a SCIF with the judge, etc, etc.

Clearly another case of Hollywood going off the rails for poetic license.

Though I'm now even more interested in SCIF rooms. Are there rules and regulations that discuss the storage of sensitive materials in a SCIF? Or rather, the prohibition of keeping sensitive data/documents in a SCIF?
I once found a website that talked about building a SCIF, but it was a little vague. One last question: are there non-government / non-intelligence-community entities that build SCIFs? I'm thinking about corporate counter-espionage. Thoughts?

JG

@Carlo Graziani

I agree. The problem goes deeper though.

Our uninformed, risk-averse public means nobody in government has any incentive to do anything except make the surveillance state bigger, the government more powerful, and more opaque. If a Congressman successfully got a bill passed that removed the TSA security theatre, what happens when there's an attack at an airport? He's finished in politics forever, and so is everyone who voted for it, and possibly his party too. It makes no difference if the attack had anything to do with the changes in the TSA; they would be blamed regardless.

So what happens if the NSA's surveillance capabilities are curtailed? The second an attack happens, irrespective of whether the NSA could have prevented it or not, the public will clamor for the government to "do more" to protect us, not knowing, caring, or understanding what "more" actually consists of. The people who called for the NSA to be curbed will be politically ruined, their parties will lose elections nationwide, and opponents will ram through more pointless, rights-infringing legislation.

We have a situation where common sense and rational thinking in government are punished, and extreme overreaction to events is necessary for political survival. That's before you factor in the primary purpose of every government agency: perpetuating their own existence.

I expect the fallout of the Snowden leaks to be some window-dressing changes made to the NSA to satisfy the 5 second attention span of the public. Nothing will actually change, the public won't care too much, and that will be the end of it.

Re: SCIF

Standard protocol would be to have you escorted at all times. It is not so unusual for someone without an SCI clearance to need access to a SCIF, so the denial does strike me as interesting!

A SCIF is a secured facility, but there can be provisions for sanitizing it for non-cleared people. This happens any time, say, contractors move furniture in. Basically: clean the place up, turn monitors off, and guard the stuff.

I say this as a former US Marine with TS/SCI (and a compartment that's classified) that worked in the SCIF for a while. Sanitizing is a pain in the butt, so they're probably just lazy.

Surely this private meeting will be included in the up-coming movie of the NSA/Snowden affairs. Who will play Bruce S.??

The SCIF paradox is like the weirdness that happened after Wikileaks, for example, the Library of Congress blocking the Wikileaks site because they were obligated to protect classified information, even if that information was available to the public.

I doubt the SCIF denial was due to Bruce being uncleared. My thoughts for the denial is "What's said in the SCIF, stays in the SCIF (or another SCIF)".

By having the meeting in a more open meeting room, the Representatives can brief others who don't possess clearances.

Re: "She said that the NSA wasn't forthcoming about their activities..."

This is an important post because it addresses a commentary repeated a thousand or more times on the net:

"WE KNEW THEY WERE DOING IT ALL ALONG."

Truth: NO, we did not.

The NSA and its many counterparts hide behind a curtain of secrecy and, when cornered, simply lie, deceive, and manipulate.

No, we didn't know and in many respects we still don't know. We can only see the literal tip of the iceberg.

However, when Congress admits they "don't know" right out loud, it is refreshing in a distressing way.

They should know. They are right to ask.

I hope they know now more than they did yesterday.

@Russell Thomas - why Chuck Norris of course.

@Dilbert: I'm just a computer engineer. Sorry I don't apply basic algebra to pseudo-sciences like psychology and applied economics and use semantics to explain why everything fails as a result.

How's that national debt? Have you ever looked at the increases during the Bush and Reagan administrations compared to the evil Clinton and Obama administrations? Math IS a science.

hjh54gbjhgb, you were off topic to begin with and are now going even more off topic. Please have some consideration for people who actually want to discuss the subject of the post, and drop this. (That goes for everyone.)

Bruce, interesting.

I would have thought that they could get up to speed just by having staff read through what you have released on your blog, etc. So, either you were there to do a little hand-holding, so to speak, in making very clear what exactly has been happening (summarizing, etc.), or you were giving them additional information that you have not released to us (the general public) yet. Perhaps you were even summarizing the materials that have yet to be released. So, essentially, that would make you a covert (not-so-covert, due to the missing SCIF) channel from the Snowden materials to (certain members of) Congress. However, since you may well have plans to release, co-release the materials with other Snowden-material recipients, perhaps it's just advancing the cause a bit faster.

At any rate, I for one am happy to have you representing those of us who are, at the least, very concerned with the NSA's role and want to see oversight restored & improved.

Thanks.

@Bruce: It's good that you do this. It seems that the NSA is out of democratic control.

By the way, one question:
Since you have read more of the Snowden documents:
Have you found any links to industrial espionage? I mean, is there anything that indicates the NSA spies on industrial companies and then gives American companies that information?

Why, for example, does the NSA have a European Cryptologic Center in Darmstadt? I mean, sometimes I just believe the NSA guys have been a bit paranoid since 9/11 and are searching for terrorists among each of us. But then, this does not make much sense. Most terrorists are probably not based in Europe. Also, e.g., the embassy of the European Commission is not a place to find terrorists.
We know that the NSA is spying on politicians who might be of economic importance.

But then, who are the clients of the NSA? Are the ones that receive this information really only members of the US government?

Or does the NSA also tell General Motors about the new cars that are currently being designed by BMW, Daimler-Benz, and Volkswagen in Germany?

Such a thing would perhaps make it much easier for the NSA to introduce backdoors into US products.

A company that the NSA promises the newest classified information on its competitors is perhaps more willing to build, e.g., an NSA_KEY into an operating system.

Is there anything in the slides that indicates such behavior by the NSA?

Bringing in an outside expert who is unaffiliated with the organization that Congress may potentially want to limit or restrain avoids an obvious conflict of interest that would be present when asking for a briefing from NSA directors or employees.

The only disturbing thing is the statement "This really was an extraordinary thing." It should be SOP in this kind of review.

I think the SCIFs are administered by the same people who probably aren't all that happy to facilitate a meeting between publicly adverse congressmen and a vocal information security expert with valuable insight on the Snowden information. Bluntly: if there was an easy way to make the meeting happen in an SCIF, it's doubtful that easy way was made available.

Secondarily, my understanding is that SCIFs actively discourage eavesdropping by any technical means. I'm on the fence as to whether an SCIF would or wouldn't be automatically monitored by [acronym] in current practice, but assuming that [acronym] actually desire conversations without record, and SCIFs provide such, then the follow up reason not to make an SCIF available is that someone wanted (and had the ability) to record what was said in the room that eventually was made available.

@Benni

I'm pretty sure NSA is feeding US corporations information some way or another.
They have been doing this before. Back in the day, the NSA helped Boeing stop Airbus from expanding into the Saudi market. They did the same against other European companies, as well as Toyota, etc.

Usually the NSA/CIA et al. play the act of "whistleblowers" when they see a non-US company doing (grey) business on the side and about to land some big order.

So yes - this is big business.
My guess is that it's as much about the hunt for terrorists as it is about gaining market share in the private sector.

So basically you met with around 1.3% of a group of people who can't agree almost 100% of the time?

Thanks for making these efforts, and writing about them. I'm curious why the focus is still on Section 215 and phone metadata, and not on the wider collection of Internet communications, such as the Gmail user data you refer to in your post of Jan 13?

In Tuesday's senate judiciary committee testimony of the President's review group, the wider collection was not referred to once that I noticed (with the exception of a very brief comment by Richard Clarke about 12333).

Do you understand why these programs are not on the radar for reform? Are the members of Congress you talked to aware of these programs?

It's worth mentioning that the security risks to (1) the Snowden materials, but more so to (2) Snowden himself, increase as the amount of materials released increases.

For instance, suppose that Snowden's life is indeed being safeguarded by a "doomsday" cache of documents. As the materials are released, if the government decides that the "worst" (in some sense) materials to which Snowden might have had access have been disclosed, then the gov't might feel that it is okay to make an attempt to secure Snowden. Of course, this might mean diplomatic pressure rather than some kind of covert op (in Moscow, no less). At this point, the proliferation of people involved has escalated to the point that the security of the operation is becoming less tenable -- Greenwald, his partner, his associate in film (I've forgotten her name), some material at the Post, perhaps some with Wikileaks, now Bruce... who else has pieces of the material? If we assume (as we should) that the NSA is continuously monitoring the entire set of all materials released by everyone who has had *some* access, and *further* that it has access (perhaps in real time) to all the unencrypted communiques among these various actors, then the danger is not inconsiderable.

Just a thought.


@ John Gibson

Here is a manual on various aspects of SCIFs, with plenty of references included.

http://www.dtic.mil/whs/directives/corres/pdf/...

@ everyone

I also wonder if they even want someone like Bruce in a SCIF. He's actively soliciting ways to stop NSA snooping on his blog. I've already promoted SCIFs here as a way to do it. Bruce is also notable for finding flaws in government security and for publishing the NSA exploit list. Knowing their mindset, I doubt they want him anywhere near SCIF or COMSEC equipment.

I'm not saying that I think it's the reason for the denial. I just wouldn't be surprised if they got paranoid.

>Or does the NSA also tell general motors about the new cars that are currently designed by BMW, Daimler-Benz, and Volkswagen in germany?

Current evidence on the roads suggests not.

There was a conviction (in absentia) of the US in France for a case where US/UK spying led to a French company losing a radar deal to a US competitor.

Pulitzer Prize-winning journalist Garry Wills argues in "A Necessary Evil" that government oversight is fundamentally incompatible with government efficiency.

The idea is that you can't be efficient if you have to justify your actions at every step.

He notes there's a chance that a large part of the American public may have historically objected to the Manhattan Project on moral grounds, citing public reaction to chemical warfare in World War I. The way the Manhattan Project was run was very efficient -- time was of the essence -- but totally unaccountable.

This is part and parcel of the state secrets privilege -- its purpose is to avoid oversight. The origin of this privilege is judicial, not legislative, and relates to withholding public knowledge of military negligence in the testing of an experimental airplane.

The Germans and the Soviets knew we were building the bomb -- it was just the American public that was kept in the dark. The Cambodians knew we were bombing them, but it was from the American people that these actions were kept secret. Castro knew we were trying to invade Cuba, but it was kept secret from Americans. Terrorists in the Middle East know where we're using drones; it's the American people who are kept in the dark.

Right now, half the voting public wants to cut government spending because they want more efficiency, but they're really just reducing oversight.

Even if Congress had control of the NSA, there's not political will for the costs associated with additional oversight.

The SCIF/clearance issue effectively highlights the paradox at the heart of attempting to codify the foundations of trust: do they REALLY think Bruce is about to bug the SCIF? If he's inclined to do so, he'd be much more likely to lie in the briefing (making it useless) or make up some sort of movie-lot exploit on the spot.

It feels like the U.S. authorities are in the throes of a fit of paranoia.

@Bruce

"I've never heard of a SCIF before now. What is the reasoning behind not allowing someone without clearance into a SCIF?" My guess is that because I am untrusted, I might plant a bug in the SCIF.

It's funny to consider yourself untrusted, because clearly these representatives trust you more than the NSA. Otherwise they might have simply given up. ("Oh, the NSA isn't cooperating... how about lunch?")

Maybe not being in a SCIF made it easier for the NSA to listen in.

@Petter
"I'm pretty sure NSA is feeding US corporations information some way or another."

Well, I also assume this. My question was whether there is proof of this in the Snowden documents.

Up to now, we have only seen some techniques to spy, and we have seen reports of how much data is gathered. We also have some targets, e.g., the European embassy, the German chancellor...

A direct proof that all this is used for industrial espionage would be important.

Not only would it severely weaken the case of politicians telling us that this spying program is all just for the war against terror.

It would also be important for industry to know, e.g., which European companies the NSA is spying on and to which US corporations the NSA has given classified information.

Such a thing has, until now, not been published from the Snowden files. Is there such proof in these files?

Re: "She said that the NSA wasn't forthcoming about their activities..."

Are any of those members of Congress on the intelligence oversight committees, whose members are appointed by their leadership? Gee, maybe this has something to do with the fact that the NSA isn't running over to brief them. Did Dianne Feinstein ask you for a briefing? I don't think she is having any trouble whatsoever getting info from the NSA on their activities.

We want more details of what it is like to deal with some real dumb schmucks who have power but are clueless enough not to know how to use it properly for the task they have been chosen to perform.

Oversight has little concentrated power to effect beneficial change if there is in fact no true plan for what public data use represents in terms of evolutionary (graceful) steps that aid the representative ecologic balance concerning humanity.

It also strikes me, considering the committees and leadership represented in Bruce's briefing, that the SCIF denial may be a political jab at the IC. An "if you don't tell the oversight committees in private, we'll make it 'public'" bit, or along those lines.

@ Benni

Well, that's an interesting thought.
As we cannot know how many documents Snowden came across and got his hands on, it's difficult to know how much of the total he has actually released yet, and how much Mr. Greenwald is sitting on.

I cannot see that it is in anyone's interest to tell the truth about the exact amount.

If those House members have clearance - and we know they must - then they have just willingly committed several security violations by a) discussing information they are not cleared for, b) discussing it with somebody who is not cleared, and c) opening it in an insecure space. Just because it's been leaked does not mean it's UNCLAS. They would be in deep trouble if they were not immune from the rules.

Of course they are "immune" from even knowing about this information anyway, so there is that.

And Bruce would be in deep kimchi too. If the rules were being followed. But IANAL.

@John Gibson: There is a group of contractors in the DC area that specialize in building SCIFs. There are others scattered around the country as well.

Bruce, do you anticipate possibly having any similar meetings with any members of the Senate? I'm thinking specifically of Ron Wyden (OR) and/or Mark Udall (CO), who have been aggressive about uncovering more of the NSA's actions.

Now I'm REALLY looking forward to the track at Shmoo, Bruce.

@Jason Richardson-White "I would have thought that they could get up to speed just by having staff read through what you have released on your blog, etc"

This assumes that they have staff who have the necessary training and background knowledge to analyse this.

I would assume that any congressperson has a limited budget for staff, and that budget would be taken up hiring people to carry out the congressperson's normal business. The Snowden leaks are atypical.

Asking the NSA or CIA for analysis in these circumstances would be counterproductive.

Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.

 

Nimrod: A New Approach to Metaprogramming

trevorlinton/webkit.js · GitHub


Comments:"trevorlinton/webkit.js · GitHub"

URL:https://github.com/trevorlinton/webkit.js


webkit.js

An experimental port of WebKit (Specifically, webcore) to JavaScript aimed at running in both node.js and browsers. This is currently non-functional and is intended for developers and contributors.

Goals

  • Produce a renderer in pure JavaScript that supports rendering to WebGL/Canvas contexts.
  • Develop frameworks for image/webpage capturing and automated web testing (similar to PhantomJS).
  • Develop a framework for prototyping CSS filters, HTML elements and attributes.
  • Experimental harness for pure javascript rendering performance.
  • Develop a javascript based browser in nodejs (just for fun).

Status

Toolchain

• (BAD) Currently compiles only with Emscripten 1.8.2 on Mac OS X, in Xcode.

Building

• (GOOD) The latest nightly WebKit sources currently build.

Linking

• (HORRIFIC) There are plenty of linking issues to be addressed; however, visibility into this is minimal. Currently the system links, but has quite a few unresolved symbols that will cause the renderer to crash.

Tests

• (NON EXISTENT) Tests for the JavaScript<->C++ boundary need to be developed (non-layout-related code)
  • (BAD) Unit tests that integrate with WebKit's LayoutTests

Supported Features

  • (GOOD) Features that are not currently supported:

    • Accelerated 2D Canvas
    • CSS Image Resolution
    • CSS Image Orientation
    • CSS3 Text
    • Draggable Regions
    • Encrypted Media
    • Font Load Events
    • Input Speech
    • Audio
    • Video
    • Media Streams
    • JavaScript Debugger / Inspector
    • MHTML
    • PDF/PDFKit
• Quotas
    • Web Workers
    • Web Sockets
    • Shadow DOM
    • Web Timing
    • XSLT
    • Native Widgets (IFRAME, Buttons, Text Boxes, etc.)
    • Any Resource Loaders
    • Font Rendering

Frameworks

• (NON EXISTENT) Frameworks to easily perform common tasks with the renderer under various contexts.

Documentation

• (NON EXISTENT) Lots and lots of documentation; currently, this is it.

Building

Requirements

  • MacOS X Lion with Xcode 4+
  • Emscripten 1.8.2+
  • At least 16GB of free space (seriously)
  • At least 4GB of RAM

Building webkit.js step-by-step

  • Clone the repo.
  • Go to WebKitJS/tools and in a terminal run:

    cp -pa ./WebKitJS/tools/EmscriptenXcode.xcplugin /Applications/Xcode.app/Contents/PlugIns/EmscriptenXcode.plugin
    vim /Applications/Xcode.app/Contents/PlugIns/EmscriptenXcode.xcplugin/Contents/Resources/GCC\ 4.5.xcspec
  • Modify line 35 and set ExecPath to the path of em++ included with Emscripten.
  • Then run:

    vim WebKit/Source/WebCore/Configurations/Base.xcconfig
  • Replace "EMSCRIPTEN_SYSTEM" with the path to the system headers for emscripten.
  • Open WebKit.xcworkspace in Xcode
  • Change the Scheme to "WebcoreJS" and make sure you build in Release (on some systems, Normal), not Debug. Hit Play and get some coffee or go do some chores.
  • Your linking will most likely fail. If so, just run the terminal command:

    em++ -r -isysroot /Path/To/Emscripten/1.8.2/system /Path/To/Your/WebKitBuild/*.o -std=c++11 -s FULL_ES2=1 -O2 -stdlib=libc++ -s LINKABLE=1 -o /Path/To/Your/WebKitBuild/webkit.js
  • You can also create individual js files for each class in WebKit to ease debugging. Run:

    find . -name "*.o"| xargs -I {} em++ -isysroot /Path/To/Your/Emscripten/1.8.2/system {} -std=c++11 -s FULL_ES2=1 -O2 -stdlib=libc++ -s LINKABLE=1 -s SIDE_MODULE=1 -o /Some/Folder/For/Output/{}.js

Contributing

There's so much to be done that any help is appreciated. At the moment I have a brutal wrapper/harness that can render DIVs with various colors to a WebGL surface in Chrome 32. It's not impressive, but it's a proof of concept.

Getting around the code

  • /WebKit/ This is a modified version of Apple's official repo.
  • /WebKit/Source/WebCore This is where 90% of the work is done.
  • /WebKit/Source/WebCore/bindings/scripts This is an important folder where WebKit autogenerates bindings
  • /WebKit/Source/WTF/ is a cross-platform library for common tasks such as HashMaps, etc.
  • /WebKit/Source/WTF/PlatformJS.h these are C++ preprocessor settings for PLATFORM(JS)
  • /WebKit/Source/WebCore/Configurations/ these are important compile-time configurations for WebCore

  • /WebKitJS/ When compiled the output of WebKit is placed in WebKitJS/webkit.js

  • /WebKitJS/tools/ Any helpful tools I've come across (only one for now...)
  • /WebKitJS/tools/cppfilter.js This demangles C++ symbols contained in emscripten.js
  • /WebKitJS/tools/EmscriptenXcode.plugin This is an Xcode plugin, and a temporary hack for using Emscripten

It's important to know

  • The code within the WebKit folder is pulled from upstream; be careful not to move files, remove files, or heavily refactor any source file, as it will cause headaches when merging.

  • Place a .gitignore on ./WebKit/WebKitBuild/ so you do not accidentally commit build files.

  • Enabling/disabling settings within the Configurations folder in WebCore will have a lot of consequences; most of the disabled features are disabled because there's no possible workaround for including the platform-specific code (it needs to be created in JS from scratch)
  • A good amount of the bindings and code within WebCore is auto-generated from scripts; when you have build errors, be careful to make sure you're not modifying a "Derived Source", otherwise your changes will just be overwritten the next time the derived-sources step runs.
  • The current webkit.js in the WebKitJS folder was compiled with its symbols exposed, which results in a heavy file size (currently 44MB as of 1/15/14). This can come down considerably once a framework/API is developed, tests are created, and optimizations become a priority.
  • Do not modify code within ./WebKitJS/webkit.js or ./WebKitJS/debug/; these files are over-written when WebKit is built, so it's somewhat pointless unless you're testing.

What's Desperately Needed

  • A build toolchain similar to GYP/gconfig. QtWebkit has one already, possibly re-map that.
  • Create "Debug" and "Release" modes that allow for easier debugging. In addition creating anything that helps debug and spot problems easier.
  • Scripts to auto-generate code with Emscripten JS Bindings (e.g., IDL generation, and some other bindings/scripts tasks)
  • Integration of WTF library into WebCore
  • Closer examination of optimization/best practices/guidance on Emscripten.
  • Closer examination of optimization/best practices/guidance on compiling WebCore/renderer.
  • Removal of "oddity" code (e.g., no mans land code, existing dead code, platform specific code)
  • Start smaller with GYP and only develop a one-pass layout system from the CSS/HTML/DOM code with minimal features, and build up.
  • Take each file one by one in ./webkitjs/debug/ and port?...
  • Conversation, topics, discussions on best practices, methods and use cases
  • Dependency and/or symbol graph that rebuilds automatically after a compile (expressed as a HTML doc?) The core reason for this is to visualize dependencies between classes, unresolved symbols still to be developed, and spot key integration points. This can be done by regex over the existing ./webkitjs/debug for symbols and building a D3 graph to show the symbols dependency possibly? Is there already key software that does this? Can emscripten/llvm spit this out?
  • Identify what key import symbols may require significant retooling.
  • Integrate libxml.js (rather than depending on browser pass through decoding to a buffer)
  • Integrate libxslt.js (currently unsupported)
  • Integrate ffmpeg.js? Pure JavaScript video support?
  • Integrate libpng.js (rather than depending on browser pass through decoding to a buffer)
  • Integrate libjpeg-turbo.js/libjpeg.js (rather than depending on browser pass through decoding to a buffer)
  • Integrate zlib (rather than depending on browser pass through decoding to a buffer)
  • Use embind/cppfilter.js to automatically generate all the WebCore C++ interfaces (derived from WebCore.exp) directly into javascript, then simply reuse existing webcore demos/examples.

License

BSD License (see http://www.webkit.org/coding/bsd-license.html) Use of this source code is governed by a BSD-style license that can be found in the LICENSE files contained in the root of the respective source code.

© True Interactions 2014

[trevor linton]:mailto:trevor dot linton plus github at gmail dot com

Adventures in Wearable Electronics - Making a Light-up Dress | OfBrooklyn.com


Comments:"Adventures in Wearable Electronics - Making a Light-up Dress | OfBrooklyn.com"

URL:http://www.ofbrooklyn.com/2014/01/15/adventures-in-wearable-electronics-light-up-dress/


For New Year's Eve 2014, my girlfriend and I went to a dance party where wearable electronics were not only encouraged but also on display from a variety of hobbyists. I decided to use this as an opportunity to combine two of my favorite hobbies: sewing and electronics.

It is my goal to encourage more people to weave wearable electronics into their own clothing. It's the future and we might as well look the part. Plus it's easy to get started and to modify existing code.

The full source code for this dress is available on GitHub.

Hardware

I attached six addressable LED strands from Sparkfun ($20 each) to the lining of Brittany's dress, and used a Flora module from Adafruit ($25) to control them. Power came from a 3 x AAA battery holder from Adafruit ($2).

Setup

I used Adafruit's NeoPixel library to control the LEDs. There were 60 LEDs per 1-meter strand. We only needed 40 of them, but instead of cutting the strands, we simply sewed the unused third underneath the strand and capped the count at 40 LEDs in software. This way we can repurpose the LED strands when we decide to move them to a new dress.

In order to make the connections between the LED strands and the Flora module, I used 30 AWG wire, which is extremely thin and light. The gauge is 0.01" and it is as fine as thread, which allowed me to sew the wire into the fabric. I could have used conductive thread, but this wire wrap has a sheath that prevents it from shorting other wires when they touch. It's also extremely lightweight, so having 18 wires (3 per LED strand: power, ground, data) looping around the dress wasn't an issue.

I also want to mention that the code below is hard-coded for six strands. There is a fine line between a hack and a project, and this one, given my limited time budget, was closer to hack than reusable project. You can easily abstract the code below to handle more or fewer strands, but I was able to ship before midnight on NYE, so I'm considering it a success.

void setup() {
 // To ensure that the first random color in `loop` is never the same.
 randomSeed(analogRead(0));
 led_a.begin();
 led_b.begin();
 led_c.begin();
 led_d.begin();
 led_e.begin();
 led_f.begin();
 clearLEDs(true);
}
void loop() {
 for (int i=0; i<2; i++) {
 hulahoop(randomColor(), random(20,60));
 }
 sparkle(randomColor(), random(30,70));
 raindrops(randomColor(), random(20,60));
 spiral(randomColor(), random(15,30));
}

Above we have the code for setup and loop, the two main Arduino routines. Notice that I am repeating the hula hoop routine, since it's pretty quick and looks good on repeat.

I also want to note that every single routine gets its own random color and random delay. This bit of randomness is something I weave into all of my wearable electronics, since it goes just a bit further than off-the-shelf components and shows that there was some intelligence behind the routine.

By giving a random delay to each routine I am actually changing the speed of the routine. Sometimes the raindrops would fall quickly, sometimes the hula hoop would have a slow fall and rise. It's all part of making mesmerizing patterns.

Hula hoop

For the hula hoop routine, I light a cylon at the same position on every strand and move it up and down. A cylon is a fancy way of saying that the LEDs immediately above and below the active LED are also on, but at a reduced brightness.

In this case the cylon is 5 LEDs wide: 100% brightness for the middle/active LED, roughly 1/18th brightness for the two adjacent LEDs, and 1/36th brightness for the two LEDs beyond those.

So what you see to the left are actually 5 LEDs turned on per strand, although they get quite dim as you move away from the active LED. You can adjust the dimness of the adjacent LEDs via the weight parameter.

void hulahoop(unsigned long color, byte wait) {
 // weight determines how much lighter the outer "eye" colors are
 const byte weight = 18; 
 // It'll be easier to decrement each of these colors individually
 // so we'll split them out of the 24-bit color value
 byte red = (color & 0xFF0000) >> 16;
 byte green = (color & 0x00FF00) >> 8;
 byte blue = (color & 0x0000FF);
 // Start at closest LED, and move to the outside
 for (int i=0; i<=LED_COUNT-1; i++) {
 clearLEDs(false);
 led_a.setPixelColor(i, red, green, blue);
 led_b.setPixelColor(i, red, green, blue);
 led_c.setPixelColor(i, red, green, blue);
 led_d.setPixelColor(i, red, green, blue);
 led_e.setPixelColor(i, red, green, blue);
 led_f.setPixelColor(i, red, green, blue);
 // Now set two eyes to each side to get progressively dimmer
 for (int j=1; j<3; j++) {
 byte redWJ = red/(weight*j);
 byte greenWJ = green/(weight*j);
 byte blueWJ = blue/(weight*j);
 if (i-j >= 0) {
 led_a.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_b.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_c.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_d.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_e.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_f.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 }
 if (i+j <= LED_COUNT-1) {
 led_a.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_b.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_c.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_d.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_e.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_f.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 }
 }
 led_a.show();
 led_b.show();
 led_c.show();
 led_d.show();
 led_e.show();
 led_f.show();
 delay(wait);
 }
 // Now we go back to where we came. Do the same thing.
 for (int i=LED_COUNT-2; i>=1; i--) {
 clearLEDs(false);
 led_a.setPixelColor(i, red, green, blue);
 led_b.setPixelColor(i, red, green, blue);
 led_c.setPixelColor(i, red, green, blue);
 led_d.setPixelColor(i, red, green, blue);
 led_e.setPixelColor(i, red, green, blue);
 led_f.setPixelColor(i, red, green, blue);
 // Now set two eyes to each side to get progressively dimmer
 for (int j=1; j<3; j++)
 {
 byte redWJ = red/(weight*j);
 byte greenWJ = green/(weight*j);
 byte blueWJ = blue/(weight*j);
 if (i-j >= 0) {
 led_a.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_b.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_c.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_d.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_e.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 led_f.setPixelColor(i-j, redWJ, greenWJ, blueWJ);
 }
 if (i+j <= LED_COUNT-1) {
 led_a.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_b.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_c.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_d.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_e.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 led_f.setPixelColor(i+j, redWJ, greenWJ, blueWJ);
 }
 }
 led_a.show();
 led_b.show();
 led_c.show();
 led_d.show();
 led_e.show();
 led_f.show();
 delay(wait);
 }
}

Sparkles

This is by far the easiest routine to program yet the one that brought the most attention. It simply flashes a random LED across all of the LED strips.

void sparkle(unsigned long color, uint8_t wait) {
 for (int i=0; i < LED_COUNT * STRIP_COUNT; i++) {
 clearLEDs(true);
 int strip = floor(random(STRIP_COUNT));
 int led = floor(random(LED_COUNT));
 switch (strip) {
 case 0:
 led_a.setPixelColor(led, color);
 led_a.show();
 break;
 case 1:
 led_b.setPixelColor(led, color);
 led_b.show();
 break;
 case 2:
 led_c.setPixelColor(led, color);
 led_c.show();
 break;
 case 3:
 led_d.setPixelColor(led, color);
 led_d.show();
 break;
 case 4:
 led_e.setPixelColor(led, color);
 led_e.show();
 break;
 case 5:
 led_f.setPixelColor(led, color);
 led_f.show();
 break;
 }
 delay(wait);
 }
}

Raindrops

This is by far the most difficult of the routines. This one sends down cylons (5 LEDs, with the center LED being the brightest and the adjacent LEDs becoming decreasingly bright, so as to give the impression of smoother animation).

But notice that the next raindrop starts when the previous raindrop is half-way down the dress. To make this seamless, at the beginning of the routine I actually start another LED at the half-way mark, referred to in the code below as e_alt. This means there is no point at which the dress looks spent, waiting for the routine to start over.

void raindrops(unsigned long color, byte wait) {
 // weight determines how much lighter the outer "eye" colors are
 const byte weight = 18; 
 // It'll be easier to decrement each of these colors individually
 // so we'll split them out of the 24-bit color value
 byte red = (color & 0xFF0000) >> 16;
 byte green = (color & 0x00FF00) >> 8;
 byte blue = (color & 0x0000FF);
 double sludge = 0.5;
 double a_offset = 0;
 double b_offset = 3;
 double c_offset = 1;
 double d_offset = 2;
 double e_offset = 4;
 double f_offset = 5;
 // Start at closest LED, and move to the outside
 for (int i=0; i<LED_COUNT*(STRIP_COUNT-1)*sludge+LED_COUNT*10; i++) {
 clearLEDs(false);
 double n = i % (int)(LED_COUNT*(STRIP_COUNT+1)*sludge-LED_COUNT*sludge);
 double led_count = (double)LED_COUNT;
 bool a_on = (sludge*a_offset*led_count) <= n && 
 n <= (sludge*a_offset*led_count+led_count);
 bool b_on = (sludge*b_offset*led_count) <= n && 
 n <= (sludge*b_offset*led_count+led_count);
 bool c_on = (sludge*c_offset*led_count) <= n && 
 n <= (sludge*c_offset*led_count+led_count);
 bool d_on = (sludge*d_offset*led_count) <= n && 
 n <= (sludge*d_offset*led_count+led_count);
 bool e_on = (sludge*e_offset*led_count) <= n && 
 n <= (sludge*e_offset*led_count+led_count);
 bool e_alt= (sludge*a_offset*led_count) <= n && 
 n <= (sludge*a_offset*led_count+led_count*sludge);
 bool f_on = (sludge*f_offset*led_count) <= n && 
 n <= (sludge*f_offset*led_count+led_count);
 if (!a_on && !b_on && !c_on && !d_on && !e_on && !f_on) {
 clearLEDs(true);
 break;
 }
 int a = n-a_offset*LED_COUNT*sludge;
 int b = n-b_offset*LED_COUNT*sludge;
 int c = n-c_offset*LED_COUNT*sludge;
 int d = n-d_offset*LED_COUNT*sludge;
 int e = n-e_offset*LED_COUNT*sludge;
 if (e_alt) {
 e = a+(LED_COUNT/2);
 }
 int f = n-f_offset*LED_COUNT*sludge;
 if (a_on) led_a.setPixelColor(a, red, green, blue);
 if (b_on) led_b.setPixelColor(b, red, green, blue);
 if (c_on) led_c.setPixelColor(c, red, green, blue);
 if (d_on) led_d.setPixelColor(d, red, green, blue);
 if (e_on || e_alt) led_e.setPixelColor(e, red, green, blue);
 if (f_on) led_f.setPixelColor(f, red, green, blue);
 // Now set two eyes to each side to get progressively dimmer
 for (int j=1; j<3; j++) {
 byte redWJ = red/(weight*j);
 byte greenWJ = green/(weight*j);
 byte blueWJ = blue/(weight*j);
 if (a-j >= 0 && a_on) 
 led_a.setPixelColor(a-j, redWJ, greenWJ, blueWJ);
 if (b-j >= 0 && b_on) 
 led_b.setPixelColor(b-j, redWJ, greenWJ, blueWJ);
 if (c-j >= 0 && c_on) 
 led_c.setPixelColor(c-j, redWJ, greenWJ, blueWJ);
 if (d-j >= 0 && d_on) 
 led_d.setPixelColor(d-j, redWJ, greenWJ, blueWJ);
 if (e-j >= 0 && e_on) 
 led_e.setPixelColor(e-j, redWJ, greenWJ, blueWJ);
 if (f-j >= 0 && f_on) 
 led_f.setPixelColor(f-j, redWJ, greenWJ, blueWJ);
 if (a+j <= LED_COUNT-1 && a_on) 
 led_a.setPixelColor(a+j, redWJ, greenWJ, blueWJ);
 if (b+j <= LED_COUNT-1 && b_on) 
 led_b.setPixelColor(b+j, redWJ, greenWJ, blueWJ);
 if (c+j <= LED_COUNT-1 && c_on) 
 led_c.setPixelColor(c+j, redWJ, greenWJ, blueWJ);
 if (d+j <= LED_COUNT-1 && d_on) 
 led_d.setPixelColor(d+j, redWJ, greenWJ, blueWJ);
 if (e+j <= LED_COUNT-1 && e_on) 
 led_e.setPixelColor(e+j, redWJ, greenWJ, blueWJ);
 if (f+j <= LED_COUNT-1 && f_on) 
 led_f.setPixelColor(f+j, redWJ, greenWJ, blueWJ);
 }
 led_a.show();
 led_b.show();
 led_c.show();
 led_d.show();
 led_e.show();
 led_f.show();
 delay(wait);
 }
}

Spiral

This routine is better in theory than in practice. The idea is to have a quick succession of LEDs light up in a spiral pattern. This would work a bit better with more LED strands wrapped around the dress.

In this case, I'm simply looping through the strands and lighting up successive LEDs. But the nice thing about this routine is that it has a dramatic crescendo to the top, at which point the hula hoop routine begins and it looks like the action started at the bottom and quickly worked its way up to the top, only to smoothly fall back down. It's a mesmerizing effect.

void spiral(unsigned long color, byte wait) {
 const byte weight = 18; 
 byte red = (color & 0xFF0000) >> 16;
 byte green = (color & 0x00FF00) >> 8;
 byte blue = (color & 0x0000FF);
 for (int level=LED_COUNT-1; level >= 0; level--) {
 for (int strip=0; strip < STRIP_COUNT; strip++) {
 clearLEDs(false);
 switch (strip) {
 case 0:
 led_f.setPixelColor(level, red/weight, green/weight, blue/weight);
 led_a.setPixelColor(level, color);
 led_b.setPixelColor(level, red/weight, green/weight, blue/weight);
 break;
 case 1:
 led_a.setPixelColor(level, red/weight, green/weight, blue/weight);
 led_b.setPixelColor(level, color);
 led_c.setPixelColor(level, red/weight, green/weight, blue/weight);
 break;
 case 2:
 led_b.setPixelColor(level, red/weight, green/weight, blue/weight);
 led_c.setPixelColor(level, color);
 led_d.setPixelColor(level, red/weight, green/weight, blue/weight);
 break;
 case 3:
 led_c.setPixelColor(level, red/weight, green/weight, blue/weight);
 led_d.setPixelColor(level, color);
 led_e.setPixelColor(level, red/weight, green/weight, blue/weight);
 break;
 case 4:
 led_d.setPixelColor(level, red/weight, green/weight, blue/weight);
 led_e.setPixelColor(level, color);
 led_f.setPixelColor(level, red/weight, green/weight, blue/weight);
 break;
 case 5:
 led_e.setPixelColor(level, red/weight, green/weight, blue/weight);
 led_f.setPixelColor(level, color);
 led_a.setPixelColor(level, red/weight, green/weight, blue/weight);
 break;
 }
 led_a.show();
 led_b.show();
 led_c.show();
 led_d.show();
 led_e.show();
 led_f.show();
 delay(wait);
 }
 } 
}

Random Colors

Since randomness is the spice that gives this dress its mesmerizing qualities, it's important to have a good list of colors to use.

unsigned long COLORS[] = {
 NAVY, DARKBLUE, MEDIUMBLUE, BLUE, DARKGREEN, GREEN, TEAL, DARKCYAN, 
 DEEPSKYBLUE, DARKTURQUOISE, MEDIUMSPRINGGREEN, LIME, SPRINGGREEN, 
 AQUA, CYAN, MIDNIGHTBLUE, DODGERBLUE, LIGHTSEAGREEN, FORESTGREEN, 
 SEAGREEN, DARKSLATEGRAY, LIMEGREEN, MEDIUMSEAGREEN, TURQUOISE, 
 ROYALBLUE, STEELBLUE, DARKSLATEBLUE, MEDIUMTURQUOISE, INDIGO, 
 DARKOLIVEGREEN, CADETBLUE, CORNFLOWERBLUE, MEDIUMAQUAMARINE, DIMGRAY, 
 SLATEBLUE, OLIVEDRAB, SLATEGRAY, LIGHTSLATEGRAY, MEDIUMSLATEBLUE, 
 LAWNGREEN, CHARTREUSE, AQUAMARINE, MAROON, PURPLE, OLIVE, GRAY, 
 SKYBLUE, LIGHTSKYBLUE, BLUEVIOLET, DARKRED, DARKMAGENTA, SADDLEBROWN, 
 DARKSEAGREEN, LIGHTGREEN, MEDIUMPURPLE, DARKVIOLET, PALEGREEN, 
 DARKORCHID, YELLOWGREEN, SIENNA, BROWN, DARKGRAY, LIGHTBLUE, 
 GREENYELLOW, PALETURQUOISE, LIGHTSTEELBLUE, POWDERBLUE, FIREBRICK, 
 DARKGOLDENROD, MEDIUMORCHID, ROSYBROWN, DARKKHAKI, SILVER, 
 MEDIUMVIOLETRED, INDIANRED, PERU, CHOCOLATE, TAN, LIGHTGRAY, 
 THISTLE, ORCHID, GOLDENROD, PALEVIOLETRED, CRIMSON, GAINSBORO, PLUM, 
 BURLYWOOD, LIGHTCYAN, LAVENDER, DARKSALMON, VIOLET, PALEGOLDENROD, 
 LIGHTCORAL, KHAKI, ALICEBLUE, HONEYDEW, AZURE, SANDYBROWN, WHEAT, 
 BEIGE, WHITESMOKE, MINTCREAM, GHOSTWHITE, SALMON, ANTIQUEWHITE, 
 LINEN, LIGHTGOLDENRODYELLOW, OLDLACE, RED, FUCHSIA, MAGENTA, 
 DEEPPINK, ORANGERED, TOMATO, HOTPINK, CORAL, DARKORANGE, LIGHTSALMON, 
 ORANGE, LIGHTPINK, PINK, GOLD, PEACHPUFF, NAVAJOWHITE, MOCCASIN, 
 BISQUE, MISTYROSE, BLANCHEDALMOND, PAPAYAWHIP, LAVENDERBLUSH, SEASHELL, 
 CORNSILK, LEMONCHIFFON, FLORALWHITE, SNOW, YELLOW, LIGHTYELLOW, IVORY
};
unsigned long randomColor() {
 return COLORS[random(sizeof(COLORS)/sizeof(unsigned long))];
}

Dancing to 2014

And what good is a beautiful light-up dress without a well-lit dance partner? Here I am decked out in blue EL wire. I spent the better part of three hours sewing the wire into the edges of one of my jackets. While hand-sewing can take half a day, the result is a great fit and a capable match for Brittany's dress.

The full source code for this dress is available on GitHub.


Technology and jobs: Coming to an office near you | The Economist


Comments:"Technology and jobs: Coming to an office near you | The Economist"

URL:http://www.economist.com/news/leaders/21594298-effect-todays-technology-tomorrows-jobs-will-be-immenseand-no-country-ready


INNOVATION, the elixir of progress, has always cost people their jobs. In the Industrial Revolution artisan weavers were swept aside by the mechanical loom. Over the past 30 years the digital revolution has displaced many of the mid-skill jobs that underpinned 20th-century middle-class life. Typists, ticket agents, bank tellers and many production-line jobs have been dispensed with, just as the weavers were.

For those, including this newspaper, who believe that technological progress has made the world a better place, such churn is a natural part of rising prosperity. Although innovation kills some jobs, it creates new and better ones, as a more productive society becomes richer and its wealthier inhabitants demand more goods and services. A hundred years ago one in three American workers was employed on a farm. Today less than 2% of them produce far more food. The millions freed from the land were not consigned to joblessness, but found better-paid work as the economy grew more sophisticated. Today the pool of secretaries has shrunk, but there are ever more computer programmers and web designers.

Remember Ironbridge

Optimism remains the right starting-point, but for workers the dislocating effects of technology may make themselves evident faster than its benefits (see article). Even if new jobs and wonderful products emerge, in the short term income gaps will widen, causing huge social dislocation and perhaps even changing politics. Technology’s impact will feel like a tornado, hitting the rich world first, but eventually sweeping through poorer countries too. No government is prepared for it.

Why be worried? It is partly just a matter of history repeating itself. In the early part of the Industrial Revolution the rewards of increasing productivity went disproportionately to capital; later on, labour reaped most of the benefits. The pattern today is similar. The prosperity unleashed by the digital revolution has gone overwhelmingly to the owners of capital and the highest-skilled workers. Over the past three decades, labour’s share of output has shrunk globally from 64% to 59%. Meanwhile, the share of income going to the top 1% in America has risen from around 9% in the 1970s to 22% today. Unemployment is at alarming levels in much of the rich world, and not just for cyclical reasons. In 2000, 65% of working-age Americans were in work; since then the proportion has fallen, during good years as well as bad, to the current level of 59%.

Worse, it seems likely that this wave of technological disruption to the job market has only just started. From driverless cars to clever household gadgets (see article), innovations that already exist could destroy swathes of jobs that have hitherto been untouched. The public sector is one obvious target: it has proved singularly resistant to tech-driven reinvention. But the step change in what computers can do will have a powerful effect on middle-class jobs in the private sector too.

Until now the jobs most vulnerable to machines were those that involved routine, repetitive tasks. But thanks to the exponential rise in processing power and the ubiquity of digitised information (“big data”), computers are increasingly able to perform complicated tasks more cheaply and effectively than people. Clever industrial robots can quickly “learn” a set of human actions. Services may be even more vulnerable. Computers can already detect intruders in a closed-circuit camera picture more reliably than a human can. By comparing reams of financial or biometric data, they can often diagnose fraud or illness more accurately than any number of accountants or doctors. One recent study by academics at Oxford University suggests that 47% of today’s jobs could be automated in the next two decades.

At the same time, the digital revolution is transforming the process of innovation itself, as our special report explains. Thanks to off-the-shelf code from the internet and platforms that host services (such as Amazon’s cloud computing), provide distribution (Apple’s app store) and offer marketing (Facebook), the number of digital startups has exploded. Just as computer-games designers invented a product that humanity never knew it needed but now cannot do without, so these firms will no doubt dream up new goods and services to employ millions. But for now they are singularly light on workers. When Instagram, a popular photo-sharing site, was sold to Facebook for about $1 billion in 2012, it had 30m customers and employed 13 people. Kodak, which filed for bankruptcy a few months earlier, employed 145,000 people in its heyday.

The problem is one of timing as much as anything. Google now employs 46,000 people. But it takes years for new industries to grow, whereas the disruption a startup causes to incumbents is felt sooner. Airbnb may turn homeowners with spare rooms into entrepreneurs, but it poses a direct threat to the lower end of the hotel business—a massive employer.

No time to be timid

If this analysis is halfway correct, the social effects will be huge. Many of the jobs most at risk are lower down the ladder (logistics, haulage), whereas the skills that are least vulnerable to automation (creativity, managerial expertise) tend to be higher up, so median wages are likely to remain stagnant for some time and income gaps are likely to widen.

Anger about rising inequality is bound to grow, but politicians will find it hard to address the problem. Shunning progress would be as futile now as the Luddites’ protests against mechanised looms were in the 1810s, because any country that tried to stop would be left behind by competitors eager to embrace new technology. The freedom to raise taxes on the rich to punitive levels will be similarly constrained by the mobility of capital and highly skilled labour.

The main way in which governments can help their people through this dislocation is through education systems. One of the reasons for the improvement in workers’ fortunes in the latter part of the Industrial Revolution was because schools were built to educate them—a dramatic change at the time. Now those schools themselves need to be changed, to foster the creativity that humans will need to set them apart from computers. There should be less rote-learning and more critical thinking. Technology itself will help, whether through MOOCs (massive open online courses) or even video games that simulate the skills needed for work.

The definition of “a state education” may also change. Far more money should be spent on pre-schooling, since the cognitive abilities and social skills that children learn in their first few years define much of their future potential. And adults will need continuous education. State education may well involve a year of study to be taken later in life, perhaps in stages.

Yet however well people are taught, their abilities will remain unequal, and in a world which is increasingly polarised economically, many will find their job prospects dimmed and wages squeezed. The best way of helping them is not, as many on the left seem to think, to push up minimum wages. Jacking up the floor too far would accelerate the shift from human workers to computers. Better to top up low wages with public money so that anyone who works has a reasonable income, through a bold expansion of the tax credits that countries such as America and Britain use.

Innovation has brought great benefits to humanity. Nobody in their right mind would want to return to the world of handloom weavers. But the benefits of technological progress are unevenly distributed, especially in the early stages of each new wave, and it is up to governments to spread them. In the 19th century it took the threat of revolution to bring about progressive reforms. Today’s governments would do well to start making the changes needed before their people get angry.


Why Real Estate Tech Is So Attractive For Founders by ezl


Comments:"Why Real Estate Tech Is So Attractive For Founders by ezl"

URL:http://blog.ezliu.com/why-real-estate-tech-is-so-attractive-for-founders/


Why I think real estate tech is one of the most attractive spaces:

1. We have EASIER problems to solve.

People are still doing things with pen and paper, or with impossible-to-manage Excel spreadsheets. Online and mobile are a huge win for operators, and operators mostly have the same needs. There are very few TECHNICAL challenges in this space; development is essentially applying a known solution to a new problem space.

2. The bar for excellence is very low.

Just google for the real estate solution to any problem. The top hits are old companies that move very slowly. If any of their websites showed up on Hacker News, people would shred them for their design and for "looking sooooo web 1.0". But they're killing it because there isn't an alternative that customers know about.

Consider this: the iPhone has only been around since 2007 in ANY capacity, and in the last 3 years consumer-facing apps have completely changed. These companies haven't and are unlikely to. User interface and usability are still new ideas outside of Silicon Valley.

Beating the incumbents at product is surprisingly easy.

3. The market is huge and terribly underserved by tech.

Each of:

  • apartment discovery (like craigslist and other listing sites)
  • rental applications and tenant screening
  • rent payments
  • rental management software
  • commercial
  • residential sales process management

is worth on the order of a billion dollars a year in revenue.

Traction in one vertical facilitates traction in adjacent verticals. The prize is huge. The failure case is a lifestyle business. There is a known, straightforward path to linear growth with very low risk of failure.

4. The prospective customers already all agree that they need better tech solutions.

Every landlord in America knows that in the coming years they'll be switching to online applications and online rent payment. There's no question of IF, only WHEN. The challenge is that it's hard to convince people to move TODAY. The market is slowly shifting now, but there will be a cusp where the tide turns and people start switching en masse.

5. It’s relatively easy to reach players in the space.

I at least have the *option* of looking them up in the phone book. This is not true for a lot of other software products. “All hobbyist photographers” for example, is not an easy group to find.

*     *     *

Contrast these features with industries where there are relatively FEW dollars, customers need to be convinced that the product is needed at all, and tons of really smart people are working on the problem.

Instagram and Snapchat:

  • arguably have a market that spends zero dollars (claiming total US adspend as their market size is a joke),
  • are in a space with WAY more talented peers (sorry vaultware.com, I’m going to have to say I’d rather have an engineer from the photo team at Facebook),
  • have problems that are on the bleeding edge of tech, so they have to INVENT new solutions (I will never have to invent a database for my real estate problems, and realtime async stuff isn't important for me),
  • don't have an obvious need or solve a clear business problem (how many people are *still* saying "I don't get the point of Twitter"?), and
  • can't look up their prospective customers in a list (get me an email list of all 13-16 year old teenagers with a Nexus 4).

Real estate has an easier path to positive EV outcomes with a massive upside tail. That’s my flavor.

Coding Horror: The Magpie Developer


Comments:"Coding Horror: The Magpie Developer"

URL:http://www.codinghorror.com/blog/2008/01/the-magpie-developer.html


January 6, 2008

I've often thought that software developers were akin to Magpies, birds notorious for stealing shiny items to decorate their complex nests. Like Magpies, software developers are unusually smart and curious creatures, almost by definition. But we are too easily distracted by shiny new toys and playthings.

I no longer find Scott Hanselman's Ultimate Developer Tool list inspiring. Instead, it's fatiguing. The pace of change in the world of software is relentless. We're so inundated with the Shiny and the New that the very concepts themselves start to disintegrate, the words repeated over and over and over until they devolve into a meaningless stream of vowels and consonants. "Shiny" and "new" become mundane, even commonplace. It's no longer unique for something to be new, no longer interesting when something is shiny. Eventually, you grow weary of the endless procession of shiny new things.

I'm not alone. Jeremy Zawodny also notes the diminishing luster of shiny new things:

Over a year ago I unsubscribed from Steve's blog because he had a habit of writing in breathless fashion about the latest shiny new thing – often several times a day. I see too many people I know getting caught up in the breathless hype and forgetting to think about whether the latest shiny new thing really matters in the grand scheme of things.

Dave Slusher concurs:

[Robert Scoble] says that he gets too much email and that is ineffective for getting PR releases to him. He suggests that what you should do now is leave him a message on his Facebook wall. Dear god and/or Bob. In the time I've followed Scoble, I must have seen something like this a dozen times from him. Don't email, Twitter me. Don't Twitter, Pwnce. Jaiku me. Leave a wall message, send an SMS, just call me, email me, don't email me, don't call me. Enough already! I'm not even trying to get in contact with him, and I find this constant migration from platform to platform to be a load of shit that just wearies me. I felt the same way when I dropped TechCrunch, well over a year ago. I got so tired of hearing about another slightly different way of doing what we were already doing and why that tiny difference was worth dropping everything and moving over. I officially renounce the search for the newer and shinier.

It isn't just the neverending stream of tech news. It's also the tidal push and pull of a thousand software religious wars that continually wears us down, like errant rocks in a rapidly flowing stream. I bet the process David Megginson outlines sounds awfully familiar:

1. Elite (guru) developers notice too many riff-raff using their current programming language, and start looking for something that will distinguish them better from their mediocre colleagues.
2. Elite developers take their shopping list of current annoyances and look for a new, little-known language that apparently has fewer of them.
3. Elite developers start to drive the development of the new language, contributing code, writing libraries, etc., then evangelize the new language. Sub-elite (senior) developers follow the elite developers to the new language, creating a market for books, training, etc., and also accelerating the development and testing of the language.
4. Sub-elite developers, who have huge influence (elite developers tend to work in isolation on research projects rather than on production development teams), begin pushing for the new language in the workplace.
5. The huge mass of regular developers realize that they have to start buying books and taking courses to learn a new language.
6. Elite developers notice too many riff-raff using their current programming language, and start looking for something that will distinguish them better from their mediocre colleagues.

I hope you're sitting down, because I've got some bad news for you. That Ruby on Rails thing you were so interested in? That's so last year. We've moved on.

If you consider that, statistically, the vast majority of programmers have yet to experience a dynamic language of any kind – much less Ruby – the absurdity here is sublime. Some dynamic language features are trickling down to the bastions of Java and .NET, but slowly, and with varying levels of success. These so-called thought leaders have left a virtual ghost town before anyone else had a chance to arrive.

I became a programmer because I love computers, and to love computers, you must love change. And I do. But I think the magpie developer sometimes loves change to the detriment of his own craft. Andy Hunt and Dave Thomas, the Pragmatic Programmers who were a big part of the last sea change in Ruby, said it quite well in a 2004 IEEE column (pdf).

Users don't care whether you use J2EE, Cobol, or a pair of magic rocks. They want their credit card authorization to process correctly and their inventory reports to print. You help them discover what they really need and jointly imagine a system.

Instead of getting carried away with the difficult race up the cutting edge of the latest technology, Pete concentrated on building a system [in COBOL] that works for him and his clients. It's simple, perhaps almost primitive by our lofty standards. But it's easy to use, easy to understand, and fast to deploy. Pete's framework uses a mixture of technologies: some modeling, some code generation, some reusable components, and so on. He applies the fundamental pragmatic principle and uses what works, not what's merely new or fashionable.

We fail (as an industry) when we try to come up with the all-singing, all-dancing applications framework to end all applications frameworks. Maybe that's because there is no grand, unified theory waiting to emerge. One of the hallmarks of postmodernism – which some think is a distinguishing feature of our times – is that there's no "grand narrative," no overarching story to guide us. Instead, there are lots of little stories.

Don't feel inadequate if you aren't lining your nest with the shiniest, newest things possible. Who cares what technology you use, as long as it works, and both you and your users are happy with it?

That's the beauty of new things: there's always a new one coming along. Don't let the pursuit of new, shiny things accidentally become your goal. Avoid becoming a magpie developer. Be selective in your pursuit of the shiny and new, and you may find yourself a better developer for it.

Posted by Jeff Atwood

"Doesn't that mean that if you stick with Java or C# you will not have to worry about any of this? "

A little bit. At least for the next five years this will be the case. I've noticed that C++ almost never adds anything significant anymore. The Java folks seem to have been burned by implementing generics and seem a lot more hesitant to do continuations. C# is newer than Java, and they seem to throw stuff in on a pretty consistent basis.

What I take from this is that languages accumulate some momentum and it becomes harder and harder to change direction the longer they have been rolling. Libraries, legacy code and backwards compatibility are big burdens for successful languages.

I bet over half of the code in the world is still written in COBOL!

Well, at least a third...

Talk about old and dull, but it still keeps processing those millions of insurance claims every night.

I've grown weary of the new paradigms. I am happy as long as I can code, compile, run, and test without much hassle. I don't even want to learn the intricacies of new features; I just want to know the new syntax for my old tricks. What's the name and syntax of the new queue, deque, list, arraylist, collection, stack, whatever you want to call it... How do I concatenate my strings and index them...

Jeff,

I am curious as to what exceptions would qualify a transition to a new shiny technology as not magpie behavior. If the new shiny technology would, say, reduce code complexity and developer effort by 10%, would it not be worth it? Or does adopting the new technology just for the sheer fact that it would be "cool" constitute being a magpie?

What are your thoughts? How do the exceptions fall into place?

The only "shiny" new trinket I use these days is Notepad++. Just give me Visual Studio and Notepad++ and I'm pretty happy.

The worst part about chasing all the shiny objects is that eventually it wears you down to the point that you start to dislike that which you loved before...

I bailed on programming for a few years just because I got so sick of the continual chase, year after year after year. I've had a long enough vacation now that I feel the love again and am getting back in. :) Started off with some nice easy x86 asm and C (where most of my time was spent before), then tried some Ruby on Rails (ugh, what WERE they thinking? Easy to do "quick and dirty" web stuff, but maintainability is a nightmare), and finally settled on Java (which appears to have become a nice stable business platform, entirely different from the Java that was coming out just as I was bailing on programming...).

Excellent post. It's important to always remember that real-life experience is part of the "best tool for the job" evaluation and will in many cases outweigh subtle "fewer lines of code" benefits.

Well written and a good point. I've often thought about the same thing: to use all the latest toys, I have to spend more time researching/learning them than writing actual code.


Lately I've started to use plain "vanilla" Visual Studio - no fancy refactor tools, no snippet inserter, no nothing. And I get along just fine.

I honestly think this was all obvious to real developers from the start but it was again a well-researched post. As you have said you are blogging for your own sake, there's really no point in saying stuff like "Well. Duh!".

I am a little curious though; what makes you think software developers are smart and curious, almost by definition? What's the definition you are using, really?

It's turtles all the way down. For every developer getting sucked into the new and shiny, there are eight managers helping weigh down the bandwagon, hoping for the silver bullet to solve all their problems.


Shiny new things look really appealing when you don't have a life. Instead of Magpies I was thinking of Monkeys that get fascinated by some shining object in a stream.

God help us with what the VS team is dreaming up. 2008 doesn't seem to have one thing that isn't shiny, new, and totally useless unless you have an IQ of 170.

Hear hear! Vanilla Visual Studio!

Seeing Scott's big page of shiny stuff, I just have to browse through them! Maybe I should try out something better than Notepad and Paint...

The thing with some of those programs is that once you're used to them, there's no going back! Good example would be tabbed browsing.

As to Sander:
Have to copy XML text in Visual Studio. Glad to use the addin "Smartpaster" which splits the lines and adds double quotes. Did save quite a bit of time.

The flip side to this is: "what *are* the important changes that should be seriously considered as reasons to change". And since it is easier to use hindsight than prescience rephrase to "what were the important changes of the last 50 years in software development?".

I would suggest, in roughly chronological order (and probably order of importance):

1) Representation of the software as an abstraction from the hardware it runs on (basically compilers rather than assemblers; the world of Fortran and C, plus all those that died out).

2) Functional languages, principally for enabling the elite gurus to build some of the more potent tools and testbeds on which many of the later improvements would be based.

3) Provision of large APIs, not simply as a means of interacting with the OS but to handle more common tasks/data structures independent of the actual environment.

4) Managed code, i.e., letting the system handle memory allocation and deallocation for you, with the sanity and safety checks that result.

5) Sophisticated Integrated Development Environments where the code/build/[test|debug] cycle can be made faster and simpler.

Whilst I like OOP in many ways, I have not included it because I don't believe it justifies its presence amongst the above (though you could argue a well-designed OOP language can make the provision of the IDE easier/faster compared to many of the alternatives).

I have not included the provision of code/data in electronic form (i.e. moving off punched cards) since the change was obviously massive but was more a change in the hardware.

It should be noted that almost all of these changes required the increasing power (and readily available fast memory) of the computers of the time, so you could argue that anything which enables moving to the latest shiny new computer hardware is good. As such, you could argue the highly portable nature of modern languages is the most important change...

I do not know where UI frameworks should feature here. They seem kind of orthogonal to the concept, since they improve in a similar fashion throughout but do not take over most needs in the same way: the creation of compilers eliminated the need for the vast majority of people to code in assembly, whereas the provision of WIMP did not remove the need for decent command-line tools. Certainly knowing how to do a quick WIMP system became an important part of many developers' skill sets.

In the same fashion, source control must go in there somewhere as the most useful tool, but it's not really a programming paradigm shift like the others.

I think the really elite programmers use low level languages and assembler/byte code... and perhaps wonder why so much time is wasted on all this new flashy jazz. :P

Ruby on rails is a great example of something I deemed a complete waste of time when it came about... I couldn't see anything great about it other than another learning curve.

Stick to the tried and tested... to quote the old adage, "if it ain't broke, don't fix it".

I've never needed a tool beyond the latest Visual Studio... although I will use libraries etc... it's a bit different... stuff like OpenGL, flex, bison etc... none of which are shiny or new. :)

What's all this "buying books and taking courses to learn a new language" business I hear mention of? Over 20+ years of programming and learning new languages (from assembly onwards...), I've never done either of those weird things :)

I think its amazing with all the shiny and new stuff that I still find myself using vim/emacs and sed/awk when I really get backed into a corner.

Jeff, you must admit that all "shiny new toys" come with additional features, and by your definition, "software developers are curious". Thus we (software developers) can't stop asking "what does this new feature do?", actually meaning "how much of my work does this new feature do for me". The hacks they provide make our day, since we are lazy and wouldn't want to do them manually. The next batch of new toys comes out and the cycle continues.

Erik I wasn't trying to say one approach was better than the other in any particular field. I'm sure one could find plenty of examples of one approach being selectively better than the other at different times depending on the metric you used. And frankly I don't think the hydraulic flight stick really goes against any particular point I was making -- mine was a point about cultures of production, not specific things produced.

Jeff, you must admit that all "shiny new toys" come with additional
features, and by your definition, "software developers are curious".

Jeff's a .net coder, so he's probably installing vs2008 and wondering what it's broken; then the first SP will be out; if it's worth learning LINQ; what Silverlight will bring to the table etc etc!

All hail the sub-"huge mass of regular developers" who are just slow enough to see the tarnish appearing on the shiny new things.

We raise a glass to the tarnished not-so-new things and toast their brevity while silently praying: "there but for the grace of god, go I"

i see only one problem in "being selective in your pursuit of the shiny and new" -- it almost immediately means that you have to know what the shiny and new is -- you have to work with it to determine if it fits you -- you are not selective anymore -- you have to invest a lot of time to find out if the shiny and new (probably it's only shiny and new for you but not anymore for the crowd) fits your needs

a perfect example is ajax. a lot of companies do web development for example using asp.net. now microsoft releases an ajax library on top of this -- what are you going to do? -- miss the train and stick to pure asp.net or jump on the shiny and new ajax stuff of .net 3.5 ...

it is quite hard to find your way between sticking to the known or using the new technologies ... even if everything is situated in the same family of technologies like in my example the .net platform

what we do here is to read about lots of different stuff but wait at least half a year until we incorporate something in our products ... not even this approach is perfect ...

Excellent post... couldn't agree more :) The shiny things call to me on a daily basis; it's hard to resist.

I try out new tools regularly, not because they are shiny and new, though they often are, but because they may be *better*. If they are better then I will stick with them, if not, I go back. This is why I still use Vim, because no other text editor I have used so far provides me the balance of power, speed and usability that I find Vim (and a few years experience with it) lends me.

I think to give up on new products is as foolish as blindly following their allure. Yes, there are fashions in software applications and in software development as an industry -- sometimes these fashions are led by great innovation and sometimes by marketing, popularity and hype. The easy option is to denounce it all, leave the fashion industry completely, maybe even find a niche. It is a little more effort to try to filter out the hype, judge the products for what they are and what they add and to pick the ones that work best for you, but ultimately, in doing so, you will be a stronger developer, concentrating more of your time on the difficult task of solving the business problem.

As to programming language, it is all very well saying that programmers should not worry about following the industry and be happy with what works, but at the end of each appointment comes the need to find another and there is nothing better than experience in the currently most popular technologies when trying to find a job in any particular job market, whether you like those languages or not.

Dozens of interview candidates the last year:
"So, what software/technologies/platforms do you use here?"

My response to every single one of them:
"We use whatever works."

I think perhaps you meant 'effete' everywhere you used 'elite'. An elite programmer is one who is superlatively productive, writing code that communicates clearly the problems it is solving and how it is solving them. The effete programmer is dedicated to living a decadent life of excessive tool-lust and no sense of responsibility beyond constant satisfaction of his own voracious ego.

Some of us picked up our shiny stuff back in the 1980's, and settled in to wait for the rest of computerdom to catch up. I think that with about five years of effort focused on performance, Ruby and Python will catch up to Common Lisp and CLOS.

(And if we're really lucky, the CL community will settle on decent cross-platform standards for sockets and Unicode...)

I've just bitched about this over at my blog (http://enfranchisedmind.com/blog/2007/12/29/functional-programming-language-de-saison/), too, although in a more particular case: functional languages themselves seem to have a popularity turnover about every three months. You cite Scala as the next big thing, but it's really on its way out: mark my words.

Go hit programming.reddit.com and type in "Erlang" -- the flood of posts was from last summer, and it was basically inaugurated with the Seaside presentation in front of all the Railsists. Then people seemed to get bored with Erlang in about 3 months, and then Haskell had a longer run (about 4 months). Then it died out and Ocaml had a couple of months, and now Scala is on its rise.

Now, as an Ocaml developer, it's taken me three years just to wrap my head around all the new ideas (http://enfranchisedmind.com/blog/2007/01/01/ocaml-lazy-lists-an-introduction/) and other weirdness (http://enfranchisedmind.com/blog/2007/08/06/a-monad-tutorial-for-ocaml/) in that language, and I'm still getting used to it. I must just be stupid if everyone else is able to grok those languages in a matter of a couple of months...

vi and Makefiles ftw!

Visual Studio in Windows though :/

Oh dear god I had forgotten about ever going back to check out Scott Hanselman's list.

There goes my morning!

Although I have found sometimes there are some real gems to be found as you build your nest. Just need discipline to go nest building AFTER you've had time to actually get real work done.

On the other hand you don't want to write ASP Classic apps with notepad for the next 50 years ...

As a programmer with a blog named "Shiny Things", I'm fully aware of my magpie nature. When I find a new language or technology, I try to learn something new from it and take it back to the technologies I use regularly.

You eventually learn that most of the time shiny and new is repackaged and re-hashed...

Granted, things like Ruby may represent a step forward; the real question is, is it really worth it? Will it really bring about a new leap in productivity?

And then there are the fence sitters, you know the ones I'm talking about, "Yeh, I'd buy a computer this week but I'm waiting for the next greatest and latest in oh, about six months or so, don't want to be obsolete right out of the box..." There are programmers with the same mentality.

The point is, we make a living doing what we are doing, and we are, for the most part, privileged to do so. I pick my toolset to make my living. Period.

However, we all have to keep up so that we don't get left in the technological dust.

I guess the point is we have a need for shiny and new, just so long as the shiny and new can make us some money. At least I think that is still the reason I and about 200 million others like me in the USA go to work.

The key point to avoiding magpie syndrome is to stay focused on the product itself. The magpies are the ones who are erroneously focused on the tools and/or the language they are using.

Here is a rule of thumb: Never change the language you are using. Change the tools perhaps a little more often....

As others have pointed out, your users don't care about your tools. Focus your energy on adding features to your product and making it better... mastering your tool set and occasionally improving the tools over time using tested and well-verified techniques demonstrated by others. The last thing you want to be coding in is a language or toolset that had the life cycle of a bad fashion fad... that nobody will know anything about in 3 to 5 years.

Jeff, you had claimed Joel had "jumped the shark" at Fog Creek for writing their own compiler to avoid a rewrite as their product evolved. Read your own post here again and you might understand better why they did that.

Jeff,

You've made the comment in the last couple of weeks that end users don't care what language an application is written in; I disagree. I work for a small consulting firm that was a Visual FoxPro house; now that VFP has been relegated to the back burner at Microsoft, no new customers are appearing, and well they shouldn't. Most customers are savvy enough to realize that any custom software needs to be written in a language that currently fits into the "riff-raff" category, so that there is a base of developers to support it who aren't elite or so few and far between that they are hard to find.

Vertigo looks like a magpie company!

Where does one download those "magic rocks" of which you write?

Great post, Jeff. I've only been doing professional software development for 3 years and I'm already dreaming of escaping to management for precisely this reason. New tools and methods are great, but I find it frustrating that so many of our peers would rather engage in endless religious wars over the "best" technology than build useful software.

That's an English Magpie. I didn't think they looked like that in America. Is that right?

That's why I love Python, you get the best of both worlds : an ever evolving and shinier language built on top of robust and clean foundations.

I wonder what people find shiny and new in Ruby, Scala, Java, C#, etc... they still have braces and begin/end tags! It may look like a detail, but that's one of the differences that makes Python code so much smaller, and thus more readable. Readability being one of the most important aspects of a good program, Python made an improvement here where others failed miserably or even ignored the problem.
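The size difference the commenter describes is easy to illustrate with a toy snippet (the `evens` function is purely illustrative, not taken from any library); a braced, Java-style version of the same logic is shown as a comment for comparison:

```python
# Roughly the same logic in a braced, Java-style language, for comparison:
#
#   public static List<Integer> evens(List<Integer> xs) {
#       List<Integer> result = new ArrayList<>();
#       for (int x : xs) {
#           if (x % 2 == 0) {
#               result.add(x);
#           }
#       }
#       return result;
#   }

def evens(xs):
    """Return the even numbers from xs; indentation alone delimits the block."""
    return [x for x in xs if x % 2 == 0]

print(evens([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```

Whether that brevity actually matters in practice is, of course, exactly the debate running through these comments.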

The debate between dynamic and compiled languages should not even exist; the optimization lies in the algorithms, not in the tiny little bits of performance you get from compilation or clever line rearranging. When you really need to optimize to the limit, just write those ten lines of C/assembler your program spends most of its time in, and if you find yourself writing too much C then it may be time to rethink your algorithm...
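The "algorithms over micro-optimization" point above can be sketched in a few lines: swapping the data structure changes the cost of a lookup far more than any compiler trick would. A minimal sketch (the variable names are just illustrative, and the absolute timings will vary by machine):

```python
import timeit

n = 10_000
haystack_list = list(range(n))
haystack_set = set(haystack_list)

# Worst-case membership: the element sits at the end of the list.
# A list scan is O(n); a set lookup is O(1) on average.
list_time = timeit.timeit(lambda: (n - 1) in haystack_list, number=1_000)
set_time = timeit.timeit(lambda: (n - 1) in haystack_set, number=1_000)

print(f"list scan: {list_time:.4f}s, set lookup: {set_time:.4f}s")
```

The set lookup wins by orders of magnitude, dwarfing any constant-factor gain compilation could offer for the list scan.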

Scott Hanselman's list effectively seems boring; I stopped reading after Notepad2++alpha4. Really need an advanced text editor? Learn Vim or Emacs. Yes, you will have to throw away all the good things you learned from Notepad (?!), but in the end you will thank the 20+ years of user interface refinements that drive these programs.

I think that's what this post shows, finally. Good tools that present real improvements, like Python, stay in the shade because of programmers' laziness, while things that are not really new but faster to learn get all the "attention".

I think there are too many riff-raff using C#. I'm going to switch to COBOL.NET. Seriously though, I did invest some time in learning Eiffel.NET when I wanted use something obscure to keep a project from being outsourced.

JavaScript Object Notation is shiny and new but I've found it is actually quite useful so I'm doing a lot with it.
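For what it's worth, the appeal of JSON the commenter mentions is mostly that a nested structure survives a round trip through plain text with almost no ceremony. A minimal sketch using Python's standard `json` module (the record itself is made up):

```python
import json

record = {"name": "widget", "qty": 3, "tags": ["new", "shiny"]}

text = json.dumps(record)      # serialize to a plain, human-readable string
restored = json.loads(text)    # parse back into native dicts/lists/strings

print(text)
print(restored == record)  # True
```

The same round trip works from JavaScript, .NET, Ruby, or nearly anything else, which is why it travels so well between systems.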

Visual Studio 2008 has caused me a problem. I now have msiexec.exe bogging down my PC whenever I run Visual Studio 2005 or 2008. I think it is trying to repair something.

I think part of the issue is in how easy it is to integrate the shiny new thing into your workflow and project. Refactoring tools are easy to adopt -- they don't do anything but make the source easier to manage (hopefully), and in the end the code still has to compile before it can begin testing.

LINQ is a much harder proposition. I just can't see people with years of investment in stored procedures switching to LINQ just because they can. And lest we forget the VB6 revolt caused by VB.NET not being backward compatible.

One probably just needs to be pragmatic (as we always have been) in selecting tools for use in our daily jobs. The trick is in not giving in to the hype. Always remember that the demos always work; the devil is in the details of doing something not in the demo. When I evaluate a new tool, if I can't do some small project that I feel should be easy with the tool, into the crapper it goes.

Nicely said. If you follow programming.reddit.com, you get the impression that everyone is using functional languages and C disappeared back in the 1980s.

Your post makes me think of Ecclesiastes (1:8-9):

"All things are wearisome, more than one can say. The eye never has enough of seeing, nor the ear its fill of hearing. What has been will be again, what has been done will be done again; there is nothing new under the sun."

Whenever I start a project at home, I spend some time looking at language features, modules, or libraries that could help me. This usually leads me to spend lots of time reading terrible documentation for their APIs. I generally end up concluding that the libraries are terrible and it would be less effort to write it all myself.

I've noticed this too with Microsoft APIs. They have done this with GDIPlus (it is now in maintenance mode), with WinForms (in favour of the new, shiny WPF) and countless other APIs.

Just when you've caught up with the new technology from Microsoft, they have moved on to something new.

Put another way...

It's not better *just* because it's written in:

1) Ajax

2) Ruby

3) Python

4) .Net

5) Fad language of the month designed for students/academics who don't have to make money but do have the time and leisure to play with cool new stuff while someone else is paying their rent.

It's better because it DOES something BETTER. Clicking a link is not a significant improvement on clicking a button.

Finally!

I was surprised someone of your experience, having gone through so many technologies (I too got my start with LOGO) wasn't sufficiently sceptical about Ruby.

I've seen the claims so many times; when Ruby came out, I just sighed. A good idea, yes, but it's been tried before. A matter of time before people started adding to the technology to get it to do what it didn't do before, and it would join the never-ending sea of frameworks out there.

I read recently that, nationwide, the job percentages are 22% C++ programmers, 12% .NET programmers, 8% Java programmers. The remaining 58% are divided among the rest (Delphi, JavaScript, legacy ASP, PHP, ColdFusion, etc.).

I found it interesting, the highest percentage belonged to the most mature of the technologies.... hmmm... food for thought

I posted pretty much the exact same thing about four-and-a-half years ago:

Do Yourself a Favor and Stop Learning
http://gadgetopia.com/post/1081

I've never fallen victim to the obsession with new, shiny languages. As a matter of fact, I still do the vast bulk of my web programming in ASP VBScript and PHP. They both work well, they deploy quickly, and they're easy to find help with because they are well established. I have only recently reached the limit of ASP, but PHP appears to be limitless. Unfortunately, my employer frowns upon the use of open-source languages.

Now, the fact that I'm not a Magpie doesn't say anything great about me, it's probably just because I'm such an isolated developer. The only other developer I work with pushes .NET, and I already hate .NET, so no pressure there, although it appears that .NET is my only option to move forward (oh Python, how I wish we could be together!). My point is, it's only natural to compete with your peers, and in Software you do it by pushing the language edge.

"Don't feel inadequate if you aren't lining your nest with the shiniest, newest things possible. Who cares what technology you use, as long as it works, and both you and your users are happy with it? "

Problem is, too often it doesn't work. With so many software projects failing, so many users upset with the results of projects these days, and so many critical bugs emerging after deployment, is it any wonder that people are constantly looking for something new? The issue isn't that we have too many solutions without problems; it's that we have too many problems without (good) solutions.

Brilliant, I needed to hear that. Now I can rest from always thinking I need to be actively learning every new thing that comes along. Good to hear it from someone else.

Thanks Jeff

Nick wrote:
we have too many problems without (good) solutions.

And how has *any* new programming language or development process addressed this?

Great claims have been made for OOP, extreme programming, SCRUM, Agile, etc.

Software still sucks, by and large. There are small improvements, largely the result of sustained attention to detail by the development team. None has proven to be a magic bullet.

"And how has *any* new programming language or development process addressed this?"

They've certainly addressed those problems, whether or not they succeeded may well be a different story. I wouldn't consider them all complete failures; as you say there have been improvements (even if they have been small). The fact that we still have problems just means we still have room for improvement (which means there will be more "shiny things" trying to be that improvement in the future).

On a completely different note, what is the point of that "Enter the word" test for comments? It seems to always be the same word, so if it is supposed to be a CAPTCHA test, it's not a very good one...

If with every new, inspired attempt at a perfect language we come a little step closer to mature software development in general, I say GO shiny new toys.

Please let all elite, senior, and regular software developers contribute to shiny new toys, even at the expense of their effectiveness as software developers. By the time the next new thing is there and we jump to yet another shiny new toy, we might have learned something from the previous shiny new toy and gotten more effective as a whole.

I agree with this post. I have a working set of systems built in Rails that didn't take a lot of pain to develop. I don't care about the shiny stuff. I just want to get things done quickly. I like shiny stuff and play with it in my own time. The reason there is so much awful Java out there is because it used to be shiny and was written by people who didn't understand object orientation or even good programming techniques. Most Java code I've come across isn't o-o, just the same old procedural stuff hidden behind some badly named objects (most of which are static).

That said, we use Rails because it's quick to develop stuff and we were starting from scratch. I also find writing interfaces really easy because Ruby underlies it all and I can get something written in half a day that would have taken most of a month using the technologies I used to use.

I worked for a Java shop running J2EE for a few months (I even have the Sun Java certs) and it drove me insane. Never have so many XML config files contributed so little for so long. (Struts? Are you insane?)

I suspect that Microsoft tools win because they automate a lot of the garbage you have to deal with as a result of the alleged agnosticism of the Java platform and (pick one) the OSS tools you've chosen. Rails wins because it's opinionated, as in the config is part of the conventions used by the framework - this is a good idea if you agree with the opinions, or don't have an opinion in the first place. MS is opinionated by default, think about it.

The sad thing is that the guy who wrote Mongrel, who should have ended up with a good job or being a high-paid consultant from his efforts has thrown his toys out of the pram. I suspect that it's because he comes across as someone who is difficult to work with (Elite developers are a self-selecting pain in the ...). So he's moved on to something else but still using Ruby. Fine. I don't care.

Also, remember all those "stable" and "old" technologies were all "shiny new toys" at one point. Java, C++, hell even C were once new technologies, and all faced skepticism that they would help (ok, I'll admit, I am too young to remember C++ and C coming out, and Java had just come out a few years ago when I started playing around with programming). Obviously they had some redeeming qualities as they stuck around for as long as they did.

I think your argument is short sighted. I see this argument a lot, primarily from the older developers that have been through the cycle a number of times. Frankly, getting old means you've already learned a lot, and learning becomes tiring after x number of iterations.

Now think back to when you were a scrappy new programmer, learning all the latest stuff. It was exhilarating, and new. And you found all kinds of cool things you didn't really know before.

Learning is a young man's game, sad to say. Sure, there are plenty of old geezers (like me) who still enjoy learning new stuff, but each year it becomes more and more of a chore. At some point I'll toss in my hat and relegate myself to "legacy" technologies.

The thing us geezers forget, though, is that the shiny new thing always brings with it *something* new and useful. And this technology will likely eventually filter down into the mass market. Without people doing the exploration, it's hard to tell which ideas are really worthwhile. Sure, the pioneers will move on to the next new technology next year, but they will bring with them the ideas they learned from the last one, and those ideas will begin to filter down the chain.

Just because you no longer want to be on the cutting edge doesn't mean the cutting edge is no longer useful. It may not be for you, but it is for the programming profession in general... it just may take some time to filter down.

I mean, where would we be if scientists just stopped trying new things? Thank god there's always a new crop of fresh faces who aren't tarnished and jaded by years of experimentation.

I have been programming for about 20 years and the tools I use are just that. The real meat of programming for me was and continues to be writing leaner and maintainable code that causes as few errors as possible.

Tools help a lot. To that extent they are very important. For example, the first few programs I wrote were in Norton Editor, and today I can't imagine writing one in an editor without syntax highlighting.

The other notable change during my programming career has been the rise in level of abstraction - from structured programming to OOP, patterns, services, etc. If one were to shy away from learning this stuff, one wouldn't have a job for long.

Still there are times when one needs to do bit manipulation, write a tight loop, parse an expression, use the right sorting algorithm, create just the right design and so on. The limelight no longer shines upon these skills but that doesn't mean they don't matter. The mainstream takes these things for granted, places more demands upon it and causes the elite to bring out better stuff.

What font is this written in? It looks sucky on firefox in XP.

Hey Now Jeff,
I don't want to be a Magpie dev!
Coding Horror Fan,
Catto

Well said Jeff. The urge to stand out in the crowd is what comes through in most blogs on technology. Maybe writing software really is a mundane task. Maybe we don't need so many great intellects in this field.

Now, if only Microsoft resisted the urge to release a new version of the language/framework every year........

I have to agree with Nobody Real. A good developer should always put aside a reasonable amount of time to explore new (new to you anyway) tools, languages, techniques, etc.

I've certainly tried my share of dead-end stuff, but there are quite a few tools in my toolbox that help me immensely that I wouldn't have if I didn't try new stuff on a regular basis.

If you are interested in some of the tools that I use regularly, I've posted an article on my website that lists them. Many of these tools came from Scott Hanselman's list (though my list isn't anywhere near as extensive).

- Favorite Tools -
http://www.redwerb.com/blogs/redwerb/pages/favorite-tools.aspx

I've actually managed to make my J2EE applications a lot meaner and cleaner since learning Ruby on Rails. Even though we don't use it where I work it was worth learning just to pick up a lot of neat tricks for writing clean web applications.

The issue isn't that we have too many solutions without problems, it's that we have too many problems without (good) solutions.

A man I've learned much from, Gerald Weinberg, wrote his first couple of books on the technology of programming. Then he switched, and wrote or coauthored 50 more on the process of programming, and he is most famous for saying "no matter what they tell you, it's always a people problem."

http://www.artima.com/weblogs/viewpost.jsp?thread=221622

I liked this post and understand your angle. I was just thinking about this recently...

But... don't you agree this kind of relates to our obsession with hardware too? Didn't you recently post about a DLink router? I embarrassingly am a college student (painfully poor) with 2 computers I upgrade/replace every year.

Isn't this an important part of the toolbox? Old stuff works, but what about the effect on productivity? Customers don't care about how many developers, hours, or lines of code either (unless it means it's going to cost them). If you ask me, considerations like these determine my tools.

On a completely different note, what is the point of that "Enter the word" test for comments? It seems to always be the same word, so if it is supposed to be a CAPTCHA test its not a very good one...
Nick on January 7, 2008 08:28 AM


http://www.codinghorror.com/blog/archives/000712.html

This is going to sound like hype, but it's just my $.02 and honest opinion.

Before I found ruby, I kind of hated programming, much of the time. It was just a pain. There were parts that were fun, enjoyable, etc.

But there seemed to be a pattern, which, perhaps might repeat itself w/ RoR for me one day, which was:

* be excited about learning all the cool whizbang technology of some new language (new for me)
* actually learn how to do the basic stuff in that technology
* get bored frequently, because every task seemed to be all about doing the same boilerplate things again, just slightly different each time

Perhaps I wasn't lucky enough to work on cool, unique projects all the time.

But now, even when doing my Xth social networking site in RoR, it's still interesting to work on and get down into the internals of the language, pushing the limits of the framework.

... my point is though, I'd probably still be hating life right now at a Java, .Net or PHP company if I hadn't taken the time to tinker with Rails one weekend.

Shit, maybe I'll say the same thing about Scala, Lua, Scheme/Lisp, etc one day... but I'm *far* from being elite or even sr. junior elite. :)

It's refreshing to hear that there are people who realize that the sheer amount of information overload has negative effects on us mere mortals. I used to read Scott's blog, but the guy puts up so much information that it just becomes noise and becomes anti-informative.

I think that the wave of new languages owes something to the ubiquitousness of the internet. In the past, new languages tended to fester and percolate in research labs and academia for years. The good stuff would eventually gain commercial traction and the lesser stuff would become geek trivia questions.

Now, everything new gets thrown at the wall to see what "sticks" as soon as someone can upload it. You don't know what has legs or what's better left on the drawing board.

http://www.artima.com/weblogs/viewpost.jsp?thread=221622
Summary
In November, I gave the commencement address for Neumont University, a school in Salt Lake City dedicated to teaching computer science where my coauthor lectures. This is that speech.

How odd...
Part of my living comes from tutoring students at that college on subjects the teachers don't know or just happen to get horribly wrong.

It's easy to confuse "the search for newer shinier" with the "search for a better way".

At some point, the sub-elite and the regular programmers just get snowed by their awe of the "elite". They jump on the next bandwagon and waste tons of time making shiny new apps rather than solving problems more efficiently. But it's a human problem as much as anything else. We want "new tech", not "better solutions". The latter reeks of "productivity gains" and so much corporate speak; the former smells of new car leather.

Here's a good one. We're on VS 2008, 3rd iteration (or is that 3.5th iteration?) 2005 finally stopped f*ing around with our HTML syntax and 2008 has bunch of shiny features. Now don't get me wrong, DLINQ and Extensions will definitely provide me with some increased productivity, but they're very "back-end-centric". But you know what else small devs waste a ton of time on? UIs!

So now we get WPF, but MS still hasn't cooked up a good ASP "input control" for all of the standard types. What do I mean? I mean that every input on a standard front page to a DB is actually like 3-4 controls (sometimes more). You have your label control, then your textbox/calendar/drop-down list, then your validation control (with or w/o AJAX?). Sometimes you even have multiple text-boxes or validation controls. But at the end of the day, you're always doing the same things: standard text input, phone input, credit card # input, postal/zip code input, "must-be-a-number" input that inevitably fails when someone types "-1" into the box, date range inputs, etc.

I have wasted so much time in the last year alone in both building these UIs and testing/rejecting the stuff that's submitted. Goodness knows that if MS just included a good library for this type of pre-cut "input control", then VS would simply pay for itself.

Instead, we get lots of "shiny" and a dose of "better". I think it's a human flaw as much as anything else. *sigh*

There's an interesting article from a while back in the science studies literature about comparative jet design in the USA and the USSR during the Cold War. The USA favored "shiny" technology -- top of the line, state of the art, caring more about raw performance of the machine than they did the cultures that made the machine or how much it cost. By contrast, the USSR had a different model of what constituted the "best" technology: they wanted versatile technology that could be used in many different theaters, at a good price, with interchangeable parts that could be serviced by anyone who knew how to repair a tractor. They had entirely different modes of production but ended up with a fleet of powerful jets that didn't have all of the frills of the US ones but cost a great deal less, could be more easily repaired and serviced, and could be easily produced in larger numbers. I always thought that was an interesting contrast in terms of technological cultures -- there is no one way to judge which is the "best" technical solution, it really depends on what your priorities are.

(The citation, if anyone cares, is Leon Trilling, "Styles of Military Technical Development: Soviet and U.S. Jet Fighters - 1945-1960," in E. Mendelsohn, et al, eds., _Science, Technology and the Military_, Springer 1989.)

...getting old means you've already learned a lot, and learning becomes tiring after x number of iterations.

Actually getting old in this business means that you've already learned the same thing 5 different ways and it's getting harder and harder to see how learning to read a text file a 6th way is going to benefit anybody.

I think it all ties back to the "Real Artists Ship" or Getting things done mentality. Accomplishing a goal, no matter what the technology, is more important than all the shiny new things (or dull things) you used to build it.

Shmork: What you fail to mention (and which, coincidentally, has been mentioned on this blog before) is that the whiz-bang hydraulic flight controls on Sabres actually wound up making a huge difference in their combat performance against the MiG-15.

http://www.codinghorror.com/blog/archives/000788.html

By definition, aren't windows developers always trapped working on the shiniest thing around? Waste time re-writing working programs because MS re-invented a new shinier OS?

Whenever I interact with fellow IT folks for the first time typical question asked is "In What Platform you work?"

I don't have any answers to that. I worked in variety of stuff COBOL, RPG, CL, C, Java, JSP, ASP, VB, C#, Oracle, SQL, DB2, ... too many to list.

My current project uses too many frameworks (Struts, Spring, Hibernate ...) to do simple CRUD database stuff. Developers are carried away with these fascinating frameworks and miss fundamental points such as database concurrency.

What you "use" doesn't matter, what you "deliver" matters. People fail to learn basics, very sad.

Best code language in the world that will last forever...

LOL Code (http://lolcode.com/)
Not shiny at all...very dirty in fact

@Thomas David Baker:

"On the other hand you don't want to write ASP Classic apps with notepad for the next 50 years..."

Not even if it works well, keeps your clients happy, and earns you a good living?

To Aaron G,
Your summary is too simplistic.

I programmed for a few years before using a garbage collected language. Then, when I used .NET I was surprised by how many common mistakes just weren't possible anymore. Later I read about the blub paradox. Nowadays, I live in fear that there is something much better out there that I don't know about.

The good news is that I work with a team of people which prevents me from jumping to every shiny thing I see.

I think the next big thing I have to discover is dynamic/implicit typing. I've experimented with it, but I don't think it will hit me until I work with it full time.


"In a 5 year period we get one superb programming language - only we can't control when the 5 year period will begin."
Alan Perlis

Welcome to the post-Singularity. You can't possibly keep up, but you must try or the developer in the next cube over who puts in more hours will code you out of a job.

"Some dynamic language features are trickling down to the bastions of Java and .NET, but slowly, and with varying levels of success."

Doesn't that mean that if you stick with Java or C# you will not have to worry about any of this?

I guess what it boils down to is this:

Do you want to get work done, or do you want to advance the technology baseline?

We could all still be writing in assembly if a few people hadn't gone out on a limb and invented higher level languages. Then, we had to have people go out on a limb and actually start to USE those languages.

C++ would not exist if it weren't for the "shiny new thing". Nor would Java. Nor would C# or the CLR. We probably wouldn't have an internet, or a world wide web.

I guarantee you, those crusty old COBOL programmers you used to make fun of when you were fresh out of college felt much the same way as this article and its supporters do. Congratulations. You've achieved nerdvana.

Nerdvana is that place where programmers go when they stop learning. They feel their skills are valuable and will continue with the status quo until one day they wake up and realize there aren't many jobs left, and those remaining are all maintenance.

At that point, you can choose to leave Nerdvana, and go to Management, or you can join the rest of your colleagues back on the learning treadmill and update your skillset. Either way, you're not going to keep doing what you were.

I am a young programmer (24yo) and learned pretty much everything about programming before the web languages started to appear. From BASIC to assembler, I think that covers more than enough territory for programming.

I can't believe a language which started as "Personal Home Page" finally ended up as a professional language. At first glance I thought the name implied corny websites written by 12 yo kids full of animated smileys and blinking lights.

The only real contender to established tools I have seen is Java. But I never used it seriously because it generates huge, memory-hungry bloatware. I have seen many pseudo-benchmarks of Java vs C and I can't believe for a second their speed is comparable.

I am glad I did stick with desktop development so I don't have to keep up with all those "fancy", buggy and hard to install/configure new languages. Microsoft provides pretty much all of the tools needed for the major OS which is Windows.

I simply love oranges, especially while typing comments.

What we could do is make a nice big public wiki-like
table - Excel Live or something - and have these columns:

Skillset (VB, VB.Net, VC++, C/C++, C#, C Omega, Linq, WPF, ....)
Feature (Templates, dynamic typing, lambdas/function pointers, UDFs, one-line serialization ;-), , ...)
HelpedMeOrHelpedMeNot (1,2,3,...10)
WantAsDirect? y/n
WantAsWorkaround? y/n
IThinkIsGoodThing y/n

and so on....

Someone has to figure out the moderation of content.
Just hope you get a correct statistical sample.
Hope. Anything else is incorrect or unreal ;-)

Jeff,

have you considered an alternate explanation: you are just getting old :)

I started using emacs this past year (June 2007, for the recordbooks) because I wanted something non-shiny. Well, also because I wanted a flexible editor with good scripting and regular expression support, and had always wanted to learn LISP since my wee days in the 1980s.

Too bad its Lisp is Elisp instead of Common Lisp, but that's okay.

As far as non-shiny goes, I'm stuck using VB6/VBA for most things at work. But, hey -- no reason to not write good code.

Reading this while I'm supposed to be coding in LabVIEW, with a tiny bit of C added in. Non-shiny for the win!

(but I do Django on the train commute...)

Great post, Jeff!!
Does make sense.

Users don't care whether you use J2EE, Cobol, or a pair of magic rocks.

Users don't, but as I'm a freelancer, customers almost always do. They wanted RoR - I started writing in RoR; if they want xxx++, I will start learning it. This is what makes all those "top languages" lists.

I never used the magpie, I always used a raccoon. My old boss was a raccoon, easily distracted by every new shiny piece of tinfoil he saw in the garbage heap.

I spent a lot of years trying to get him to realize what he was doing but it was always 'his ball, his bat, his field, his way or the highway'.

Shiny items are sometimes good but it's so easy for those in power (and not necessarily knowledge) to lose their way.

I really really look forward to ruby becoming a "thing of the past". Then I can go back to ruby coding in peace and quiet, like I did before the arrival of the rails hordes.

I agree that the pursuit of the new and shiny for its own sake is counter-productive...now somebody needs to get that info to Microsoft who profits the most from constantly introducing "new and shiny".
And to "encourage" adoption, older tech is "deprecated" routinely to make sure that the pursuit of new and shiny is first and foremost on the minds of every developer worthy of the name!

Even though I've not always agreed with you (Steven McConnell), sometimes you really hit the nail on the head with a satisfying force that could humble Thor himself. Many thanks for your well-written blog; it is one of the best on the 'net for this industry.

Jeff. I was an intern at Microsoft not too long ago. They're churning a lot of new tools and forced employees to use the beta version. It's very painful. One example was the new version of ADO.NET. The other being PowerShell (or Monad, whatever).

ADO.NET vNext was something. I can't even differentiate between ADO.NET vNext (or EDM?), LINQ, DLINQ, what else there? who knew?

They can advertise me how powerful PowerShell is but Python can do a lot of things in Linux/Unix/Windows.

Fast forward to the present:
- I decided to dump .NET
- I decided to pick up JEE 5 (JSF, EJB3, GlassFish)
- Say NO to Hibernate, Spring, Struts, whatever the fad
- I decided to learn Zope/Plone
- I decided to learn PHP (I know syntaxes, but haven't written anything)

I have some basic knowledge of Python and I decided to pursue it further.

A little bit of fact-checking:
- Zope: not pretty, but works well and rock solid since '97
- Plone: quirky, no Drupal-blingie, but Govt, NGO, Univs love it
- Python: Silicon Valley's best kept secret

Microsoft with its .NET always come up with something new every 2-3 years.
- It was MFC - WinForms - WPF
- It was Biztalk - WCF + WF
- It was Web Service, .NET Remoting, whatever-else - WCF
- It was ASP - ASP.NET 1.0/1.1 - ASP.NET 2.0 - ASP.NET 3.5
- Windows 2003 - 2008 - 7
- Active Directory: every new release is a big release
- SharePoint: same as AD

In between, as you know it, they'll have CTPs and a big marketing around it to gather early adopters. I'm just too tired.

People can say "just because it's there, it doesn't mean you have to use it". Well dude, if your manager told you so (for street-creds during MS expo), YOU'RE THE ONE TO IMPLEMENT IT.

People can say "just use whatever you need". How am I supposed to know what I need when those tutorials and books are covering every single corner of the latest technology using BETA PREVIEWS (this is a common practice for MS Press books)? It's like finding a needle in a haystack.

At the end of the day, those books are just eliminating more trees and end up being obsolete in a matter of months. What a waste.

nuff said.


OpenBSD Looking At Funding Shortfall In 2014 - Slashdot


Comments:"OpenBSD Looking At Funding Shortfall In 2014 - Slashdot"

URL:http://bsd.slashdot.org/story/14/01/15/1719244/openbsd-looking-at-funding-shortfall-in-2014


Problems with donating to OpenBSD

(1) The donations are being requested after the end of the tax year

Most charitable donations occur in October/November/December for tax reasons. This is true of both deductible and non-deductible donations.

(2) Donations are not tax deductible

This isn't a huge problem, if the OpenBSD Foundation were willing to invoice the company for the amount; then it could still be deducted as a business expense, but then the OpenBSD Foundation would have to claim it as income and pay tax on it. This isn't terrifically onerous for them in any case, as they are not a charity, and thus have to pay tax anyway, unless they can just get someone to pay their power bill directly instead (something they've requested).

Another option would be to have a U.S. OpenBSD non-profit that could then support work by OpenBSD under contract, even if that work were something like "provide nightly builds of OpenBSD binaries in exchange for grant funds". They don't seem interested in, or able to utilize, this approach.

(3) Invoicing would not exactly require some measure of editorial control, but...

There would be at least an implied expectation of quid-pro-quo, even if none exactly existed, since an audit of the company that was invoiced could require at least a paper justification for the value obtained in exchange for the invoiced amount. It doesn't have to be a great deal for the company, and it could actually be a completely lopsided deal, but there would need to be a token exchange of goods and/or services for the invoiced amount.

(4) If someone is willing to pay their power bill, they demand it be a Canadian company

I can understand the ramifications for this coming from a non-Canadian company; OpenBSD needs to understand the ramifications of "any port in a storm". There really aren't that many Canadian technology companies in this sector, compared to the U.S.; the highest percentage of OpenBSD-based products are in fact German.

(5) There are not a large number of products based directly on OpenBSD

The companies that do have products based on it are generally not hugely profitable, and the small number that exist are listed here: http://www.openbsd.org/products.html [openbsd.org] which gives you some indication of their market penetration.

(6) The OpenBSD folks don't have the most stellar relationship with the rest of the Open Source community

Without assigning specific blame, this should probably be addressed sooner, rather than later.

--

All in all, it's rather difficult to set up a legal fiction that would let it be advantageous to a business to donate.

It's not that they do not provide valuable software, it's just that most of the value they provide is not in the OpenBSD OS itself, it's in the ancillary projects that are associated with the same people.


Tcl the misunderstood


Comments:"Tcl the misunderstood"

URL:http://antirez.com/articoli/tclmisunderstood.html


Tcl the Misunderstood
Salvatore "antirez" Sanfilippo, 6 March 2006

Why Tcl is not a toy language, but a very powerful one

In an article recently linked from reddit entitled Tour de Babel you can read (among lots of other nonsense):

Heck, people still use Tcl as an embedded interpreter, even though Python is far superior to Tcl in every conceivable way -- except, that is, for the frost thing.

Ok, the whole article is well.. not very valid, but unfortunately while many misconceptions are promptly recognized by the informed reader, this one against Tcl is generally believed at face value. I hope this article will convince people that Tcl is not that bad.

Prologue

In my programming life I have used a lot of languages to write different kinds of applications: many free/paywork programs in C, a web CMS in Scheme, a number of networking/web applications in Tcl, a shop management system in Python, and so on. I used to play with a number of other programming languages like Smalltalk, Self, FORTH, Ruby, Joy, ... And yet, I have no doubt that there is no language as misunderstood in the programming community as Tcl is.

Tcl is not without faults, but most of its limitations are not hard coded in the language design; they are just the result of the fact that Tcl lost its "father" (John Ousterhout) a number of years ago, and together with him any kind of single-minded strong leadership able to take strong decisions. With the right changes it is possible to overcome most of the limitations of Tcl, and at the same time preserve the power of the language. If you don't believe Tcl is remarkably powerful, please take the time to read this article first. Maybe you still won't like it afterwards, but hopefully you will respect it, and you will certainly have strong arguments against the "Tcl is a toy language" misconception, which is even more petty than "Lisp has too many parentheses".

Before we begin, I'll spend some time explaining how Tcl works. Like the best languages in the world, Tcl has a few concepts that, combined together, allow for programming freedom and expressiveness.

After this short introduction to Tcl, you'll see how in Tcl things very similar to Lisp macros just happen using normal procedures (in a much more powerful way than Ruby blocks), how it's possible to redefine almost every part of the language itself, and how it is possible to mostly ignore types when programming. The Tcl community has developed a number of OOP systems, radical language modifications, macro systems, and many other interesting things, just by writing Tcl programs. If you like programmable programming languages I bet you'll at least look at it with interest.

Tcl in five minutes

Concept 1: Programs are composed of commands

The first idea of the Tcl language is: commands. Programs are commands, one after the other. For example to set the variable 'a' to 5 and print its value you write two commands:

set a 5
puts $a

Commands are space separated words. A command ends with a newline or with a ; character. Everything is a command in Tcl - as you can see there is no assignment operator. To set a variable you need a command, the set command, which sets the variable specified as the first argument to the value specified as the second argument.

Almost every Tcl command returns a value. For example, the set command returns the value assigned to the variable. If the set command is called with just one argument (the variable name), the current value of the variable is returned.

Concept 2: Command substitution

The second idea is command substitution. In a command some arguments may appear between [ and ] brackets. If so the argument is substituted with the return value of the code included inside the brackets. For example:

set a 5
puts [set a]

The first argument of the second command, [set a], will be substituted with the return value of "set a" (that's 5). After the substitution step the command will be converted from:

puts [set a]

to

puts 5

And, at that point, it will be executed.

Concept 3: Variable substitution

Always using the set command for variable substitution would be too verbose, so even if not strictly needed, variable substitution was introduced at some time during the early development of Tcl. If a variable name is preceded by the $ character it is substituted with its value. So instead of

puts [set a]

it's possible to write

puts $a

Concept 4: Grouping

If commands are space separated words, how to deal with the need for arguments that may contain spaces? For example:

puts Hello World

is an incorrect program as Hello and World are two different arguments. This problem is solved by grouping. Text inside "" is considered a single argument, so the right program is:

puts "Hello World"

Command and variable substitution work inside this kind of grouping. For example I can write:

set a 5
set b foobar
puts "Hello $a World [string length $b]"

And the result will be "Hello 5 World 6". Also, escapes like \t, \n will do what you think. There is, however, another kind of grouping where every kind of special character is just considered verbatim without any kind of substitution step. Everything between { and } is seen by Tcl as a unique argument where no substitutions are performed. So:

set a 5
puts {Hello $a World}

Will print Hello $a World.

Concept 1 again: Everything is a command

Concept 1 was: programs are composed of commands. Actually it is much more true than you may think. For example in the program:

set a 5
if $a {
 puts Hello!
}

if is a command, with two arguments. The first is the value of the variable a, substituted; the second is the string { ... puts Hello! ... }. The if command uses a special version of eval that we'll see in a moment to run the script passed as the second argument, and returns the result. Of course, you can write your own version of if or any other control structure if you want. You may even redefine if itself and add some feature to it!
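To make this concrete, here is a minimal sketch of a user-defined control structure. The command name repeat is made up for the example; it is an ordinary proc whose body argument arrives as a plain string, evaluated in the caller's scope with uplevel:

```tcl
# A user-defined control structure written as an ordinary proc.
# The body is just a string; uplevel 1 evaluates it one stack
# frame up, i.e. in the caller's scope, so "incr x" below works.
proc repeat {n body} {
    for {set i 0} {$i < $n} {incr i} {
        uplevel 1 $body
    }
}

set x 0
repeat 3 { incr x }
puts $x   ;# prints 3
```

This is exactly the mechanism if, while and foreach use: they are commands that happen to evaluate some of their string arguments as code.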

Concept 5: Everything is a string - no types

The following program works and does what you think:

set a pu
set b ts
$a$b "Hello World"

Yes, in Tcl everything happens at runtime and is dynamic: it's the ultimate late binding programming language, and there are no types. The command name is not a special type but just a string. Numbers are also just strings, and so is Tcl code (remember we passed a string to the if command as second argument?). In Tcl what a string represents is up to the command that's manipulating it. The string "5" will be seen as a string of characters by the "string length 5" command, and as a boolean value by the "if $a ..." command. Of course commands check that values have a suitable form. If I try to add "foo" to "bar" Tcl will produce an exception because it can't parse "foo" nor "bar" as numbers. These checks in Tcl are very strict, so you'll not get the PHP-like effect of silent absurd type conversions. The type conversion only happens if the string makes sense interpreted as the thing the command needs as arguments.
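A small sketch of this context-driven interpretation: the same value is a string to one command and a number to another, and a value that cannot be parsed as a number raises an error instead of being silently converted.

```tcl
# One value, two interpretations, depending on the command
puts [string length 5]   ;# prints 1: "5" seen as a one-character string
puts [expr {5 + 5}]      ;# prints 10: "5" seen as a number

# No silent conversion: this would raise an error, because "foo"
# cannot be parsed as a number:
#   expr {"foo" + 1}
```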

So Tcl is very dynamic, but guess what? It is more or less as fast as current Ruby implementations. There is a trick in the implementation of Tcl: objects (not in the OOP sense, but C structs representing Tcl values) cache the native value of the last use of a given string. If a Tcl value is always used as a number, the C struct representing it will contain an integer, and as long as subsequent commands continue to use it as an integer, the string representation of the object is not touched at all. It's a bit more complex than this, but the result is that the programmer doesn't need to think about types, and programs still work as fast as other dynamic programming languages where types are more explicit.

Concept 6: Tcl lists

One of the more interesting types (or better... string formats) Tcl uses is lists. Lists are the central structure of most Tcl programs: a Tcl list is always a valid Tcl command! (and both are just strings, in the end). In the simplest form lists are like commands: space separated words. For example the string "a b foo bar" is a list with four elements. There are commands to take a range of elements from a list, to add elements, and so on. Of course, lists may have elements containing spaces, so in order to create well formatted lists the list command is used. Example:

set l [list a b foo "hello world"]
puts [llength $l]

llength returns the length of the list, so the above program will print 4 as output. lindex will instead return the element at the specified position, so "lindex $l 2" will return "foo", and so on. Like in Lisp, in Tcl most programmers use the list type to model as many concepts as possible in programs.
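A quick sketch of the other list commands mentioned above (taking a range, adding elements), continuing the same example:

```tcl
set l [list a b foo "hello world"]
puts [lindex $l 2]     ;# prints foo: element at index 2
puts [lrange $l 1 2]   ;# prints: b foo (a sublist, itself a valid list)
lappend l last         ;# append an element to the list in place
puts [llength $l]      ;# prints 5
```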

Concept 7: Math in Tcl

I bet most Lisp hackers have already noted that Tcl is a prefix-notation language, so you may think that, as in Lisp, math in Tcl is performed using math operators as commands, like puts [+ 1 2]. Instead, things work differently: in order to make Tcl more friendly, there is a command that takes infix math expressions as its argument and evaluates them. This command is called expr, and math in Tcl works like this:

set a 10
set b 20
puts [expr $a+$b]
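As an aside, it is idiomatic to pass expr a braced expression, so the variables are substituted once, by expr itself, rather than twice; this is both safer and faster, and expr also understands the usual math functions:

set a 10
set b 20
puts [expr {$a + $b}]   ;# 30
puts [expr {sqrt($b)}]  ;# the square root of 20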

Commands like if and while use expr internally in order to evaluate expressions, for instance:

while {$a < $b} { puts Hello }

where the while command takes two arguments: the first string is evaluated as an expression, checked for truth at every iteration, and the second is itself evaluated each time. I think it's a design error that math commands are not builtins; I see expr as a handy tool where there is complex math to do, but just to add two numbers, [+ $a $b] is more convenient. It's worth noting that this has been formally proposed as a change to the language.

Concept 8: Procedures

Naturally, nothing stops a Tcl programmer from writing a procedure (that's a user defined command) in order to use math operators as commands. Like this:

proc + {a b} {
 expr {$a+$b}
}

The proc command is used to create a procedure: its first argument is the procedure name, the second is the list of arguments the procedure takes as input, and the last argument is the body of the procedure. Note that the second argument, the arguments list, is a Tcl list. As you can see, the return value of the last command in a procedure is used as the return value of the procedure (unless the return command is used explicitly). But wait... everything is a command in Tcl, right? So instead of writing four different procedures, we can create the procedures for +, -, *, and / in a simpler way:

set operators [list + - * /]
foreach o $operators {
 proc $o {a b} [list expr "\$a $o \$b"]
}

After this we can use [+ 1 2], [/ 10 2], and so on. Of course, it's smarter to create these procedures as varargs, like Scheme's procedures. In Tcl, procedures can have the same names as built-in commands, so you can redefine Tcl itself. For example, in order to write a macro system for Tcl, I redefined proc. Redefining proc is also useful for writing profilers (Tcl profilers are usually developed in Tcl itself). After a built-in command is redefined, you can still call it if you renamed it to some other name before overwriting it with proc.
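A minimal sketch of the rename trick (the name _proc and the message printed are my own invention, not part of Tcl; a real profiler would record timing data here instead):

rename proc _proc            ;# keep the builtin around under a new name
_proc proc {name arglist body} {
    puts "defining $name"    ;# our own instrumentation
    uplevel 1 [list _proc $name $arglist $body]
}

After this, every procedure definition in the program is still performed by the original builtin, but passes through our wrapper first.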

Concept 9: Eval and Uplevel

If you are reading this article you already know what eval is. The command eval {puts hello} will of course evaluate the code passed as its argument, as happens in many other programming languages. But in Tcl there is another beast, a command called uplevel, which can evaluate code in the context of the calling procedure, or, for that matter, in the context of the caller of the caller (or directly at the top level). What this means is that what in Lisp are macros, in Tcl are just plain procedures. Example: in Tcl there is no built-in repeat command, to be used like this:

repeat 5 {
 puts "Hello five times"
}

But writing it is trivial:

proc repeat {n body} {
 set res ""
 while {$n} {
 incr n -1
 set res [uplevel $body]
 }
 return $res
}

Note that we take care to save the result of the last evaluation, so our repeat will (like most Tcl commands) return the last evaluated result. An example of usage:

set a 10
repeat 5 {incr a} ;# Repeat will return 15

As you can guess, the incr command increments an integer variable, by one if you omit its second argument. "incr a" is executed in the context of the calling procedure (i.e., the previous stack frame).
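The companion of uplevel is upvar, which links a local variable to a variable in the caller's stack frame; with it, reimplementing something like incr is a few lines of Tcl (myincr is a hypothetical name; incr itself is the builtin):

proc myincr {varName {amount 1}} {
    upvar 1 $varName v       ;# v is now an alias for the caller's variable
    set v [expr {$v + $amount}]
}
set x 10
myincr x 5    ;# x is now 15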

Congratulations, you know more than 90% of Tcl concepts!

Why is Tcl powerful?

I am not going to show you every single Tcl feature, but I want to give an idea of advanced programming tasks that Tcl solves in a very nice way. I want to stress that I think Tcl has a number of faults, but most of them are not in the core ideas of the language itself. I think there is room for a Tcl-derived language that can compete with Ruby, Lisp and Python today in interesting domains like web programming, network programming, GUI development, DSLs, and scripting.

Simple syntax that scales

Tcl syntax is so simple that you can write a parser for Tcl in a few lines of Tcl itself. As I already mentioned, I wrote a macro system for Tcl in Tcl, able to do source-level transformations complex enough to allow tail-call optimization. At the same time, Tcl syntax can scale up to appear more Algol-like; it depends on your programming style.

No types, but strict format checks

There are no types, and you don't need to perform conversions; still, you aren't likely to introduce bugs, because the checks on the format of strings are very strict. Even better, you don't need serialization. Have a big, complex Tcl list and want to send it over a TCP socket? Just write puts $socket $mylist. On the other side of the socket, read it back with set mylist [read $socket], and you are done.

Powerful, event driven I/O model

Tcl has built-in event-driven programming, integrated with the I/O library. To write complex networking programs with just what is provided in the core language is so simple it's funny. An example: the following program is a concurrent (internally select(2) based) TCP server that outputs the current time to every client.

socket -server handler 9999
proc handler {fd clientaddr clientport} {
 set t [clock format [clock seconds]]
 puts $fd "Hello $clientaddr:$clientport, current date is $t"
 close $fd
}
vwait forever
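For completeness, a hypothetical client for the server above, again using nothing but core commands (localhost and port 9999 are just the values from the example):

set s [socket localhost 9999]
puts [gets $s]   ;# prints the greeting line sent by the server
close $s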

Non-blocking I/O and events are handled so well that you can even write to a socket whose output buffer is full: Tcl will automatically buffer the data in userland and send it in the background once there is space again in the socket's output buffer.

Python users know a good idea when they see it: Python's Twisted framework uses the same select-driven I/O concepts that Tcl has had natively for years.

Multiple paradigms

In Tcl you can mix object-oriented, functional, and imperative code, more or less as happens in Common Lisp. A number of OOP systems and functional-programming primitives have been implemented over the years: there is everything from prototype-based OOP systems to Smalltalk-like ones, and many are implemented in Tcl itself (or at least were initially, as proofs of concept). Furthermore, because code in Tcl is first class, it is very simple to write functional-language primitives that play well with the logic of the language. An example is lmap:

lmap i {1 2 3 4 5} {
 expr $i*$i
}

which will return a list of squares, 1 4 9 16 25. You can write a map-like function based on a version of lambda (also developed in Tcl itself), but Tcl already has what you need to allow for a more natural style of functional programming than the Lisp way (which works well for Lisp, but maybe not for everything else). Note what happens when you try to add functional programming to a language that's too rigid: Python and the endless debate about its functional primitives.
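And indeed lmap itself can be written in a few lines of Tcl, using the uplevel command shown earlier plus upvar, which aliases a local name to a variable in the caller's frame (a minimal sketch, without the error handling a robust version would need):

proc lmap {varName vals body} {
    upvar 1 $varName v       ;# the loop variable lives in the caller's frame
    set res {}
    foreach v $vals {
        lappend res [uplevel 1 $body]
    }
    return $res
}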

Central data structure: the list

If you are a Lisp programmer you know how beautiful it is to have a flexible data structure like the list everywhere in your programs, especially when the literal is, in most cases, as simple as "foo bar 3 4 5 6".

Programmable programming language via uplevel

Via eval, uplevel, upvar and the very powerful introspection capabilities of Tcl, you can redefine the language and invent new ways of solving problems. For example, the following interesting command, if called as the first command in a procedure, will automagically turn it into a memoizing version of itself:

proc memoize {} {
 set cmd [info level -1]
 if {[info level] > 2 && [lindex [info level -2] 0] eq "memoize"} return
 if {![info exists ::Memo($cmd)]} {set ::Memo($cmd) [eval $cmd]}
 return -code return $::Memo($cmd)
}

Then, when you write a procedure, just write something like:

proc myMemoizingProcedure { ... } {
 memoize
 ... the rest of the code ...
} 

i18n just happens

Tcl is probably the language with the best internationalization support. Every string is internally encoded in UTF-8, and all string operations are Unicode-safe, including the regular-expression engine. Basically, in Tcl programs encodings are not a problem: they just work.

Radical language modifications = DSL

If you define a procedure called unknown, it is called with a Tcl list representing the arguments of every command Tcl tried to execute but failed, because the command name was not defined. You can do what you like with it: return a value, or raise an error. If you just return a value, the command will appear to work even though it is unknown to Tcl, and the value returned by unknown will be used as the return value of the undefined command. Add this to uplevel and upvar, and to a language that is almost syntax-free, and what you get is an impressive environment for developing domain-specific languages. Tcl has almost no syntax, like Lisp and FORTH, but there are different ways to have no syntax. Tcl looks like a configuration file by default:

disable ssl
validUsers jim barbara carmelo
hostname foobar {
 allow from 2:00 to 8:00
}

The above is a valid Tcl program, once you define the commands it uses: disable, validUsers, and hostname.
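One possible set of such definitions, as a sketch (the ::cfg array and all the semantics here are my own invention; a real program would store whatever it needs):

proc disable {feature} { set ::cfg($feature) off }
proc validUsers {args} { set ::cfg(users) $args }
proc hostname {name body} {
    set ::cfg(current) $name
    uplevel 1 $body          ;# run the block with allow visible inside it
}
proc allow {from start to end} {
    ;# called as: allow from 2:00 to 8:00 - "from" and "to" are just words
    lappend ::cfg(allow,$::cfg(current)) [list $start $end]
}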

Much more

Unfortunately there isn't room to show a lot of interesting features. Most Tcl commands just do one single thing well, with easy-to-remember names. String operations, introspection and other features are implemented as single commands with subcommands, for example string length, string range and so on. Every part of the language that takes indexes as arguments supports an end-num notation, so, for example, to take all the elements of a list except the first and the last you just write:

lrange $mylist 1 end-1
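The string subcommands mentioned above accept the same index notation:

puts [string length "hello world"]       ;# 11
puts [string range "hello world" 6 end]  ;# world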

And in general there is a lot of good design, and optimization for the common case, inside. Moreover, the Tcl source code is one of the best-written C programs you'll find, and the quality of the interpreter is amazing: commercial-grade in the best sense of the word. Another interesting thing about the implementation is that it works exactly the same in different environments, from Windows to Unix to Mac OS X. There is no quality difference among operating systems (yes, including Tk, the main GUI library of Tcl).

Conclusion

I don't claim everybody should like Tcl. What I claim is that Tcl is a powerful language and not a toy, and that it's possible to create a new Tcl-like language without most of Tcl's limitations but with all of its power. I tried this myself, and the Jim interpreter is the result: the code is there and working, and it can run most Tcl programs, but at some point I no longer had time to work on language development for free, so the project is now more or less abandoned. Another attempt to develop a Tcl-like language, Hecl, is currently in progress as a scripting language for Java applications; its author (David Welton) exploits the fact that the core Tcl implementation is small, and that the command-based design is simple to use as glue between the two languages (this is not typical of modern dynamic languages, though both ideas apply to Scheme too). I'll be very glad if, after reading this article, you no longer think of Tcl as a toy. Thank you. Salvatore.


P.S. Want to learn more about Tcl? Visit the Tclers Wiki.


Netflix Picks Up an Oscar Nomination for 'The Square' Documentary



URL:http://thenextweb.com/media/2014/01/16/netflix-picks-oscar-nomination-square-documentary-egyptian-revolution/


After winning three Emmys last year and a Golden Globe earlier this month, Netflix has received further recognition from the TV and film industry with its first ever Oscar nomination.

The Square, an observational documentary focusing on protestors in the Egyptian Revolution, has been shortlisted in the Documentary Feature category. It will premiere on Netflix tomorrow (January 17), but has already won accolades at the Sundance and New York film festivals.

To be considered for such a prestigious award is a testament to the quality of Netflix's original content. TV shows such as Orange Is the New Black and House of Cards, the latter of which has won three Emmys and a Golden Globe, have proven that it can compete with established TV networks and studios.

To pick up its first Oscar, Netflix will need to beat its fellow nominees first: Dirty Wars, Cutie and the Boxer, The Act of Killing and 20 Feet from Stardom.

The Oscars

Photo credit: ROBERT SULLIVAN/AFP/Getty Images


Eero Saarinen's Bell Labs, Now Devoid of Life - Point of View - January 2014



URL:http://www.metropolismag.com/Point-of-View/January-2014/In-Photos-Eero-Saarinens-Bell-Labs/


The METROPOLIS Blog

Samuel Medina

Empty: Bell Labs in Holmdel, New Jersey, as it awaits redevelopment.

Photography by Rob Dobi

At its peak, thousands passed through its massive, light-filled atrium. Today, Bell Labs Holmdel stands empty, all of its 1.9 million square feet utterly without life. An iconic example of the now-disparaged office park, the campus in central New Jersey was shuttered in 2007 and vacated soon after. Years later, it remains in an abandoned, if not unkempt, state. The grounds are cared for, the floors swept clean, and the interior plantings trimmed, however haphazardly. (That's saying something; in the laboratory's heyday, plastic shrubbery filled its glorious central hall.)

For zombie fans, it isn't much of a stretch to imagine the luckless protagonists of The Walking Dead holed up here, fenced off from the rest of the world by six-story-high glass walls. (Alternatively, it would make a great lair for the Governor.) Of course, in such a scenario, it's plausible that the virus capable of raising the dead would have originated inside the lab itself. As is often noted, the building is as highly prized by scientists as it is by architects. It was here, in Saarinen's quarter-mile fortress, that some of the last century's most significant scientific discoveries were made.

And it's here that the building's new owner, Somerset Development, imagines a new urbanist temple to commerce. Plans are in place to revitalize the site as a town center for Holmdel, complete with urban amenities like shops and a coffee shop. But as Fred Bernstein wrote in last month's cover story, the building's uncompromising layout complicates Bell Labs' adaptive reuse. New York architect Alexander Gorlin is currently exploring strategies that will bring life back to the historic complex while still preserving Saarinen's graceful design. It will be interesting to see how he navigates that process.

In the meantime, take a tour through Rob Dobi's striking photographs of the building in its current state. It might be your last chance before the place is overrun with mocha-wielding teens.
