Channel: Hacker News 50

Dell Wasn't Joking About That 28-Inch Sub-$1000 4K Monitor; It's Only $699 - Forbes


Comments:"Dell Wasn't Joking About That 28-Inch Sub-$1000 4K Monitor; It's Only $699 - Forbes"

URL:http://www.forbes.com/sites/jasonevangelho/2014/01/07/dell-wasnt-joking-about-that-28-inch-sub-1000-4k-monitor-its-only-699/


Dell’s P2815Q Ultra HD Monitor

Last month Dell launched a pair of UltraSharp monitors boasting 4K resolution, and dangled a sweet carrot in front of our early-adopting paws: a forthcoming 28-inch Ultra HD monitor that would retail for less than $1000. Today at CES 2014 Dell revealed it, along with an aggressive price tag: $699.

I have a 1-on-1 with Dell tomorrow afternoon and this is now on my short list of products to test on the show floor. Until then, unfortunately, I only have a few new details to share.

The P2815Q packs an IPS LED display with a full 3840 x 2160 4K resolution. It launches globally on January 23. Dell hasn’t yet discussed things like refresh rate or range of inputs (I’m sure DisplayPort is a given), but they do promise the same “screen performance” as the new UltraSharp 32 and UltraSharp 24 Ultra HD monitors. That’s certainly encouraging, since their UltraSharp line is normally a cut above when it comes to professional displays.

The monitor will even include the ability to pivot to portrait mode as well as a range of adjustable viewing heights and angles. They’ll be selling accessories too, like a stereo sound bar and monitor arm.

My largest concern as a gamer will be whether the panel supports a native 60Hz refresh rate or higher via DisplayPort 1.2, since 4K gaming is a bit of a drag at 30Hz. Will this be a true 3840 x 2160 panel or two stitched together like the ASUS PQ321Q?

The tech industry at large is making a concerted push toward mainstream Ultra HD adoption this year from the hardware side of things (it’s still 5 years away), and content providers like Netflix will begin shooting shows like “House of Cards” in 4K. A wealth of readily available 4K content is an important driver, and price points like $699 certainly don’t hurt.

Continue the conversation by following me on Twitter



When Doctors ‘Google’ Their Patients

Dijkstra on Haskell and Java

$
0
0

Comments:"Dijkstra on Haskell and Java"

URL:http://chrisdone.com/posts/dijkstra-haskell-java


By Chris Done

In 2001, Edsger W. Dijkstra wrote a letter to the Budget Council of The University of Texas. A PDF is available here; I’ve typed it up so that everyone can read it. Sadly, the curriculum was changed to Java. Relatedly, the algorithmic language Scheme was replaced by Python in MIT’s The Structure and Interpretation of Computer Programs version 6.01.

To the members of the Budget Council

I write to you because of a rumor of efforts to replace in the introductory programming course of our undergraduate curriculum the functional programming language Haskell by the imperative language Java, and because I think that in this case the Budget Council has to take responsibility lest the decision be taken at the wrong level.

You see, it is no minor matter. Colleagues from outside the state (still!) often wonder how I can survive in a place like Austin, Texas, automatically assuming that Texas’s solid conservatism guarantees equally solid mediocrity. My usual answer is something like “Don’t worry. The CS Department is quite an enlightened place, for instance for introductory programming we introduce our freshmen to Haskell”; they react first almost with disbelief, and then with envy —usually it turns out that their undergraduate curriculum has not recovered from the transition from Pascal to something like C++ or Java.

A very practical reason for preferring functional programming in a freshman course is that most students already have a certain familiarity with imperative programming. Facing them with the novelty of functional programming immediately drives home the message that there is more to programming than they thought. And quickly they will observe that functional programming elegantly admits solutions that are very hard (or impossible) to formulate with the programming vehicle of their high school days.

A fundamental reason for the preference is that functional programs are much more readily appreciated as mathematical objects than imperative ones, so that you can teach what rigorous reasoning about programs amounts to. The additional advantage of functional programming with “lazy evaluation” is that it provides an environment that discourages operational reasoning.

Finally, in the specific comparison of Haskell versus Java, Haskell, though not perfect, is of a quality that is several orders of magnitude higher than Java, which is a mess (and needed an extensive advertizing campaign and aggressive salesmanship for its commercial acceptance). It is bad enough that, on the whole, industry accepts designs of well-identified lousiness as “de facto” standards. Personally I think that the University should keep the healthier alternatives alive.

It is not only the violin that shapes the violinist, we are all shaped by the tools we train ourselves to use, and in this respect programming languages have a devious influence: they shape our thinking habits. This circumstance makes the choice of first programming language so important. One would like to use the introductory course as a means of creating a culture that can serve as a basis for computing science curriculum, rather than be forced to start with a lot of unlearning (if that is possible at all: what has become our past, forever remains so). The choice implies a grave responsibility towards our undergraduate students, and that is why it can not be left to a random chairman of something but has to be done by the Budget Council. This is not something that can be left to the civil servants or the politicians; here statesmen are needed.

Austin, 12 April 2001

Edsger W. Dijkstra

© 2014-01-08 Chris Done <chrisdone@gmail.com>


FriendCode/codebox · GitHub


Comments:"FriendCode/codebox · GitHub"

URL:https://github.com/FriendCode/codebox


Codebox

"Open source cloud & desktop IDE."

Codebox is a complete and modular Cloud IDE. It can run on any unix-like machine (Linux, Mac OS X). It is an open source component of codebox.io (Cloud IDE as a Service).

The IDE can run on your desktop (Linux or Mac), on your server or the cloud. You can use the codebox.io service to host and manage IDE instances.

Codebox is built with web technologies: Node.js, JavaScript, HTML and LESS. The IDE has a very modular and extensible architecture that allows you to build your own features through add-ons. Codebox is the first open and modular IDE capable of running both on the desktop and in the cloud (with offline support).

Install

Install Codebox globally using NPM:

npm install -g codebox

Usage

Run Codebox with:

codebox run ./myworkspace

Get help and list of commands with:

codebox --help

Follow updates about Codebox on Twitter and Youtube.

The IDE's documentation can be found in the docs folder. Feel free to ask questions or report problems by opening issues.

Codebox accepts pull requests; please see the Contributing to Codebox guide for information on contributing to this project.

Extras

Screencast: A screencast of the IDE is available on Youtube.

License

The project is open source under the Apache 2.0 license.

alyssa frazee


Comments:"alyssa frazee"

URL:http://alyssafrazee.com/introducing-R.html


Thu 02 January 2014

My sister is a senior undergraduate majoring in sociology. She just landed an awesome analyst job for next semester and was told she'll be using some R in the course of her work. She asked me to show her the ropes during winter vacation, and of course I said yes! What better way to while away the days of a Minnesota winter?!1

One catch: the day we planned to work, it turned out we only had an hour of overlapping free time. YIKES!

Challenge accepted. One hour to introduce R to my sociologist sister. Here's what I did. I didn't prepare this in advance, and I'm absolutely sure I made mistakes, glossed over key ideas, and/or harped on about something that's not important. Feedback is absolutely welcome! (I'm genuinely interested in others' "R in an hour" ideas!)

(1) download R and RStudio

I'm impressed that RStudio is both accessible/helpful for beginners and useful for experts. Particularly for beginners: the point-and-click options are decent, and the Workspace panel is really useful for conceptualizing the R environment. I didn't even bother showing my sister the default R IDE -- I had her download RStudio right away. You still have to download plain old R though, and when we did this, I learned that r-project.org could really use a design overhaul, (a) because it's not very pretty and (b) downloading R is kind of confusing if you don't know what a "CRAN mirror" is.

(2) console and script

The first thing we did after getting set up was type two lines into the console:
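
(The two lines themselves didn't survive in this copy of the post; as a hedged stand-in -- my guess at the kind of thing meant, not the original code -- here is an assignment with = followed by an expression R evaluates and prints:)

myheight = 64      # assignment: store a value in a variable
myheight * 2.54    # evaluation: R prints the result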

It wasn't exactly "hello world", but it illustrated some concepts like "assignment" and "variables" and "evaluation"2.

The next thing I had my sister do was save those two lines of code in an R script. (I think it's important to teach beginners how to save code in a script right when they start using the language.) Then I taught her how to do Cmd - Enter to execute those lines in the console.

In the course of explaining this stuff, I learned that "console" and "script" are kind of jargon-y words, so I tried to give specific definitions for each of them. I also had to be careful to use those exact words rather than things like "REPL" or "prompt."

(3) comments

# COMMENTS ARE SUPER IMPORTANT so we learned about them

(4) graphics

Scripts and comments and consoles can get a little dry, so at this point, it was time for some fun with graphics! Here is the plot we made:

x = rnorm(1000, mean = 100, sd = 3)
hist(x)

Teaching my sister about this code involved explaining what a "function" is (since both rnorm and hist are functions) and explaining what "function arguments" are and why you can refer to them by name but don't have to.

I also showed her how to save a graphic - it's easy in RStudio thanks to the handy dandy "Export" button in the graphics window.

(5) getting help

I think "getting help" is the most important concept to go over during this kind of session. Obviously you're not going to learn everything in an hour, so what you really need are the tools to go find that information when you need it on the job. Here is the syntax I introduced:

# if you know the function name, but not how to use it:
?chisq.test
# if you know what you want to do, but don't know the function name:
??chisquare

Given that function docs aren't a terribly accessible thing for non-programmers, this might not have been the right tactic here. I considered stressing the importance of Googling skillz (the single most useful thing I've learned in grad school), or introducing StackOverflow or R-help, but settled on explaining the official doc system. I figured the answer to one of the most common beginner questions, "how do I do X in R?", would likely be "use the function Y()" -- so it's important to be able to figure out how Y() is used.

I think the other most common beginner question is "I got this error message, Z. How do I fix it?" To address this issue, I demoed some common errors (object not found, unexpected <X> constant, etc.) and explained what they meant.
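
(The post doesn't show the demo itself; a minimal sketch of lines that trigger the errors named above -- my reconstruction, not the author's -- could be:)

wagez         # a typo in an object name
# Error: object 'wagez' not found
mean(5 10)    # a missing comma between arguments
# Error: unexpected numeric constant in "mean(5 10"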

(6) data types

Looking at help files reminded me that the docs often specify that certain function arguments must have a specific type, so we should probably discuss types. I went over:

vectors

# character vector
> y = c("apple", "apple", "banana", "kiwi", "bear", "strawberry", "strawberry")
> length(y)
[1] 7

# numeric vector
> numbers = rep(3, 99)
> numbers
 [1] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
[39] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
[77] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3

matrices

> mymatrix = matrix(c(10, 15, 3, 29), nrow = 2, byrow = TRUE)
> mymatrix
     [,1] [,2]
[1,]   10   15
[2,]    3   29
> t(mymatrix)
     [,1] [,2]
[1,]   10    3
[2,]   15   29
> solve(mymatrix)
           [,1]        [,2]
[1,]  0.1183673 -0.06122449
[2,] -0.0122449  0.04081633
> mymatrix %*% solve(mymatrix)
     [,1] [,2]
[1,]    1    0
[2,]    0    1
> chisq.test(mymatrix)

	Pearson's Chi-squared test with Yates' continuity correction

data:  mymatrix
X-squared = 5.8385, df = 1, p-value = 0.01568

data frames (the mother of all R data types)

# set working directory
setwd("~/Documents/R_intro")
# read in a dataset
wages = read.table("wages.csv", sep = ",", header = TRUE)

So we had discussed some data types with examples and learned some important stuff along the way, like how to find the number of elements in a vector, what a working directory is, and how to read in a data file.

(7) exploratory data analysis

Once you load in a dataset, things start to get fun. We learned a whole bunch of stuff from this data frame, like how to do basic tabulations and calculate summary statistics, how to figure out if you have missing data, and how to fit a simple linear model. This part was pretty fun because my sister started leading the session: instead of me saying "I'm going to show you how to do this," it was her asking "Hey, could we make a scatterplot?" or "Do you think we could put the best-fit line on that plot?" I was really glad this happened - I hope it meant she was engaged and enjoying herself!

> names(wages)
[1] "edlevel" "south"   "sex"     "workyr"  "union"   "wage"    "age"
[8] "race"    "marital"
> class(wages$marital)
[1] "integer"
> table(wages$union)
not union member     union member
             438               96
> summary(wages$workyr)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   0.00    8.00   15.00   17.82   26.00   55.00
> nrow(wages)
[1] 534
> length(which(is.na(wages$sex)))
[1] 0
> linmod = lm(workyr ~ age, data = wages)
> summary(linmod)

We also learned some more about graphics, like how to make good histograms and how to create scatterplots with superimposed regression lines:

hist(wages$wage, xlab = "hourly wage", main = "wages in our dataset", col = "purple")
plot(wages$age, wages$workyr, xlab = "age", ylab = "years worked", main = "age vs. years worked")
abline(lm(wages$workyr ~ wages$age), col = "red", lwd = 2)

Aaaaand, time's up.

What did I miss? What could have gone better? The things that occurred to me afterward were:

  • subsetting with [. This is CLUTCH. It applies to all the data types I introduced and it's super useful. I wish I would have had time to ask my sister to make a histogram of wages for, say, only females (see the sketch after this list).

  • programming-type things: loops, if statements, user-defined functions, etc. I'm okay with leaving this stuff out -- I taught R here as a data analysis environment rather than a programming language, given my audience.

  • saving .rda files and/or your workspace

  • installing and loading packages

  • other data types (e.g., lists)

  • other (better?) resources/tips/tricks for getting help
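
As a sketch of the subsetting idea from the first bullet: assuming the sex column of the wages data frame is coded with the string "female" (the post never shows its actual coding), a females-only wage histogram might look like this.

femwages = wages[wages$sex == "female", ]   # keep only the rows where sex is "female"
hist(femwages$wage, xlab = "hourly wage", main = "wages: females only", col = "purple")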

Final thoughts

Overall I had fun introducing R in an hour, and I think (hope?) my sister did too. I sent her off with some other resources: this, this, and this, none of which I'm terribly familiar with - but I know you need a lot more than an hourlong session from me to be able to analyze real data with R. I think I covered most of the basics, and my sister said it was pretty helpful. I'd love to hear how you'd approach the "R in an hour for a non-programmer" challenge!

footnotes

1. It's seriously cold here, even for Minnesota. The temperature has been hovering around 0 for about a month now. On Monday, the high is -12. Fahrenheit. I don't even.

2. You may have noticed that I use = for assignment and that I have now passed that habit on to my sister. I've thought about this, and I stand by it. I think <- is a waste of keystrokes and I've only found it useful when I'm assigning something inside a system.time call.
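
(To make that system.time point concrete -- an illustration of mine, not from the post: inside a function call, = is parsed as a named argument rather than an assignment, so <- is the only way to both time an expression and keep its result.)

system.time(result <- rnorm(1e7))    # times the computation and keeps the result
# system.time(result = rnorm(1e7))   # fails: 'result' is treated as an argument name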

A Short Talk about Richard Feynman


Comments:"A Short Talk about Richard Feynman"

URL:http://www.stephenwolfram.com/publications/short-talk-about-richard-feynman/


Boston Public Library, April 2005
in connection with the publication of his Collected Letters

I first met Richard Feynman when I was 18, and he was 60. And over the course of ten years, I think I got to know him fairly well. First when I was in the physics group at Caltech. And then later when we both consulted for a once-thriving Boston company called Thinking Machines Corporation.

I actually don't think I've ever talked about Feynman in public before. And there's really so much to say, I'm not sure where to start.

But if there's one moment that summarizes Richard Feynman and my relationship with him, perhaps it's this.

It was probably 1982. I'd been at Feynman's house, and our conversation had turned to some kind of unpleasant situation that was going on. I was about to leave. And Feynman stops me and says: "You know, you and I are very lucky. Because whatever else is going on, we've always got our physics."

Feynman loved doing physics. I think what he loved most was the process of it. Of calculating. Of figuring things out.

It didn't seem to matter to him so much if what came out was big and important. Or esoteric and weird. What mattered to him was the process of finding it. And he was often quite competitive about it.

Some scientists (myself probably included) are driven by the ambition to build grand intellectual edifices. I think Feynman—at least in the years I knew him—was much more driven by the pure pleasure of actually doing the science. He seemed to like best to spend his time figuring things out, and calculating.

And he was a great calculator. All around perhaps the best human calculator there's ever been.

Here's a page from my files: quintessential Feynman. Calculating a Feynman diagram.

It's kind of interesting to look at. His style was always very much the same. He always just used regular calculus and things. Essentially nineteenth-century mathematics. He never trusted much else.

But wherever one could go with that, Feynman could go. Like no one else.

I always found it incredible. He would start with some problem, and fill up pages with calculations. And at the end of it, he would actually get the right answer!

But he usually wasn't satisfied with that. Once he'd got the answer, he'd go back and try to figure out why it was obvious.

And often he'd come up with one of those classic Feynman straightforward-sounding explanations. And he'd never tell people about all the calculations behind it. Sometimes it was kind of a game for him: having people be flabbergasted by his seemingly instant physical intuition. Not knowing that really it was based on some long, hard calculation he'd done.

He always had a fantastic formal intuition about the innards of his calculations. Knowing what kind of result some integral should have, whether some special case should matter, and so on.

And he was always trying to sharpen his intuition.

You know, I remember a time—it must have been the summer of 1985—when I'd just discovered a thing called rule 30. That's probably my own all-time favorite scientific discovery. And that's what launched a lot of the whole new kind of science that I've spent 20 years building. [See A New Kind of Science, page 27.]

Well, Feynman and I were both visiting Boston, and we'd spent much of an afternoon talking about rule 30. About how it manages to go from that little black square at the top to make all this complicated stuff. And about what that means for physics and so on. [See A New Kind of Science, page 30.]

Well, we'd just been crawling around the floor—with help from some other people—trying to use meter rules to measure some feature of a giant printout of it. And Feynman takes me aside, rather conspiratorially, and says: "Look, I just want to ask you one thing: how did you know rule 30 would do all this crazy stuff?" "You know me," I said. "I didn't. I just had a computer try all the possible rules. And I found it." "Ah," he said, "now I feel much better. I was worried you had some way to figure it out."

Feynman and I talked a bunch more about rule 30. He really wanted to get an intuition for how it worked. He tried bashing it with all his usual tools. Like he tried to work out what the slope of the line between order and chaos is. And he calculated. Using all his usual calculus and so on. He and his son Carl even spent a bunch of time trying to crack rule 30 using a computer.

And one day he calls me and says: "OK, Wolfram, I can't crack it. I think you're onto something." Which was very encouraging.

Feynman and I tried to work together on a bunch of things over the years. On quantum computers before anyone had ever heard of those. On trying to make a chip that would generate perfect physical randomness—or eventually showing that that wasn't possible. On whether all the computation needed to evaluate Feynman diagrams really was necessary. On whether it was a coincidence or not that there's an e^(-Ht) in statistical mechanics and an e^(-iHt) in quantum mechanics. On what the simplest essential phenomenon of quantum mechanics really is.

I remember often when we were both consulting for Thinking Machines in Boston, Feynman would say, "Let's hide away and do some physics." This was a typical scenario. Yes, I think we thought nobody was noticing that we were off at the back of a press conference about a new computer system talking about the nonlinear sigma model. Typically, Feynman would do some calculation. With me continually protesting that we should just go and use a computer. Eventually I'd do that. Then I'd get some results. And he'd get some results. And then we'd have an argument about whose intuition about the results was better.

I should say, by the way, that it wasn't that Feynman didn't like computers. He even had gone to some trouble to get an early Commodore PET personal computer. And enjoyed doing things with it.

And in 1979, when I started working on the forerunner of what would become Mathematica, he was very interested. We talked a lot about how it should work. He was keen to explain his methodologies for solving problems: for doing integrals, for notation, for organizing his work. I even managed to get him a little interested in the problem of language design. Though I don't think there's anything directly from Feynman that has survived in Mathematica. But his favorite integrals we can certainly do.

You know, it was sometimes a bit of a liability having Feynman involved. Like when I was working on SMP—the forerunner of Mathematica—I organized some seminars by people who'd worked on other systems. And Feynman used to come. And one day a chap from a well-known computer science department came to speak. I think he was a little tired. And he ended up giving what was admittedly not a good talk. Which degenerated at some point into essentially telling puns about the name of the system they'd built.

Well, Feynman got more and more annoyed. And eventually stood up and gave a whole speech about how "If this is what computer science is about, it's all nonsense...." I think the chap who gave the talk thought I'd put Feynman up to this. And has hated me for 25 years....

You know, in many ways, Feynman was a loner. Other than for social reasons, he really didn't like to work with other people. And he was mostly interested in his own work. He didn't read or listen too much; he wanted the pleasure of doing things himself.

He did use to come to physics seminars, though. Although he had rather a habit of using them as problem-solving exercises.

And he wasn't always incredibly sensitive to the speakers. In fact, there was a period of time when I organized the theoretical physics seminars at Caltech. And he often egged me on to compete to find fatal flaws in what the speakers were saying. Which led to some very unfortunate incidents. But also led to some interesting science.

One thing about Feynman is that he went to some trouble to arrange his life so that he wasn't particularly busy—and so he could just work on what he felt like. Usually he had a good supply of problems. Though sometimes his long-time assistant would say: "You should go and talk to him. Or he's going to start working on trying to decode Mayan hieroglyphs again."

He always cultivated an air of irresponsibility. Though I would say more towards institutions than people.

And I was certainly very grateful that he spent considerable time trying to give me advice—even if I was not always great at taking it.

One of the things he often said was that "peace of mind is the most important prerequisite for creative work." And he thought one should do everything one could to achieve that. And he thought that meant, among other things, that one should always stay away from anything worldly, like management.

Feynman himself, of course, spent his life in academia—though I think he found most academics rather dull. And I don't think he liked their standard view of the outside world very much.

And he himself often preferred more unusual folk.

Quite often he'd introduce me to the odd characters who'd visit him.

I remember once we ended up having dinner with the rather charismatic founder of a semi-cult called EST. It was a curious dinner. And afterwards, Feynman and I talked for hours about leadership. About leaders like Robert Oppenheimer. And Brigham Young. He was fascinated—and mystified—by what it is that lets great leaders lead people to do incredible things. He wanted to get an intuition for that.

You know, it's funny. For all Feynman's independence, he was surprisingly diligent. I remember once he was preparing some fairly minor conference talk. He was quite concerned about it. I said, "You're a great speaker; what are you worrying about?" He said, "Yes, everyone thinks I'm a great speaker. So that means they expect more from me."

And in fact, sometimes it was those throwaway conference talks that have ended up being some of Feynman's most popular pieces. On nanotechnology. Or foundations of quantum theory. Or other things.

You know, Feynman spent most of his life working on prominent current problems in physics. But he was a confident problem solver. And occasionally he would venture outside. Bringing his "one can solve any problem just by thinking about it" attitude with him.

It did have some limits, though. I think he never really believed it applied to human affairs, for example. Like when we were both consulting for Thinking Machines in Boston, I would always be jumping up and down about how if the management of the company didn't do this or that, they would fail. He would just say: "Why don't you let these people run their company; we can't figure out this kind of stuff." Sadly, the company did in the end fail. But that's another story.

Well, there's lots more I could say about Feynman. But let me stop here. It's really fun to read his letters. I'd seen a small sampling before. But together they make a fascinating portrait of a terrific man. And it was nice of him to write such nice things about me.



'Strings Attached' Co-Author Offers Solutions for Education - WSJ.com


Comments:"'Strings Attached' Co-Author Offers Solutions for Education - WSJ.com"

URL:http://online.wsj.com/news/articles/SB10001424052702304213904579095303368899132


Sept. 27, 2013 7:17 p.m. ET

I had a teacher once who called his students "idiots" when they screwed up. He was our orchestra conductor, a fierce Ukrainian immigrant named Jerry Kupchynsky, and when someone played out of tune, he would stop the entire group to yell, "Who eez deaf in first violins!?" He made us rehearse until our fingers almost bled. He corrected our wayward hands and arms by poking at us with a pencil.

Today, he'd be fired. But when he died a few years ago, he was celebrated: Forty years' worth of former students and colleagues flew back to my New Jersey hometown from every corner of the country, old instruments in tow, to play a concert in his memory. I was among them, toting my long-neglected viola. When the curtain rose on our concert that day, we had formed a symphony orchestra the size of the New York Philharmonic.

I was stunned by the outpouring for the gruff old teacher we knew as Mr. K. But I was equally struck by the success of his former students. Some were musicians, but most had distinguished themselves in other fields, like law, academia and medicine. Research tells us that there is a positive correlation between music education and academic achievement. But that alone didn't explain the belated surge of gratitude for a teacher who basically tortured us through adolescence.

We're in the midst of a national wave of self-recrimination over the U.S. education system. Every day there is hand-wringing over our students falling behind the rest of the world. Fifteen-year-olds in the U.S. trail students in 12 other nations in science and 17 in math, bested by their counterparts not just in Asia but in Finland, Estonia and the Netherlands, too. An entire industry of books and consultants has grown up that capitalizes on our collective fear that American education is inadequate and asks what American educators are doing wrong.

I would ask a different question. What did Mr. K do right? What can we learn from a teacher whose methods fly in the face of everything we think we know about education today, but who was undeniably effective?

As it turns out, quite a lot. Comparing Mr. K's methods with the latest findings in fields from music to math to medicine leads to a single, startling conclusion: It's time to revive old-fashioned education. Not just traditional but old-fashioned in the sense that so many of us knew as kids, with strict discipline and unyielding demands. Because here's the thing: It works.

Now I'm not calling for abuse; I'd be the first to complain if a teacher called my kids names. But the latest evidence backs up my modest proposal. Studies have now shown, among other things, the benefits of moderate childhood stress; how praise kills kids' self-esteem; and why grit is a better predictor of success than SAT scores.

All of which flies in the face of the kinder, gentler philosophy that has dominated American education over the past few decades. The conventional wisdom holds that teachers are supposed to tease knowledge out of students, rather than pound it into their heads. Projects and collaborative learning are applauded; traditional methods like lecturing and memorization—derided as "drill and kill"—are frowned upon, dismissed as a surefire way to suck young minds dry of creativity and motivation.

But the conventional wisdom is wrong. And the following eight principles—a manifesto if you will, a battle cry inspired by my old teacher and buttressed by new research—explain why.

1. A little pain is good for you.

Psychologist K. Anders Ericsson gained fame for his research showing that true expertise requires about 10,000 hours of practice, a notion popularized by Malcolm Gladwell in his book "Outliers." But an often-overlooked finding from the same study is equally important: True expertise requires teachers who give "constructive, even painful, feedback," as Dr. Ericsson put it in a 2007 Harvard Business Review article. He assessed research on top performers in fields ranging from violin performance to surgery to computer programming to chess. And he found that all of them "deliberately picked unsentimental coaches who would challenge them and drive them to higher levels of performance."

2. Drill, baby, drill.

Rote learning, long discredited, is now recognized as one reason that children whose families come from India (where memorization is still prized) are creaming their peers in the National Spelling Bee Championship. This cultural difference also helps to explain why students in China (and Chinese families in the U.S.) are better at math. Meanwhile, American students struggle with complex math problems because, as research makes abundantly clear, they lack fluency in basic addition and subtraction—and few of them were made to memorize their times tables.

William Klemm of Texas A&M University argues that the U.S. needs to reverse the bias against memorization. Even the U.S. Department of Education raised alarm bells, chastising American schools in a 2008 report that bemoaned the lack of math fluency (a notion it mentioned no fewer than 17 times). It concluded that schools need to embrace the dreaded "drill and practice."

3. Failure is an option.

Kids who understand that failure is a necessary aspect of learning actually perform better. In a 2012 study, 111 French sixth-graders were given anagram problems that were too difficult for them to solve. One group was then told that failure and trying again are part of the learning process. On subsequent tests, those children consistently outperformed their peers.

The fear, of course, is that failure will traumatize our kids, sapping them of self-esteem. Wrong again. In a 2006 study, a Bowling Green State University graduate student followed 31 Ohio band students who were required to audition for placement and found that even students who placed lowest "did not decrease in their motivation and self-esteem in the long term." The study concluded that educators need "not be as concerned about the negative effects" of picking winners and losers.

4. Strict is better than nice.

What makes a teacher successful? To find out, starting in 2005 a team of researchers led by Claremont Graduate University education professor Mary Poplin spent five years observing 31 of the most highly effective teachers (measured by student test scores) in the worst schools of Los Angeles, in neighborhoods like South Central and Watts. Their No. 1 finding: "They were strict," she says. "None of us expected that."

The researchers had assumed that the most effective teachers would lead students to knowledge through collaborative learning and discussion. Instead, they found disciplinarians who relied on traditional methods of explicit instruction, like lectures. "The core belief of these teachers was, 'Every student in my room is underperforming based on their potential, and it's my job to do something about it—and I can do something about it,'" says Prof. Poplin.

She reported her findings in a lengthy academic paper. But she says that a fourth-grader summarized her conclusions much more succinctly this way: "When I was in first grade and second grade and third grade, when I cried my teachers coddled me. When I got to Mrs. T's room, she told me to suck it up and get to work. I think she's right. I need to work harder."

5. Creativity can be learned.

The rap on traditional education is that it kills children's creativity. But Temple University psychology professor Robert W. Weisberg's research suggests just the opposite. Prof. Weisberg has studied creative geniuses including Thomas Edison, Frank Lloyd Wright and Picasso—and has concluded that there is no such thing as a born genius. Most creative giants work ferociously hard and, through a series of incremental steps, achieve things that appear (to the outside world) like epiphanies and breakthroughs.

Prof. Weisberg analyzed Picasso's 1937 masterpiece Guernica, for instance, which was painted after the Spanish city was bombed by the Germans. The painting is considered a fresh and original concept, but Prof. Weisberg found instead that it was closely related to several of Picasso's earlier works and drew upon his study of paintings by Goya and then-prevalent Communist Party imagery. The bottom line, Prof. Weisberg told me, is that creativity goes back in many ways to the basics. "You have to immerse yourself in a discipline before you create in that discipline. It is built on a foundation of learning the discipline, which is what your music teacher was requiring of you."

6. Grit trumps talent.

In recent years, University of Pennsylvania psychology professor Angela Duckworth has studied spelling bee champs, Ivy League undergrads and cadets at the U.S. Military Academy in West Point, N.Y.—all together, over 2,800 subjects. In all of them, she found that grit—defined as passion and perseverance for long-term goals—is the best predictor of success. In fact, grit is usually unrelated or even negatively correlated with talent.

Prof. Duckworth, who started her career as a public school math teacher and just won a 2013 MacArthur "genius grant," developed a "Grit Scale" that asks people to rate themselves on a dozen statements, like "I finish whatever I begin" and "I become interested in new pursuits every few months." When she applied the scale to incoming West Point cadets, she found that those who scored higher were less likely to drop out of the school's notoriously brutal summer boot camp known as "Beast Barracks." West Point's own measure—an index that includes SAT scores, class rank, leadership and physical aptitude—wasn't able to predict retention.

Prof. Duckworth believes that grit can be taught. One surprisingly simple factor, she says, is optimism—the belief among both teachers and students that they have the ability to change and thus to improve. In a 2009 study of newly minted teachers, she rated each for optimism (as measured by a questionnaire) before the school year began. At the end of the year, the students whose teachers were optimists had made greater academic gains.

7. Praise makes you weak…

My old teacher Mr. K seldom praised us. His highest compliment was "not bad." It turns out he was onto something. Stanford psychology professor Carol Dweck has found that 10-year-olds praised for being "smart" became less confident. But kids told that they were "hard workers" became more confident and better performers.

"The whole point of intelligence praise is to boost confidence and motivation, but both were gone in a flash," wrote Prof. Dweck in a 2007 article in the journal Educational Leadership. "If success meant they were smart, then struggling meant they were not."

8. …while stress makes you strong.

A 2011 University at Buffalo study found that a moderate amount of stress in childhood promotes resilience. Psychology professor Mark D. Seery gave healthy undergraduates a stress assessment based on their exposure to 37 different kinds of significant negative events, such as death or illness of a family member. Then he plunged their hands into ice water. The students who had experienced a moderate number of stressful events actually felt less pain than those who had experienced no stress at all.

"Having this history of dealing with these negative things leads people to be more likely to have a propensity for general resilience," Prof. Seery told me. "They are better equipped to deal with even mundane, everyday stressors."

Prof. Seery's findings build on research by University of Nebraska psychologist Richard Dienstbier, who pioneered the concept of "toughness"—the idea that dealing with even routine stresses makes you stronger. How would you define routine stresses? "Mundane things, like having a hardass kind of teacher," Prof. Seery says.

My tough old teacher Mr. K could have written the book on any one of these principles. Admittedly, individually, these are forbidding precepts: cold, unyielding, and kind of scary.

But collectively, they convey something very different: confidence. At their core is the belief, the faith really, in students' ability to do better. There is something to be said about a teacher who is demanding and tough not because he thinks students will never learn but because he is so absolutely certain that they will.

Decades later, Mr. K's former students finally figured it out, too. "He taught us discipline," explained a violinist who went on to become an Ivy League-trained doctor. "Self-motivation," added a tech executive who once played the cello. "Resilience," said a professional cellist. "He taught us how to fail—and how to pick ourselves up again."

Clearly, Mr. K's methods aren't for everyone. But you can't argue with his results. And that's a lesson we can all learn from.

Ms. Lipman is co-author, with Melanie Kupchynsky, of "Strings Attached: One Tough Teacher and the Gift of Great Expectations," to be published by Hyperion on Oct. 1. She is a former deputy managing editor of The Wall Street Journal and former editor-in-chief of Condé Nast Portfolio.

France-UAE satellite deal shaky after US spy tech discovered onboard


Comments:"France-UAE satellite deal shaky after US spy tech discovered onboard"

URL:http://www.spacewar.com/reports/France_UAE_satellite_deal_shaky_after_US_spy_tech_discovered_onboard_999.html


The sale of two intelligence satellites to the UAE by France for nearly a billion dollars could go south after they were found to contain American technology designed to intercept data transmitted to the ground station.

The equipment, costing 3.4 billion dirhams ($930 million), comprises two high-resolution Pleiades-type Falcon Eye military intelligence satellites, which a top UAE defense source has said contain specific US-made components designed to intercept the satellites' communications with their accompanying ground station, Defensenews.com said in a report.

"The discovery [of the US-made components] was reported to the [office of the] deputy supreme commander [Sheikh Mohamm ed Bin Zayed] in September," an unnamed defense source said. "We have requested the French to change these components and also consulted with the Russian and Chinese firms."

"If this issue is not resolved, the UAE is willing to scrap the whole deal," said the source, adding that the incident has seen an increase in talks with Moscow, which - along with Beijing - has also been a frequent defense tech supplier to the Gulf state.

However, it is not clear whether the US equipment can be taken off the French satellites.

The satellites come courtesy of prime contractor Airbus Defence and Space and payload manufacturer Thales Alenia, neither of whom could be reached for comment.

The system, comprising satellites and a ground station, will require 20 trained engineers to operate. Under the July 22 deal, signed by Sheikh Mohammed, Crown Prince of Abu Dhabi and deputy supreme commander of the armed forces, and French Defense Minister Jean-Yves Le Drian, delivery of the satellites and the ground station was to be made sometime in 2018.

A total of 11 international bidders had been competing in the Falcon Eye race for more than a decade to ship their technologies to the UAE, which in late 2012 announced that it had chosen to go with the French and the Americans.

According to the source, the French won because of the filters which their rival Americans imposed on the use of the equipment - a policy dubbed "shutter control." The US government restricts sale of commercial high resolution satellite images from spacecraft it licenses, if they are deemed a threat to its national security.

One French defense specialist found it surprising that France had had US spy technology onboard its equipment, especially when France's use of the Pleiades surveillance system is considered to be of critical importance to its national security.

According to Defensenews.com, UAE's threats to call off the deal are seen by some commentators as a way to secure a better bargain from the French, because "the satellites would be part of a big package deal... it's not surprising the UAE drives a hard bargain. They're using it as a layer of power."

The unnamed defense specialist referred to the possibility that the Emirates may wish to drive down the price of other equipment, such as the Dassault Aviation Rafale fighter jet.

Source: Voice of Russia

Amusing ourselves to death | On the Path of Knowledge


Comments:"Amusing ourselves to death | On the Path of Knowledge"

URL:http://onthepathofknowledge.wordpress.com/2014/01/03/amusing-ourselves-to-death/


This is perhaps one of the most striking passages I have read in a while. It describes the modern world with startling accuracy. In our fear of increasingly authoritarian rule, we have allowed a far more dangerous vision to come true: heedlessness.

Below is the foreword of Neil Postman’s book “Amusing Ourselves to Death: Public Discourse in the Age of Show Business“, accompanied by a comic illustration of the two ideas. It gives a concise comparison of the two authors’ views and what they foresaw society would become. But perhaps the remarkable part of this whole passage lies beyond its lines, with us:

Most of us will read this and continue living our life exactly the same way as before

…wake up

————————————————————–

We were keeping our eye on 1984. When the year came and the prophecy didn’t, thoughtful Americans sang softly in praise of themselves. The roots of liberal democracy had held. Wherever else the terror had happened, we, at least, had not been visited by Orwellian nightmares. But we had forgotten that alongside Orwell’s dark vision, there was another – slightly older, slightly less well known, equally chilling: Aldous Huxley’s Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think. What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions”. In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us. This book is about the possibility that Huxley, not Orwell, was right [Neil Postman - Amusing ourselves to death]

[The comic is Stuart McMillen’s interpretation of media theorist Neil Postman’s book Amusing Ourselves to Death (1985), subtitled “Public Discourse in the Age of Show Business”]


Hire by Auditions, Not Resumes - Matt Mullenweg - Harvard Business Review


Comments:"Hire by Auditions, Not Resumes - Matt Mullenweg - Harvard Business Review"

URL:http://blogs.hbr.org/2014/01/hire-by-auditions-not-resumes/


by Matt Mullenweg  |   2:00 PM January 7, 2014

Automattic employs 225 people. We’re located all over the world, in 190 different cities. We have a headquarters in San Francisco, and it operates much like a co-working space. Those who live in the Bay Area can work from the office if they’d like. But in general, the majority of our employees work somewhere other than our home base.

To us, this arrangement makes sense — we work in open source software, which is a decentralized product. Outsiders were dubious. “That works great until you’re at 10 or 15 people, but when you get to 30, it falls apart,” they’d say. Then we passed 30 people, and we started hearing that the magic number was 100 people. Then people said Dunbar’s number — 150 — would be the point at which it didn’t work. But we keep blowing past these thresholds and will hire 120 new people this year. But we probably won’t do it the way most companies do.

It all starts with the way we think about work.

In a lot of businesses, if someone shows up in the morning and he isn’t drunk, he doesn’t sleep at his desk, and he’s dressed nicely, it’s assumed that he’s working. But none of that takes into account what he’s actually created during the day. Many people create great things without living up to those norms. We measure work based on outputs. I don’t care what hours you work. I don’t care if you sleep late, or if you pick a child up from school in the afternoon. It’s all about your output.

This arrangement isn’t for everyone. But a lot of people like the autonomy we offer, and that’s important. So we’ve arrived at an unorthodox system that serves our needs perfectly.

We hire all of our employees on a contract basis, and all go through a trial. They can do the work at night or over the weekend, so they don’t have to leave their current job in the meantime. We pay a standard rate of $25 per hour, regardless of whether you’re applying to be an engineer or the chief financial officer.

During the trials, we give the applicants actual work.  If you’re applying to work in customer support, you’ll answer tickets. If you’re an engineer, you’ll work on engineering problems. If you’re a designer, you’ll design.

There’s nothing like being in the trenches with someone, working with them day by day. It tells you something you can’t learn from resumes, interviews, or reference checks. At the end of the trial, everyone involved has a great sense of whether they want to work together going forward. And, yes, that means everyone — it’s a mutual tryout. Some people decide we’re not the right fit for them.

Overall, we end up hiring about 40% of the people who try out with us. It’s a huge time commitment, coordinating the short-term work being done by job applicants, but it leads to extremely low turnover. In the past 8 years, we’ve had maybe 10 people leave the company, and another 25 or 30 we’ve let go. So it’s a system we plan to keep utilizing.

Today, I spend at least a third of my time on hiring. And even though it’s a small part of our process, I still look at every resume the company receives and do the final interview for everyone who joins. Nothing has the impact of putting the right people around the table. The aphorism is true: You can’t manage your way out of a bad team. We’ve done experiments to find the best way to hire based on our unique structure; your business can do the same.

This post is adapted from a talk by the author at the December 2013 Lean Startup Conference. 


Digital Ocean vs. Linode


Comments:"Digital Ocean vs. Linode"

URL:http://blog.schneidmaster.com/digital-ocean-vs-linode/


After two years as a Linode customer, I've just finished switching and migrating the sites/apps I manage over to Digital Ocean. While both services provide fantastic offerings and I'd recommend either, I found Digital Ocean to be more modern and flexible, with better features for the cost. I decided to write up a brief rundown of the differences and how I made my decision.

Where Linode Wins

Linode has earned and maintained my trust over the past two years, while Digital Ocean is relatively new. I can definitely say this: Linode has rock-solid uptime and performance, and I never had any complaints in that regard. Linode has also done a pretty good job of increasing the specs of their offerings (possibly in response to Digital Ocean and other VPS upstarts); in April, they functionally doubled the RAM and disk space of each plan tier at the same price.

Why Digital Ocean Rocks

  • Price: Digital Ocean's VPS offerings are essentially half the cost of the equivalent Linode. The most basic Linode plan offers 1GB RAM, 8 CPU, 48GB storage, and 2TB transfer for $20/mo; the comparable Digital Ocean tier offers 1GB RAM, 1 CPU, 30GB storage, and 2TB transfer for $10/mo. The CPU difference may be relevant for some applications, but for my use cases, it doesn't make a noticeable difference. Additionally, Digital Ocean offers a basic $5/mo plan, which I find ideal when I need to spin up a quick dev server for a temporary project. Finally, all of Digital Ocean's servers run on SSDs, which can seriously decrease read/write times for database-intensive applications and APIs.
  • Billing: Linode bills on a flat monthly basis. Digital Ocean actually bills on an hourly basis, capped at the monthly rate. This is ideal because I like to spin up a fresh environment for apps I'm developing and beta testing, to keep the environment consistent with what's out in the wild; with Digital Ocean, I can easily create and kill VPSs ("droplets" in the Digital Ocean vernacular) without having to pay the full monthly rate for each one. That kind of flexibility is both rare and awesome.
  • Development and Management: Digital Ocean offers some seriously awesome management features for developing with droplets. You can spin up a new droplet in 55 seconds, and install either a clean version of a Linux server distro (Ubuntu, CentOS, Debian, Arch, Fedora) or a prebuilt distro containing an application, from full stacks like LAMP and Rails to development tools like GitLab and Docker to blogging applications like Ghost and Wordpress. You can also take an image or backup of an existing droplet and use it to spin up a new droplet, making it easy to clone droplets or environments.
  • Look and Feel: Digital Ocean and Linode offer similar management panels for managing instances, but I find Digital Ocean's to be much more modern and appealing. The design is clean and easy to navigate.
  • API: Digital Ocean offers a full-featured API that provides all of the functionality of the control panel: creating, resizing, and deleting droplets, managing images and snapshots, and more. This API has been used by a handful of fairly awesome 3rd-party management apps; I'm particularly fond of DigitalOcean Manager for the iPhone.

To conclude, while Linode is a solid VPS provider, Digital Ocean really kicks VPS service up to the next level, with a modern interface, a wide and useful set of features, and rock-bottom competitive prices. I plan to host my projects on Digital Ocean in the future and would definitely recommend it to anyone in the VPS market.

Facebook Buys Bangalore Based App Monitoring Co Little Eye Labs - MediaNama


Comments:"Facebook Buys Bangalore Based App Monitoring Co Little Eye Labs - MediaNama"

URL:http://www.medianama.com/2014/01/223-facebook-little-eye-labs/


Bangalore-based mobile app performance monitoring company Little Eye Labs has become Facebook’s first acquisition in India, the company has announced. The entire Little Eye Labs team will move to Facebook’s headquarters in Menlo Park, California. Techcrunch has put the acquisition amount at between $10-15 million. Business Standard had first reported this deal last month.

Little Eye Labs was founded last year by five former IBM employees, backed by Rajesh Sawhney’s GSF and VenturEast Tenet Fund, and released its first official version in early April last year. It was a part of GSF’s first batch of 15 startups.

The company has developed a tool that measures the amount of resources an app consumes on smartphones, allowing you to measure the performance of the phone’s processor, battery usage and bandwidth consumption, as well as correlate performance with a simultaneous play of the video of the app’s usage. According to WSJ, the company would charge $50 per month or $500 per year from customers.


Following the acquisition by Facebook, current customers of Little Eye Labs will be offered a free version of Little Eye Labs till June 30th 2014. It’s likely that the team will then work on building Facebook’s mobile analytics: “From there, we’ll be able to leverage Facebook’s world-class infrastructure and help improve performance of their already awesome apps. For us, this is an opportunity to make an impact on the more than 1 billion people who use Facebook,” the company has said.


Out of this world first light images emerge from Gemini Planet Imager


Comments:"Out of this world first light images emerge from Gemini Planet Imager "

URL:https://www.llnl.gov/news/newsreleases/2014/Jan/NR-14-01-01.html



GPI team during the first light run in November 2013. Left to right: Pascale Hibon (Gemini), Stephen Goodsell (Gemini), Markus Hartung (Gemini), Fredrik Rantakyro (Gemini), Jeffrey Chilcote (UCLA), Jennifer Dunn (HIA), Sandrine Thomas (NASA Ames), Bruce Macintosh (LLNL), Dave Palmer (LLNL), Dmitry Savransky (LLNL), Marshall Perrin (STScI), Naru Sadakuni (Gemini); not shown: Andrew Cardwell, Carlos Quiroz, Leslie Saddlemyer. Photo by Jeffrey Chilcote/UCLA.

After nearly a decade of development, construction and testing, the world's most advanced instrument for directly imaging and analyzing planets orbiting around other stars is pointing skyward and collecting light from distant worlds.

"Even these early first-light images are almost a factor of 10 better than the previous generation of instruments. In one minute, we were seeing planets that used to take us an hour to detect," says Bruce Macintosh of Lawrence Livermore National Laboratory, who led the team who built the instrument.

For the past decade, Lawrence Livermore has been leading a multi-institutional team in the design, engineering, building and optimization of the instrument, called the Gemini Planet Imager (GPI), which will be used for high-contrast imaging to better study faint planets or dusty disks next to bright stars. Astronomers -- including a team at LLNL -- have made direct images of a handful of extrasolar planets by adapting astronomical cameras built for other purposes. GPI is the first fully optimized planet imager, designed from the ground up for exoplanet imaging and deployed on one of the world's biggest telescopes, the 8-meter Gemini South telescope in Chile.

Probing the environments of distant stars in a search for planets has required the development of next-generation, high-contrast adaptive optics (AO) systems, in which Livermore is a leader. These systems are sometimes referred to as extreme AO.

Macintosh said direct imaging of planets is challenging because planets such as Jupiter are a billion times fainter than their parent stars. "Detection of the youngest and brightest planets is barely within reach of today's AO systems," he said. "To see other solar systems, we need new tools."

And those new tools are installed in the Gemini Planet Imager with the most advanced AO system in the world. In addition to leading the whole project, LLNL also was responsible for the AO system. Designed to be the world's "most sophisticated" astronomical system for compensating for turbulence in the Earth's atmosphere -- an ongoing problem for ground-based telescopes -- the system senses atmospheric turbulence and corrects it with a 2-centimeter-square deformable mirror with 4,000 actuators. This deformable mirror is made of etched silicon, similar to microchips, rather than the large reflective glass mirrors used on other AO systems. This allows GPI to be compact and stable. The new mirror corrects for atmospheric distortions by adjusting its shape 1,000 times per second with accuracy better than 1 nanometer. Together with the other parts of GPI, this lets astronomers directly image extrasolar planets that are 1 million to 10 million times fainter than their host stars.

GPI carried out its first observations in November 2013 - during an extremely smooth debut for an extraordinarily complex astronomical instrument the size of a small car. "The GPI team's huge amount of high quality work has begun to pay off and now holds the promise of many years of important science to come," said LLNL Project Manager David Palmer.

For GPI's first observations, it targeted previously known planetary systems - the 4-planet HR8799 system (co-discovered by an LLNL-led team at the Gemini and Keck Observatory in 2008) and the Beta Pictoris system, among others. GPI has obtained the first-ever spectrum of the very young planet Beta Pictoris b.

The first-light team also used the instrument's unique polarization mode - tuned to look at starlight scattered by tiny particles - in order to study a ring of dust orbiting the very young star HR4796. With previous instruments, only the edges of this dust ring (which may be the debris remaining from planet formation) could be seen. GPI can follow the entire circumference of the ring. The images were released today at the 223rd meeting of the American Astronomical Society in Washington D.C., Jan. 5-9.

"GPI's performance requirements are extremely challenging," explained LLNL engineer Lisa Poyneer, who developed the algorithms used to correct for atmospheric turbulence, and led the testing of the adaptive optics system in the laboratory and at the telescope. "As a result, the AO system features several original technologies that were designed specifically for exoplanet science. After years of development and testing, it is very rewarding to see the AO system operating so well and enabling these remarkable images."

Imaging exoplanets is highly complementary to other exoplanet success stories like NASA's Kepler mission. Kepler is extremely sensitive to small planets close to their parent star and focuses on mature stars - GPI detects infrared radiation from young Jupiter-like objects in wide orbits, the equivalent of the giant planets in our solar system not long after their formation.

"GPI represents a critical step in the road toward understanding how planetary systems form and develop," said Dmitry Savransky, an LLNL postdoc who worked on the integration and testing of GPI before moving to a position at Cornell. "While broad survey missions, such as Kepler, have revealed the variety of planets that exist in our galaxy, GPI will allow us to study a few dozen planets in exquisite detail."

GPI is an international project led by LLNL under Gemini's supervision, with Macintosh serving as principal investigator and LLNL engineer Palmer as project manager. In addition to Macintosh, Palmer and Poyneer, the LLNL team consisted of Brian Bauman, Scot Olivier and former employees Dmitry Savransky, Steve Jones, Christian Marois, Quinn Konopacky, Gary Sommargren and Julia Evans.

I, Cringely . The Pulpit . The $200 Billion Rip-Off | PBS

Comments:"I, Cringely . The Pulpit . The $200 Billion Rip-Off | PBS"

URL:http://www.pbs.org/cringely/pulpit/2007/pulpit_20070810_002683.html?ref


This is part three of my explanation of how America went from having the fastest and cheapest Internet service in the world to what we have today -- not very fast, not very cheap Internet service that is hurting our ability to compete economically with the rest of the world. Part one detailed expected improvements in U.S. broadband based on emerging competitive factors, yet decried that it was too little too late. Part two explained how U.S. broadband ISPs are different from most overseas ISPs and how those differences make it unlikely that we'll ever regain leadership in this space. And this week's final part explains that this all came about because Americans were deceived and defrauded by many of their telephone companies to the tune of $200 billion -- money that was supposed to have gone to pay for a broadband future we don't -- and never will -- have.

I feel qualified to write about this because, for a short time, I was right in the middle of it. A key term here is video dial tone, which referred in the mid-1990s to the provision of video-on-demand and cable TV-equivalent service by U.S. telephone companies at (bidirectional!) speeds up to 45 megabits per second over fiber and hybrid fiber-coax networks. Much of the publicity back then was generated by an outfit called Tele-TV, which was a video partnership of Nynex, Bell Atlantic, and Pacific Bell. Howard Stringer, former president of CBS and current CEO of Sony, ran Tele-TV, which had some ambitious plans to deploy video service to millions of homes. The company twice asked vendors to submit proposals to build set-top boxes for this ambitious network. In those days I designed set-top boxes and in the case of both Tele-TV bids, my designs came in at the lowest price, first under the name of my own start-up and then under the Fujitsu brand after Tele-TV urged me to find a big manufacturing partner. I can't claim to have actually WON either time, though, because not a single box was ever built or paid for and Tele-TV went out of business without ever actually having been IN business. At the time I had no idea what was going on, but today I know and now so will you.

The National Information Infrastructure as codified in the Telecommunications Act of 1996 existed on two levels -- federal and state. As a federal law, the Act specified certain data services that were to be made available to schools, libraries, hospitals, and public safety agencies and paid for through special surcharges and some tax credits. Looking solely at the Federal side of the story, the so-called Information Superhighway still doesn't appear to have been a success, but it wasn't a criminal failure. Many schools and libraries were wired at considerable expense though the health care and public safety components never amounted to much.

It is on the state level where one can find the greatest excesses of the Telecommunications Act. All 50 U.S. states and the District of Columbia contracted with their local telecommunication utilities for the build-out of fiber and hybrid fiber-coax networks intended to bring bidirectional digital video service to millions of homes by the year 2000. The Telecom Act set the mandate but, as it works with phone companies, the details were left to the states. Fifty-one plans were laid and 51 plans failed.

Failure is not foreign to the information technology business. Big development projects fail all the time and I have written several times about this and how those failures come to be and how they can be avoided. But I find it hard to remember any company or industry segment ever going zero for 51. This is a failure rate so amazing that any statistician would question the motives of those even entering such an endeavor. Did they actually expect to succeed? Or did they actually expect to fail? We may never know and it probably doesn't even matter, but one thing is sure: they expected to be paid and they were.

Over the decade from 1994 to 2004 the major telephone companies profited from higher phone rates paid by all of us, accelerated depreciation on their networks, and direct tax credits averaging $2,000 per subscriber, for which the companies delivered precisely nothing in terms of service to customers. That's $200 billion with nothing to show for it.

In a Federal Communications Commission (FCC) report from 1994 there were requests from U.S. telephone companies to provide video dial-tone service at unprecedented levels. Bell Atlantic (now part of Verizon) wanted to install service to 3.5 million homes in its service area. Nynex (now also a part of Verizon) requested permission to install service to 400,000 homes. Pacific Bell (now part of AT&T) wanted to install service to 1.3 million homes. Ameritech (now part of AT&T) wanted to install service to 1.2 million homes. GTE (now part of Verizon) wanted to install service to 1.1 million homes.

Note that these applications were all prior to the Telecommunications Act of 1996 being passed, so they were covered under the prior 1934 Act. And by 1995 most of these applications had been withdrawn by the telephone companies, though the FCC oddly continued to act as though the applications were still valid.

What went wrong? First there were technical problems. Bidirectional 45-megabit-per-second service was going to be harder to install and more expensive than expected, though oddly more than one Regional Bell Operating Company tried to demur because of stated fears that the proposed technology would become obsolete, not that it was too advanced.

Then there were regulatory problems as the FCC tried to control deployment centrally while states and cities tended to view video dial tone as just another cable company to be taxed and regulated. Bell Atlantic switched its plans to MMDS (Multichannel Multipoint Distribution Service -- so-called "wireless cable") as did GTE, but MMDS suffered from interference by trees and was never fully reliable, though some of that spectrum is today being redeployed for WiMax networks.

When the 1996 Act was finally passed, though, the idea of video dial tone had been converted to a justification for deploying ADSL. Where telephone companies had been promising EITHER 45 mbps bidirectional service OR at least the ability to carry HDTV (nominally 20 mbps), suddenly it was an acceptable alternative to substitute ADSL, which for most users would be limited to 1.5 mbps downstream and 128 kbps upstream, which isn't today considered adequate for any video service of higher quality than YouTube.

This could all be credited to technology misadventure and forgotten if it weren't for the money. The telcos played games with state utility commissions, cutting deals with the states to deploy new technologies in exchange for "incentives," which were new charges and new ways of charging customers. One typical ploy was to offer to freeze basic telephone rates for a period of years (typically five) then deploy a bunch of new services, which would be sold on an a la carte basis. The problem with this is that it applied analog economics to what were now digital services. The cost of providing digital services is always going DOWN, not up, so the telcos that might have been forced to cut rates instead offered to freeze them, locking in an effective multiyear rate increase.

It is an ugly story of greed and poor regulation that you can read in excruciating detail in a 406-page e-book that is among this week's links.

The RBOCs cut heads, cut spending, cut construction, increased depreciation rates, failed to deliver promised services, increased telephone bills, and had booming profits as a result. Then each mega-merger brought with it new contortions that inevitably led to poorer service and higher charges. Twenty-two percent of telco equipment, for example, SIMPLY DISAPPEARED. Penalties for missing service goals were often folded into merger payments, so instead of paying the states a penalty for not doing what they had promised to, the companies paid themselves.

As just a small example of the way the phone companies took advantage of ineffectual regulation, they charged an average of $1 per month per customer to run Bellcore, the research organization set up to replace Bell Labs after the 1983 split up of AT&T. But when Bellcore was later sold and the profits from that sale distributed to the telephone companies, not to the customers, ALL BUT ONE RBOC CONTINUED THE $1 CHARGE DESPITE THE FACT THAT IT NO LONGER DIRECTLY SUPPORTED ANYTHING.

There are no good guys in this story. Misguided and incompetent regulation combined with utilities that found ways to game the system resulted in what had been the best communication system in the world becoming just so-so, though very profitable. We as consumers were consistently sold ideas that were impractical only to have those be replaced later by less-ambitious technologies that, in turn, were still under-delivered. Congress set mandates then provided little or no oversight. The FCC was (and probably still is) managed for the benefit of the companies and their lobbyists, not for you and me. And the upshot is that I could move to Japan and pay $14 per month for 100-megabit-per-second Internet service but I can't do that here and will probably never be able to.

Despite this, the FCC says America has the highest broadband deployment rate in the world and President Bush has set a goal of having broadband available to every U.S. home by the end of this year. What have these guys been smoking? Nothing, actually, they simply redefined "broadband" as any Internet service with a download speed of 200 kilobits per second or better. That's less than one percent the target speed set in 1994 that we were supposed to have achieved by 2000 under regulations that still remain in place.

silentbicycle/greatest · GitHub

Comments:"silentbicycle/greatest · GitHub"

URL:https://github.com/silentbicycle/greatest


A unit testing system for C, contained in 1 file. It doesn't use dynamic allocation or depend on anything beyond ANSI C89, and the test scaffolding should build without warnings under -Wall -pedantic.

To use, just #include greatest.h in your project.

Note that there are some compile-time options, and slightly nicer syntax for parametric testing (running tests with arguments) is available if compiled with -std=c99; a short parametric sketch appears after the basic example below.

Also, I wrote a blog post with more information.

Basic Usage

$ cat simple.c
#include "greatest.h"

TEST x_should_equal_1() {
    int x = 1;
    ASSERT_EQ(1, x);                              /* default message */
    ASSERT_EQm("yikes, x doesn't equal 1", 1, x); /* custom message */
    PASS();
}

SUITE(the_suite) {
    RUN_TEST(x_should_equal_1);
}

/* Add definitions that need to be in the test runner's main file. */
GREATEST_MAIN_DEFS();

int main(int argc, char **argv) {
    GREATEST_MAIN_BEGIN();      /* command-line arguments, initialization. */
    RUN_SUITE(the_suite);
    GREATEST_MAIN_END();        /* display results */
}

$ make simple && ./simple
cc -g -Wall -Werror -pedantic simple.c -o simple

* Suite the_suite:
.
1 tests - 1 pass, 0 fail, 0 skipped (5 ticks, 0.000 sec)

Total: 1 tests (47 ticks, 0.000 sec)
Pass: 1, fail: 0, skip: 0.

(For more examples, look at example.c and example-suite.c.)
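Since the parametric mode mentioned above is easy to miss, here is a minimal sketch of what a parameterized test can look like when building with -std=c99. It assumes greatest's variadic RUN_TESTp runner macro; the exact macro names and signatures for your version are worth double-checking against greatest.h and example.c.

#include "greatest.h"

/* One test body, exercised with several arguments. */
TEST squaring_is_nonnegative(int x) {
    ASSERT(x * x >= 0);
    PASS();
}

SUITE(parametric_suite) {
    int i;
    for (i = -2; i <= 2; i++) {
        RUN_TESTp(squaring_is_nonnegative, i);  /* C99 variadic runner (assumed) */
    }
}

GREATEST_MAIN_DEFS();

int main(int argc, char **argv) {
    GREATEST_MAIN_BEGIN();
    RUN_SUITE(parametric_suite);
    GREATEST_MAIN_END();
}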

Command Line Options

Test runners built with greatest accept the following command-line options:

Usage: (test_runner) [-hlfv] [-s SUITE] [-t TEST]
 -h print this Help
 -l List suites and their tests, then exit
 -f Stop runner after first failure
 -v Verbose output
 -s SUITE only run suite w/ name containing SUITE substring
 -t TEST only run test w/ name containing TEST substring

If you want to run multiple test suites in parallel, look at parade.

CentOS Governance

Comments:"CentOS Governance"

URL:http://www.centos.org/about/governance/


The CentOS Project governance structure has two main tiers:

  • The Governing Board, a group of 8 to 11 people, responsible for overall oversight of the CentOS Project
  • Special Interest Groups (SIGs), teams within the community that focus on either enabling a technology solution as an add-on to the core CentOS release, or building and maintaining a functional aspect of the Project, such as infrastructure or documentation.

The Governing Board is like a greenhouse, providing support for starting and maturing a SIG the way a greenhouse uses sunlight, water, nutrients, and soil to turn seeds into fruiting plants.

The CentOS Governing Board

The focus of the Governing Board is to assist and guide in the progress and development of the various SIGs, as well as to lead and promote CentOS.

The CentOS Governing Board is the governing body responsible for the overall oversight of the CentOS Project and SIGs, the creation of new SIGs, and the election (and re-election) of new board members. The Board also has the responsibility to ensure the goals, brands, and marks of the CentOS Project and community are protected. The Board serves as the final authority within the CentOS Project.

Current Sitting Board

The initial CentOS Governing Board will be made up of members of the CentOS Project, many of whom have been around since the creation of the Project, as well as new members from Red Hat who were instrumental in bringing the new relationship together.

The CentOS Governing Board is:

More information

Cambridge, MA City Council adopts resolution to commemorate thirty years of GNU — Free Software Foundation — working together for free software

Comments:"Cambridge, MA City Council adopts resolution to commemorate thirty years of GNU — Free Software Foundation — working together for free software"

URL:https://www.fsf.org/blogs/community/cambridge-ma-city-council-adopts-resolution-to-commemorate-thirty-years-of-gnu


On September 27, 1983, a computer scientist named Richard Stallman announced the plan to develop a free software Unix-like operating system called GNU, for "GNU's Not Unix." GNU is the only operating system developed specifically for the sake of users' freedom. Today, the GNU system includes not only a fully free operating system, but a universe of software that serves a vast array of functions, from word processing to advanced scientific data manipulation, and everything in between.

To commemorate this occasion, the Cambridge City Council issued a statement in support of GNU and software freedom. All nine councilors, including Mayor Davis, signed resolution R-29, which reads:

"The City of Cambridge has long been a hub of innovation, ideas and most importantly, action; and ... Celebrating such a momentous occasion [as the 30th anniversary of the GNU System] at the Massachusetts Institute of Technology seems fitting, as [it] was conceived by Richard Stallman at the school in 1983 in an effort to rekindle the collaborative computing spirit that once dominated the software industry; and Stallman sought to achieve this goal through the development of a complete free software system, upward-compatible with Unix, that empowers users to look beyond proprietary software by providing them with four specific freedoms: to run the program as they wish, to copy and distribute the program, to change the program as they wish with full access to the source code and to strengthen the program going forward by distributing improved versions ... [It is resolved] that the City Council go on record congratulating Richard Stallman, the leader of GNU and the free software movement, on the occasion of GNU's 30th anniversary celebratory hackathon at the Massachusetts Institute of Technology on September 28 and 29, 2013."

On December 30, we asked our supporters to give $30 to commemorate thirty years of the GNU System. We're excited about the future of GNU and want to do more. If you haven't already, please help us take GNU into the next thirty years. Your contribution will help us meet our $450,000 annual fundraising goal.
