
Einstein Was Right: Space-Time Smooth, Not Foamy | Space.com


Comments:" Einstein Was Right: Space-Time Smooth, Not Foamy | Space.com "

URL:http://www.space.com/19202-einstein-space-time-smooth.html


Space-time is smooth rather than foamy, a new study suggests, scoring a possible victory for Einstein over some quantum theorists who came after him.

In his general theory of relativity, Einstein described space-time as fundamentally smooth, warping only under the strain of energy and matter. Some quantum-theory interpretations disagree, however, viewing space-time as being composed of a froth of minute particles that constantly pop into and out of existence.

It appears Albert Einstein may have been right yet again.

A team of researchers came to this conclusion after tracing the long journey three photons took through intergalactic space. The photons were blasted out by an intense explosion known as a gamma-ray burst about 7 billion light-years from Earth. They finally barreled into the detectors of NASA's Fermi Gamma-ray Space Telescope in May 2009, arriving just a millisecond apart.

Their dead-heat finish strongly supports the Einsteinian view of space-time, researchers said. The wavelengths of gamma-ray burst photons are so small that they should be able to interact with the even tinier "bubbles" in the quantum theorists' proposed space-time foam.

If this foam indeed exists, the three photons should have been knocked around a bit during their epic voyage. In such a scenario, the chances of all three reaching the Fermi telescope at virtually the same time are very low, researchers said.

So the new study is a strike against the foam's existence as currently imagined, though not a death blow.

"If foaminess exists at all, we think it must be at a scale far smaller than the Planck length, indicating that other physics might be involved," study leader Robert Nemiroff, of Michigan Technological University, said in a statement. (The Planck length is an almost inconceivably short distance, about one trillionth of a trillionth the diameter of a hydrogen atom.)

"There is a possibility of a statistical fluke, or that space-time foam interacts with light differently than we imagined," added Nemiroff, who presented the results Wednesday (Jan. 9) at the 221st meeting of the American Astronomical Society in Long Beach, Calif.

If the study holds up, the implications are big, researchers said.

"If future gamma-ray bursts confirm this, we will have learned something very fundamental about our universe," Bradley Schaefer of Louisiana State University said in statement.




Safari is released to the world


Comments:"Safari is released to the world"

URL:http://donmelton.com/2013/01/10/safari-is-released-to-the-world/


During the early development of Safari, I didn’t just worry about leaking our secret project through Apple’s IP address or our browser’s user agent string. It also concerned me that curious gawkers on the outside would notice who I was hiring at Apple.

Other than a bit part in a documentary about Netscape that aired on PBS, I wasn’t known to anyone but a few dozen other geeks in The Valley. Of course, several of those folks were aware I was now at Apple and working on some project I wouldn’t say anything about. And it doesn’t take many people in this town to snowball a bit of idle speculation.

I found out later that Andy Hertzfeld, an Apple veteran who I worked with at Eazel, had figured it all out by the time I showed up for my first day to work on the browser on June 25, 2001. Andy was very insightful that way. But thankfully he was also quiet about it at the time.

Hiring Darin Adler, also ex-Apple and ex-Eazel, in the Spring of 2002 was likely visible to others in the industry since he was much better known than me. But because Darin had never worked on a dedicated Web browser like I had, no one made the connection.

However, when I hired Dave Hyatt in July 2002, then guesses started flying fast.

While at Netscape, Dave built the Chimera (now known as Camino) browser for Mac OS X and co-created the project that would later become Firefox. Both of these applications were based on the Mozilla Gecko layout engine on which Dave also worked. He was a true celebrity in the Web browser world, having his hands in just about every Mozilla project.

So, during the Summer of 2002, several bloggers and tech websites speculated that Dave must be bringing Chimera to the Mac. Except that Chimera was already a Mac application and didn’t need to be ported. So what the hell was Dave doing at Apple? Building another Gecko-based Mac browser? No one knew. And none of this made much sense. Which is probably why the rumors subsided so quickly.

But people would remember all of this when Safari debuted at Macworld in San Francisco on January 7, 2003. And at least one of them would remember it at full volume while Steve Jobs was on stage making that announcement.

Until I watched that video I found and posted of the Macworld keynote, I had completely forgotten what else was announced that day. Which is pretty sad considering I saw Steve rehearse the whole thing at least four times.

But you have to realize I was totally focused on Safari. And Scott Forstall, my boss, wanted me at those rehearsals in case something went wrong with it.

There’s nothing that can fill your underwear faster than seeing your product fail during a Steve Jobs demo.

One of my concerns at the time was network reliability. So, I brought Ken Kocienda, the first Safari engineer, with me to troubleshoot since he wrote so much of our networking code. If necessary, Ken could also diagnose and duct tape any other part of Safari too. He coined one of our team aphorisms, “If it doesn’t fit, you’re not shoving hard enough.”

Ken and I started at Apple on the same day, so, technically, he’s the only original Safari team member I didn’t hire. But because we both worked at Eazel together, I knew that Ken was a world-class propeller-head and insisted Forstall assign him to my team — essentially a requirement for me taking the job.

Most of the time during those rehearsals, Ken and I had nothing to do except sit in the then empty audience and watch The Master Presenter at work — crafting his keynote. What a privilege to be a spectator during that process. At Apple, we were actually all students, not just spectators. When I see other companies clumsily announce products these days, I realize again how much the rest of the world lost now that Steve is gone.

At one rehearsal, Safari hung during Steve’s demo — unable to load any content. Before my pants could load any of their own, Ken discovered the entire network connection had failed. Nothing we could do. The IT folks fixed the problem quickly and set up a redundant system. But I still worried that it might happen again when it really mattered.

On the day of the actual keynote, only a few of us from the Safari team were in the audience. Employee passes are always limited at these events for obvious reasons. But we did have great seats, just a few rows from the front — you didn’t want to be too close in case something really went wrong.

Steve started the Safari presentation with, “So, buckle up.” And that’s what I wished I could do then — seatbelt myself down. Then he defined one of our product goals as, “Speed. Speed.” So, I tensed up. Not that I didn’t agree, of course. I just knew what was coming soon:

Demo time.

And for the entire six minutes and 32 seconds that Steve used Safari on stage, I don’t remember taking a single breath. I was thinking about that network failure during rehearsal and screaming inside my head, “Stay online, stay online!” We only had one chance to make a first impression.

Of course, Steve, Safari and the network performed flawlessly. I shouldn’t have worried.

Then it was back to slides and Steve talking about how we built it. “We based Safari on an HTML rendering engine that is open source.” And right then is when everybody else remembered all those rumors from the Summer about Dave Hyatt bringing Chimera to Apple.

But I chose the engine we used — with my team’s and my management chain’s support, of course — a year before Dave joined the project. Dave thought it was a great decision too, once he arrived. But that engine wasn’t Gecko, the code inside Chimera.

It was KHTML. Specifically KHTML and KJS — the code inside KDE’s Konqueror Web browser on Linux. After the keynote was over, I sent this email to the KDE team to thank them and introduce ourselves. I did it right from where I was sitting too, once they turned the WiFi back on.

You can argue whether KHTML was the right decision — go ahead, after 10 years it doesn’t faze me anymore. I’ll detail my reasons in a later post. Spoiler alert: I don’t hate Gecko.

But back to Steve’s presentation.

Everyone was clapping that Apple embraced open source. Happy, happy, happy. And they were just certain what was coming next. Then Steve moved a new slide onto the screen. With only one word, “KHTML” — six-foot-high white letters on a blue background.

If you listen to that video I posted, notice that no one applauds here. Why? I’m guessing confusion and complete lack of recognition.

What you also can’t hear on the video is someone about 15 to 20 rows behind where we were sitting — obviously expecting the word “Gecko” up there — shout at what seemed like the top of his lungs:

“WHAT THE FUCK!?”

KHTML may have been a bigger surprise than Apple doing a browser at all. And that moment was glorious. We had punk’d the entire crowd.

CES 2013: Monoprice Announces 27-Inch 2560x1440 Monitor for $390 - Tested


Comments:"CES 2013: Monoprice Announces 27-Inch 2560x1440 Monitor for $390 - Tested"

URL:http://www.tested.com/tech/pcs/452766-monoprice-announces-27-inch-2560x1440-monitor-390/


The Samsungs, Sharps and Sonys of CES love to show off the best cutting-edge technology they have to offer. And cutting-edge technology is invariably expensive. 4K TVs ruled the show this year, but you won't see one in a store in 2013 for less than $12,000. Then there are vendors like Monoprice, who show up to CES with products that are A) Affordable and B) Worth using now, not five years in the future. The Internet's favorite source for cheap audio/video cables announced a new 27-inch monitor at CES that will compete with those increasingly popular Korean imports like the Yamakasi Catleap.

We've covered budget Korean monitors in-depth before--basically, some smart eBay shopping can net you a 27-inch 2560x1440 IPS monitor (with the same LG panels that Apple puts into its monitors) for $300 or $400. They're barebones, but the screens are beautiful. A few US retailers like Micro Center have started offering their own alternatives at similar bargain prices. Monoprice's new Crystal Pro comes in at $390.

Why buy from Monoprice instead of importing from Korea? It's all about the warranty. Monoprice offers a five-dead-pixel return policy, which isn't bad, and the monitor comes with a general three-year warranty. If one of the Korean monitors breaks down, you're basically out of luck. Monoprice is a bit easier to reach.

The $390 monitor is back-ordered at Monoprice but should ship on March 2nd.

zmusic-ng - ZX2C4 Music web application that serves and transcodes tagged music libraries using Flask and Backbone.js.


Comments:"zmusic-ng - ZX2C4 Music web application that serves and transcodes tagged music libraries using Flask and Backbone.js."

URL:http://git.zx2c4.com/zmusic-ng/about/


ZX2C4 Music provides a web interface for playing and downloading music files using metadata.

Features

  • HTML5 <audio> support.
  • Transcodes unsupported formats on the fly.
  • Serves zip files of chosen songs on the fly.
  • Supports multiple formats: mp3, aac, ogg, webm, flac, musepack, wav, wma, and more.
  • Clean minimalistic design.
  • Handles very large directory trees.
  • Full metadata extraction.
  • Advanced search queries.
  • Statistics logging.
  • Simple RESTful JSON API design.
  • Integration with nginx's X-Accel-Redirect.
  • Supports multiple different database backends.
  • Can run stand-alone or with uwsgi/nginx.

Dependencies

Frontend

All frontend dependencies are included in the source.

Backend

All backend dependencies must be present on system.

Downloading

The source may be downloaded using git:

$ git clone http://git.zx2c4.com/zmusic-ng/

Building

The entire project is built using standard makefiles:

zmusic-ng $ make

The makefiles have the usual targets, such as clean and all, and some others discussed below. If the environment variable DEBUG is set to 1, js and css files are not minified.

Want to run the app immediately? Skip down to Running Standalone.

URLs and APIs

Frontend

GET /

The frontend interface supports query strings for controlling the initial state of the application. These query strings will be replaced with proper HTML5 pushState in the near future. Accepted query string keys:

  • username and password: Automatically log in using provided credentials.
  • query: Initial search query. If unset, chooses a search at random from predefined (see below) list.
  • play: Integer (1-indexed) specifying which song in the list to autoplay. No song autoplays if not set.
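For example, a hypothetical URL combining these keys (the host and values are illustrative):

http://127.0.0.1:5000/?query=mingus&play=2

This would start with a search for "mingus" and autoplay the second song in the list.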

Backend

The provided frontend uses the following API calls. Third parties might implement their own applications using this simple API. All API endpoints return JSON unless otherwise specified.

GET /query/<search query>

Queries server for music metadata. Each word is matched as a substring against the artist, album, and title of each song. Prefixes of artist:, album:, and title: can be used to match exact strings against the respective keys. * can be used as a wildcard when matches are exact. Posix shell-style quoting and escaping is honored. Example searches:

  • charles ming
  • changes mingus
  • artist:"Charles Mingus"
  • artist:charles*
  • artist:charles* album:"Changes Two"
  • goodbye pork artist:"Charles Mingus"

Requires logged in user. The query strings offset and limit may be used to limit the number of returned entries.
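As a sketch of how the query endpoint and its paging strings fit together (host and credentials are placeholders, using the one-off authentication described under Authentication below):

zmusic-ng $ curl 'http://127.0.0.1:5000/query/artist:charles*?username=MUSIC_USER&password=MUSIC_PASSWORD&offset=0&limit=10'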

GET /song/<id>.<ext>

Returns the data in the music file specified by <id> in the format given by <ext>. <ext> may be the original format of the file, or mp3, ogg, webm, or wav. If a format is requested that is not the song's original, the server will transcode it. Use of original formats is thus preferred, to cut down on server load and to enable seeking using HTTP Content-Range.

Requires logged in user. The server will add the X-Content-Duration HTTP header containing the duration in seconds as a floating point number.
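For instance, assuming a song with id 42 originally stored as mp3, the first hypothetical request below fetches the original and the second asks the server to transcode it to ogg:

zmusic-ng $ curl -O 'http://127.0.0.1:5000/song/42.mp3?username=MUSIC_USER&password=MUSIC_PASSWORD'
zmusic-ng $ curl -O 'http://127.0.0.1:5000/song/42.ogg?username=MUSIC_USER&password=MUSIC_PASSWORD'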

POST /login

Takes form parameters username and password. Returns whether or not login was successful.

GET /login

Returns whether or not user is currently logged in.

GET /logout

Logs user out. This request lacks proper CSRF protection. Requires logged in user.

GET /scan

Scans the library for new songs and extracts metadata. This request lacks proper CSRF protection. Requires logged in admin user.

GET /stats

Returns IP addresses and host names of all downloaders in time-order. Requires logged in admin user.

GET /stats/<ip>

Returns all downloads and song-plays from a given <ip> address in time-order. Requires logged in admin user.

Authentication

All end points that require a logged in user may use the cookie set by /login. Alternatively, the query strings username and password may be sent for a one-off authentication.
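A minimal sketch of the cookie flow with curl (host and credentials are placeholders):

zmusic-ng $ curl -c cookies.txt -d 'username=MUSIC_USER&password=MUSIC_PASSWORD' http://127.0.0.1:5000/login
zmusic-ng $ curl -b cookies.txt 'http://127.0.0.1:5000/query/mingus'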

Configuration

Frontend

The frontend should be relatively straightforward to customize. Change the title of the project in frontend/index.html, and change the default randomly selected search queries in frontend/js/app.js. A more comprehensible configuration system might be implemented at some point, but these two tweaks are easy enough that it will suffice for now.

Backend

The backend is configured by modifying the entries in backend/app.cfg. Valid configuration options are:

  • SQLAlchemy keys: The keys listed on the Flask-SQLAlchemy configuration page, with SQLALCHEMY_DATABASE_URI being of particular note.
  • Flask keys: The keys listed on the Flask configuration page. Be sure to change SECRET_KEY and set DEBUG to False for deployment.
  • STATIC_PATH: The relative path of the frontend directory from the backend directory. The way this package is shipped, the default value of ../frontend is best.
  • MUSIC_PATH: The path of a directory tree containing music files you'd like to be served.
  • ACCEL_STATIC_PREFIX: By default False, but if set to a path, this path is used as a prefix for fetching static files via nginx's X-Accel-Redirect. nginx must be configured correctly for this to work.
  • ACCEL_MUSIC_PREFIX: By default False, but if set to a path, this path is used as a prefix for fetching music files via nginx's X-Accel-Redirect. nginx must be configured correctly for this to work.
  • MUSIC_USER and MUSIC_PASSWORD: The username and password of the user allowed to listen to music.
  • ADMIN_USER and ADMIN_PASSWORD: The username and password of the user allowed to scan MUSIC_PATH for new music and view logs and statistics.
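Pulling these keys together, a hypothetical backend/app.cfg might look like the following (Flask config files use Python syntax; every value here is an example, not a shipped default):

SQLALCHEMY_DATABASE_URI = 'sqlite:///zmusic.db'
SECRET_KEY = 'change-me-before-deploying'
DEBUG = False
STATIC_PATH = '../frontend'
MUSIC_PATH = '/srv/music'
ACCEL_STATIC_PREFIX = False
ACCEL_MUSIC_PREFIX = False
MUSIC_USER = 'listener'
MUSIC_PASSWORD = 'music-secret'
ADMIN_USER = 'admin'
ADMIN_PASSWORD = 'admin-secret'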

Deployment

Running Standalone

By far the easiest way to run the application is standalone. Simply execute backend/local_server.py to start a local instance using the built-in Werkzeug server. This server is not meant for production. Be sure to configure the usernames and music directory first.

zmusic-ng $ backend/local_server.py
* Running on http://127.0.0.1:5000/
* Restarting with reloader

The collection may be scanned using the admin credentials:

zmusic-ng $ curl 'http://127.0.0.1:5000/scan?username=ADMIN_USER&password=ADMIN_PASSWORD'

And then the site may be viewed in the browser:

zmusic-ng $ chromium http://127.0.0.1:5000/

The built-in debugging server cannot handle concurrent requests, unfortunately. To serve standalone more robustly, read on to running standalone with uwsgi.

Running Standalone with uwsgi

The built-in Werkzeug server is really only for debugging, and cannot handle more than one request at a time. This means that, for example, one cannot listen to music and query for music at the same time. Fortunately, it is easy to use uwsgi without the more complicated nginx setup (described below) in a standalone mode:

zmusic-ng $ uwsgi --chdir backend/ -w zmusic:app --http-socket 0.0.0.0:5000

Depending on your distro, you may need to add --plugins python27 or similar.

Once the standalone server is running you can scan and browse using the URLs above.

Uploading Music

There is an additional makefile target called update-collection for uploading a local directory to a remote directory using rsync and running the metadata scanner using curl. Of course, uploading is not necessary if running locally.

zmusic-ng $ make update-collection

The server.cfg configuration file controls the relevant paths for this command:

  • SERVER: The hostname of the remote server.
  • LOCAL_COLLECTION_PATH: The path of the music folder on the local system.
  • SERVER_COLLECTION_PATH: The destination path of the music folder on the remote system.
  • ADMIN_USERNAME and ADMIN_PASSWORD should be the same as those set in backend/app.cfg.
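A hypothetical server.cfg sketch (the shipped file is authoritative for exact syntax; all values here are examples):

SERVER = music.example.com
LOCAL_COLLECTION_PATH = /home/me/music
SERVER_COLLECTION_PATH = /srv/music
ADMIN_USERNAME = admin
ADMIN_PASSWORD = admin-secret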

nginx / uwsgi

Deployment to nginx requires use of uwsgi. A sample configuration file can be found in backend/nginx.conf. Make note of the paths used for the /static/ and /music/ directories. These should be absolute paths to those specified in backend/app.cfg as STATIC_PATH and MUSIC_PATH.
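As a rough sketch of the X-Accel-Redirect wiring (the shipped backend/nginx.conf is authoritative; the paths and socket below are assumptions):

location /static/ {
    internal;
    alias /var/www/uwsgi/zmusic/frontend/;
}
location /music/ {
    internal;
    alias /srv/music/;
}
location / {
    include uwsgi_params;
    uwsgi_pass unix:/var/run/uwsgi/zmusic.sock;
}

With something like this in place, ACCEL_STATIC_PREFIX would be set to /static/ and ACCEL_MUSIC_PREFIX to /music/ in backend/app.cfg.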

uwsgi should be run with the -w zmusic:app switch, possibly using --chdir to change directory to the backend/ directory, if not already there.

For easy deployment, the makefile has some deployment targets, which are configured by the server.cfg configuration file. These keys should be set:

  • SERVER: The hostname of the deployed server.
  • SERVER_UPLOAD_PATH: A remote path where the default ssh user can write.
  • SERVER_DEPLOY_PATH: A remote path where the default ssh user cannot write, but where root can.

The upload target uploads relevant portions of the project to SERVER_UPLOAD_PATH. The deploy target first executes the upload target, then copies files from SERVER_UPLOAD_PATH to SERVER_DEPLOY_PATH with the proper permissions, and then finally restarts the uwsgi processes.

zmusic-ng $ make deploy

These makefile targets should be used with care, and the makefile itself should be inspected to ensure all commands are correct for custom configurations.

Here is what a full build and deployment looks like:

zx2c4@Thinkpad ~/Projects/zmusic-ng $ make deploy
make[1]: Entering directory `/home/zx2c4/Projects/zmusic-ng/frontend'
 JS js/lib/jquery.min.js
 JS js/lib/underscore.min.js
 JS js/lib/backbone.min.js
 JS js/models/ReferenceCountedModel.min.js
 JS js/models/Song.min.js
 JS js/models/SongList.min.js
 JS js/models/DownloadBasket.min.js
 JS js/views/SongRow.min.js
 JS js/views/SongTable.min.js
 JS js/views/DownloadSelector.min.js
 JS js/controls/AudioPlayer.min.js
 JS js/app.min.js
 CAT js/scripts.min.js
 CSS css/bootstrap.min.css
 CSS css/font-awesome.min.css
 CSS css/page.min.css
 CAT css/styles.min.css
make[1]: Leaving directory `/home/zx2c4/Projects/zmusic-ng/frontend'
 RSYNC music.zx2c4.com:zmusic-ng
 [clipped]
 DEPLOY music.zx2c4.com:/var/www/uwsgi/zmusic
+ umask 027
+ sudo rsync -rim --delete '--filter=P zmusic.db' zmusic-ng/ /var/www/uwsgi/zmusic
 [clipped]
+ sudo chown -R uwsgi:nginx /var/www/uwsgi/zmusic
+ sudo find /var/www/uwsgi/zmusic -type f -exec chmod 640 '{}' ';'
+ sudo find /var/www/uwsgi/zmusic -type d -exec chmod 750 '{}' ';'
+ sudo /etc/init.d/uwsgi.zmusic restart
 * Stopping uWSGI application zmusic ...
 * Starting uWSGI application zmusic ...

Bugs? Comments? Suggestions?

Send all feedback, including git-formatted patches, to Jason@zx2c4.com.

Disclaimer

The author does not condone or promote using this software for redistributing copyrighted works.

License

Copyright (C) 2013 Jason A. Donenfeld. All Rights Reserved.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

Why Are LEGO Sets Expensive? | Wired Science | Wired.com


Comments:"Why Are LEGO Sets Expensive? | Wired Science | Wired.com"

URL:http://www.wired.com/wiredscience/2013/01/why-are-lego-sets-expensive/


I’m not sure I would say LEGO blocks are that expensive, but the statement is that they are expensive because they are so well made. Really, this has to at least be partially true. If you take some blocks made in 1970, they still fit with pieces made today. That is quite impressive.

But the real question is: what is the distribution of LEGO sizes? How does this distribution of sizes compare to other toys? The simple way to answer this question is to start measuring a whole bunch of blocks.

Here is the plan. Use a micrometer (the tool, not the unit) to measure the width of 2 bump LEGO blocks. Plot a histogram of the different sizes. Just to be clear, the micrometer is a tool that measures small sizes – around a millimeter to 20 millimeters. This particular one has markings down to 0.01 mm – for my measurements, I will estimate the size to 0.001 millimeters. Oh, one more point. There are lots of pieces that are two LEGO dots. For this data, I am mostly using 2 x 1 and 2 x 2 pieces. I will assume that both have the same size in the 2 bump direction.

Here is my first set of data.

These 88 measurements have an average of 15.814 mm with a standard deviation of 0.0265 mm.

What about older LEGO pieces?

Fortunately, I found one of my original sets from the late 70s.


I even have the instructions. Even though I’m not sure how old these are, they have to be at least 30 years old. Here are some 2 bump pieces from the 70s vs modern pieces.

The pieces from the 70s have an average of 15.819 mm with a standard deviation of 0.026 mm. Without doing any formal statistical tests, these look close enough to be from about the same distribution.

How about something else? What if I just look at 2 x 2 LEGO blocks? For these blocks, I can get a measurement of the width in two different ways (length and width). I can call one dimension “x” and the other “y”. Here is a plot of x vs. y measurements for square blocks.

Maybe that wasn’t such a great plot. What does it show? I guess the only thing I can say about this is that there doesn’t appear to be a systematic error relating the two sides of a 2 x 2 block. If a block is a tiny bit smaller in one dimension, it isn’t necessarily smaller in the other dimension.

What About Other Objects?

Do other things have high precision parts too? Before I show any data, let me plot some different data. Instead of plotting just the width of different objects, I am going to plot the distribution of the width divided by the mean width. This way I can make a comparison between objects of different size.

I found three different sets of objects to measure.

There are these wooden planks that are used for building stuff. Then I have two different types of “counting” blocks used for math stuff. I will combine all the 2 bump LEGO blocks together since they seem to be from a similar distribution.

Here is the distribution of the 4 different types of objects. I only plotted the 70s LEGO pieces since they had a number comparable to the other objects.

Clearly the wooden planks have a much wider distribution than the rest of the objects. Let me remove the wooden plank data and plot just the other stuff so it will be easier to make a comparison.

I probably need more data, but these seem to be built with around the same level of precision. Honestly, I don’t know much about plastic manufacturing – but the LEGO blocks appear to be created from harder plastic. Maybe this would lead them to maintain their size over a long period of time. Unfortunately, I didn’t have old math blocks to compare to newer blocks.

Price Per Piece of LEGO

This is older data from a previous post, but I like it so much I decided to include it here also. Basically, I looked at the price of different LEGO sets along with their pieces. The cool thing about all of the LEGO sets is that the number of pieces is always listed. BOOM. Instant graph (well, instant except for looking up all the prices).

Remember, these are 2009 prices but I think the same idea holds true.

This looks linear enough to fit a function. This is what you get.

About 10 cents per LEGO piece. If you had a set with no pieces in it, it would still cost 6 dollars. Yes, there are some sets that don’t fit too well – but for the most part this works nicely.
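For the curious, the linear fit itself is a one-liner; here is a minimal Python sketch with made-up data points standing in for the real 2009 prices:

import numpy as np

pieces = np.array([150, 400, 650, 1000])     # set sizes (illustrative)
prices = np.array([21.0, 46.0, 71.0, 106.0]) # USD (illustrative)

slope, intercept = np.polyfit(pieces, prices, 1)
print('price = %.2f * pieces + %.2f' % (slope, intercept))
# With the post's numbers this works out to roughly: price = 0.10 * pieces + 6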

Nokia: Yes, we decrypt your HTTPS data, but don’t worry about it — Tech News and Analysis


Comments:" Nokia: Yes, we decrypt your HTTPS data, but don’t worry about it — Tech News and Analysis "

URL:http://gigaom.com/2013/01/10/nokia-yes-we-decrypt-your-https-data-but-dont-worry-about-it/


Nokia has confirmed reports that its Xpress Browser decrypts data that flows through HTTPS connections – that includes the connections set up for banking sessions, encrypted email and more. However, it insists that there’s no need for users to panic because it would never access customers’ encrypted data.

The confirmation-slash-denial comes after security researcher Gaurang Pandya, who works for Unisys Global Services in India, detailed on his personal blog how browser traffic from his Series 40 ‘Asha’ phone was getting routed via Nokia’s servers. So far, so Opera Mini: after all, the whole point of using a proxy browser such as this is to compress traffic so you can save on data and thereby cash. This is particularly handy for those on constricted data plans or pay-by-use data, as those using the low-end Series 40 handsets on which the browser is installed by default (it used to be known as the ‘Nokia Browser for Series 40’) are likely to be.

However, it was Pandya’s second post on the subject that caused some alarm. Unlike the first, which looked at general traffic, the Wednesday post specifically examined Nokia’s treatment of HTTPS traffic. It found that such traffic was indeed also getting routed via Nokia’s servers. Crucially, Pandya said that Nokia had access to this data in unencrypted form:

“From the tests that were performed, it is evident that Nokia is performing Man In The Middle Attack for sensitive HTTPS traffic originated from their phone and hence they do have access to clear text information which could include user credentials to various sites such as social networking, banking, credit card information or anything that is sensitive in nature.”

Pandya pointed out how this potentially clashes with Nokia’s privacy statement, which claims: “we do not collect any usernames or passwords or any related information on your purchase transactions, such as your credit card number during your browsing sessions”.

So, does it clash?

Nokia came back today with a statement on the matter, in which it stressed that it takes the privacy and security of its customers and their data very seriously, and reiterated the point of the Xpress Browser’s compression capabilities, namely so that “users can get faster web browsing and more value out of their data plans”.

“Importantly, the proxy servers do not store the content of web pages visited by our users or any information they enter into them,” the company said. “When temporary decryption of HTTPS connections is required on our proxy servers, to transform and deliver users’ content, it is done in a secure manner. Nokia has implemented appropriate organizational and technical measures to prevent access to private information. Claims that we would access complete unencrypted information are inaccurate.”

To paraphrase: we decrypt your data, but trust us, we don’t peek. Which is, in a way, fair enough. After all, they need to decrypt the data in order to de-bulk it.

The issue here seems to be around how Nokia informs – or fails to inform – its customers of what’s going on. For example, look at Opera. The messaging around Opera Mini is pretty clear: the browser’s FAQs spell out how it routes traffic. Although you can find out about the Xpress Browser’s equivalent functionality with a bit of online searching, it’s far less explicit to the average user. And this is particularly unfortunate given that the browser is installed by default — people won’t necessarily choose it based on those data-squeezing chops.

And it looks like Nokia belatedly recognizes that fact. The statement continued:

“We aim to be completely transparent on privacy practices. As part of our policy of continuous improvement we will review the information provided in the mobile client in case this can be improved.”

The moral of the story is that those who want absolute security in their mobile browsing should probably steer clear of browsers that compress to cut down on data. Even if Nokia isn’t tapping into that data – and there is no reason to suspect that it is – the very existence of that feature will be a turn-off for the paranoid, and reasonably so. And that’s why Nokia should be up-front about such things.

UPDATE: A kind soul has reminded me that, unlike Xpress Browser and Opera Mini, two other services that also do the compression thing leave HTTPS traffic unperturbed, namely Amazon with its Silk browser and Skyfire. This is arguably how things should be done, although it does of course mean that users don’t get speedier loading and so on for HTTPS pages.



Google Chrome Blog: Our newest Beta, for Android phones and tablets


Comments:"Google Chrome Blog: Our newest Beta, for Android phones and tablets"

URL:http://chrome.blogspot.com/2013/01/our-newest-beta-for-android-phones-and.html


Release early, release often. Today, we’re introducing Chrome Beta channel for phones and tablets on Android 4.0+. The Beta channel was launched in the early days of Chrome to test out new features and fix issues fast. Our newest Beta channel for phones and tablets now joins our Beta versions of Chrome for Mac, Windows, Linux and Chrome OS.

You can expect early access to new features (and bugs!), as well as a chance to provide feedback on what’s on the way. Just like our other Beta versions, the new features may be a little rough around the edges, but we’ll be pushing periodic updates so you can test out our latest work as soon as it’s ready. Even better, you can install the Beta alongside your current version of Chrome for Android.

Chrome for Android now benefits from all the speed, security and other improvements that have been landing on Chrome’s other platforms. For example, in today’s Beta update we have improved the Octane performance benchmark on average by 25-30%. In addition, this update includes interesting HTML5 features for developers such as CSS Filters. This is just one step of many towards bringing beautiful experiences to the mobile web.

Ready? Use it, abuse it, and tell us what you think. Our new Chrome Beta for Android is available now on Google Play (use the link, you won't find it in search)!

Posted by Jason Kersey, Technical Program Manager & Mobile Cat Herder

Ex-CIA analyst finds mysterious Chinese complex on Google Earth (Wired UK)


Comments:"Ex-CIA analyst finds mysterious Chinese complex on Google Earth (Wired UK)"

URL:http://www.wired.co.uk/news/archive/2013-01/10/chinese-desert-mystery


Late last month, former CIA analyst Allen Thomson was clicking through a space news website when he noticed a story about a new orbital tracking site being built near the small city of Kashgar in southwestern China. Curious, he went to Google Earth to find it. He poked around for a while, with no luck. Then he came across something kind of weird.

Thomson, who served in the CIA from 1972 to 1985 and as a consultant to the National Intelligence Council until 1996, has made something of a second career finding odd stuff in public satellite imagery. He discovered these giant grids etched into the Chinese desert in 2011, and a suspected underground missile bunker in Iran in 2008. When the Israeli Air Force destroyed a mysterious facility in Syria the year before, Thomson put together an 812-page dossier on the so-called "Box on the Euphrates." Old analyst habits die hard, it seems.

But even this old analyst is having trouble ID'ing the objects he found in the overhead images of Kashgar. "I haven't the faintest clue what it might be -- but it's extensive, the structures are pretty big and funny-looking, and it went up in what I'd call an incredible hurry," he emails.

So he'd like your help in solving this little mystery. What follows are 10 images of the site. If you've got ideas on what might be there, drop me a note, or find me on Twitter or Facebook. I'll pass it on to Thomson.

Source: Wired.com

Images: Google Earth

Companies that support remote workers win against those that don’t | Certain Extent


Comments:"Companies that support remote workers win against those that don’t | Certain Extent"

URL:http://blog.davidtate.org/2013/01/companies-that-support-remote-workers-win-against-those-that-dont/


Years ago my boss asked if I could use a remote support developer in Europe for off-hours support of a critical system that processed data throughout the day.  He said that they had a sharp technical resource there who had normal working hours right in our support blind spot and that the candidate was interested in helping out.  I froze as the downsides flooded my mind:

  • He didn’t know our system at all.
  • I would never meet him.
  • We didn’t have much documentation of our systems.  All our knowledge transfer was done in person using heavy sarcasm and obscure hand waving.
  • We didn’t have a good ticket tracking system or history of service incidents we could point him at for self-study.
  • I wasn’t sure how I could judge whether or not he was helping or hurting.
  • I was afraid that instead of getting woken up in the middle of the night to solve a problem, I would be woken up in the middle of the night to walk him through the context, because of all the problems above.

All my objections were about how we weren’t ready to support him, monitor him, and grow him, sitting where we sat as an immature support team. But they were all things we needed to change anyway, and he would serve as a canary and a catalyst for actually changing them. We would be a better team for moving toward being able to support him.

***

With the developer job market being what it is (i.e. a little nuts), some companies are offering work from home as an attractive add-on option – “Work from home Fridays”, “We support remote workers”, “Flexible schedule”. This is being done as an afterthought and is not part of the core culture.

The difference between a company that can support a remote worker and one that cannot is not a small difference in perks: it is a chromosome-level difference. Companies that truly support remote workers win against those that don’t.

***

Having now myself been the remote person on the other side over the last 2+ years I’ve found a large range of differences between those that truly support remote work and those that just talk about it.  Think of it as the difference between a watch being water-resistant [you can wash your hands with it on] and diving-level waterproof [you can operate it underwater].

The reasoning is pretty simple: in order for a remote employee to succeed, a company has to have clear communication, a standard process, and a clear focus on results above other secondary concerns.

A company has to provide the following:

  • A pipeline of work that is ready to actually be worked on (packaged with its context, plus links to where to find answers to any questions)
  • Clear expectations for results and the ability to track how things are progressing.
  • Clear communication channels: this might include some permanence and searchability for work already done, but also some form of democratic decision-making that includes more people than can fit in a conference room.
  • A teaching culture that includes helpful coworkers ready to answer questions and help out remote workers if there are gaps of context.
  • A Results-Only-Work-Environment (ROWE) culture that allows workers to get as much done as they are able, and processes feedback from those workers about obstacles they encounter.

All of those things are good for the company supporting remote work even without a remote workforce – they create less friction around communication and infrastructure and make results the top priority. In the end, a company that focuses on its primary complexity (the work itself) will beat those that are optimized for other things.

 

 

Posted by dtate on January 10, 2013


ZURB Acquires Forrst! by ZURB ZURBlog


Comments:"ZURB Acquires Forrst! by ZURB ZURBlog"

URL:http://www.zurb.com/article/1146/zurb-acquires-forrst


It's with great excitement that we announce our big news today: we've acquired Forrst!

In case you haven't heard of it, Forrst is a community for designers and developers to share work, give and receive detailed feedback and become better at their craft.

Our Story

Here at ZURB, we've prided ourselves on building out and defining the future of product design. We've developed awesome product design apps, created and iterated on a top-20 GitHub most watched front-end framework and provided design services to emerging Silicon Valley startups for over 15 years. Our next step is a bold one into the design community world, and we can't wait to get started.

Why Forrst?

We were initially drawn to Forrst for several reasons. Primarily, we believe the Forrst community is full of talented designers and developers who have exciting work (and feedback!) to share with others. From Day 1, we plan on taking an active role in growing the community into what we know it can be.

How Adding Forrst Makes ZURB better

A big part of great product design thinking is learning how to give and receive great feedback, and apply it to whatever you are working on. Instead of a simple "Nice work!" or "Great job!", we believe our community has so much more to offer, especially when it comes to delivering feedback on designs, side projects or a weekend hack.

We want all community members to share their best work on Forrst, and gather insights to make every single visit a worthwhile one. When you sign into Forrst, you'll be greeted with a design community that will embrace not only you, but your work as well. With time, you may just become a better designer, too.

What will change?

Forrst is an established design and development community, and we believe we can make it even better. We're committed to listening, participating and fostering a healthy design community.

In addition, we'll actively participate, providing relevant feedback where necessary and sharing designs, inspiration and ideas.

Over time, we'll introduce tools and apps which will help our community members deliver increasingly-concise and detailed design feedback.

The Future Of The Forrst Community

We're excited to welcome Forrst into the ZURB fold, and we hope you are too.

We believe that in time, Forrst can be a defining force within the design community, and we're all excited to embark on the journey together. Onward!

How to price something by Jason Fried of 37signals


Comments:"How to price something by Jason Fried of 37signals"

URL:http://37signals.com/svn/posts/3394-how-to-price-something


Lately I’ve been spending some time with local entrepreneurs who are looking for business advice. Inevitably, the topic of pricing comes up. “How do I know how much to charge?”

There are lots of answers.

You can make up a number and see if it works. You can test a few different prices at the same time. You can do traditional market research and see what you find. You can read pricing books and academic papers on pricing approaches, techniques, and behavioral psychology. You can see what others are charging.

The good news about pricing is that you can guess, be wrong, but still be right enough to build a great sustainable business. Maybe you’re leaving some money on the table, but, like my dad always says, no one ever went broke making a profit.

However, you are not allowed to ask people:

  • “What would you pay for this?”
  • “Would you buy this for $20?”
  • “How much do you think this is worth?”
  • “What’s the most you’d pay?”

And these are the questions I hear people asking over and over. You can’t ask people who haven’t paid how much they’re willing to pay. Their answers don’t matter because there’s no cost to saying “yes,” “$20,” “no,” or “$100.” They all cost the same – nothing.

The only answers that matter are dollars spent. People answer when they pay for something. That’s the only answer that really matters.

So put a price on it and put it up for sale. If people buy, that’s a yes. Change the price. If people buy, that’s a yes. If people stop buying, that’s a no. Crude? Maybe. But it’s real.

You can dig into the why’s more deeply over time, but you have to start somewhere. And the best place to start is with real answers. This is why we picked $10 for a Basecamp Breeze email address.

Instacart Adds Trader Joe's To Service


Comments:"Instacart Adds Trader Joe's To Service"

URL:http://thenextweb.com/apps/2013/01/10/grocery-delivery-startup-instacart-adds-trader-joes-to-its-service-allowing-for-on-demand-two-buck-chuck/?utm_source=Twitter&utm_content=Grocery%20delivery%20startup%20Instacart%20adds%20Trader%20Joes%20to%20its%20service,%20allowing%20for%20on-demand%20Two%20Buck%20Chuck&awesm=tnw.to_c0U3e&utm_medium=share%20button&utm_campaign=social%20media


Update: Unbeknownst to us at the time of posting, Instacart has temporarily suspended alcohol sales through its service. Thus, the title of this post was a bit off: you can’t yet buy wine from Trader Joe’s. The company told TNW that it is an issue it is working to solve, and that the ability should be back in a few weeks’ time. We apologize for the confusion. 

Today Instacart begins to roll out support for Trader Joe’s chain of stores to its grocery delivery service. Users that have had their accounts activated to support the new products will be able to toggle between Safeway and Trader Joe’s, giving them the option to purchase from one, or both of the providers.

Given that Instacart is getting its start in the Bay Area, the addition of the new grocery store is non-trivial: many in this part of California swear by it.

We spoke with Instacart’s founder Apoorva Mehta, who noted that “one of the features our customers have repeatedly asked for is the ability to shop at Trader Joe’s.” The company is especially proud of its blended cart system, by which users can buy items from either grocery store in one move, and have them delivered on its scheduled one or three hour deliveries.

Instacart’s model is finding firm footing in its trial markets; the company told TNW, somewhat elliptically, that its economics are functional and that its growth rates are strong. The company also noted that its attempts at paid marketing produced slim results, and that word-of-mouth marketing is instead propelling the firm forward.

I’m usually skeptical of such claims, but as I took part in the very same activity in my first review of the company, I must admit the claim’s plausibility.

Instacart is a somewhat interesting startup as it is working in a space where so many have failed, often spectacularly, such as WebVan. The company appears to be succeeding through simple pricing, a functional interface across platforms that makes it simple to use, and economics that make it viable in the long-term, not depending on massive scale to produce efficiencies.

For fun, here’s what your account will look like once it is approved for Trader Joe’s support. The top image is from Safeway.

Versus:

Two Buck Chuck on demand? Hats off. For more on Instacart, TNW’s past coverage is required reading.

Top Image Credit: Khamis Hammoudeh


More on Postgres Performance - Craig Kerstiens


Comments:"More on Postgres Performance - Craig Kerstiens"

URL:http://craigkerstiens.com/2013/01/10/more-on-postgres-performance/


If you missed my previous post on Understanding Postgres Performance, it’s a great starting point. In this particular post I’m going to dig into some real-life examples of optimizing queries and indexes.

It all starts with stats

I wrote about some of the great new features in Postgres 9.2 in the recent announcement on support of Postgres 9.2 on Heroku. One of those awesome features is pg_stat_statements. It’s not commonly known how much information Postgres keeps about your database (beyond the data, of course), but in reality it keeps a great deal. It ranges from basic stuff like table size to cardinality of joins to distribution of indexes, and with pg_stat_statements it keeps a normalized record of the queries that are run.

First you’ll want to turn on pg_stat_statements:

CREATE EXTENSION pg_stat_statements;
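One assumption worth flagging: the extension only collects statistics when it has also been preloaded, which means a line like this in postgresql.conf (followed by a server restart):

shared_preload_libraries = 'pg_stat_statements'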

This means it would record both:

SELECT id 
FROM users
WHERE email LIKE 'craig@heroku.com';

and

SELECT id 
FROM users
WHERE email LIKE 'craig.kerstiens@gmail.com';

To a normalized form which looks like this:

SELECT id 
FROM users
WHERE email LIKE ?;

Understanding them from afar

While Postgres collects a great deal of this information, distilling it into something useful is sometimes more of a mystery than it should be. This simple query will show a few very key pieces of information that allow you to begin optimizing:

SELECT 
 (total_time / 1000 / 60) as total_minutes, 
 (total_time/calls) as average_time, 
 query 
FROM pg_stat_statements 
ORDER BY 1 DESC 
LIMIT 100;

The above query shows three key things:

  • The total time a query has occupied against your system, in minutes
  • The average time it takes to run, in milliseconds
  • The query itself

Giving an output something like:

  total_minutes   |   average_time    | query
------------------+-------------------+------------------------------------------------------------
 295.761165833319 | 10.1374053278061  | SELECT id FROM users WHERE email LIKE ?
 219.138564283326 | 80.24530822355305 | SELECT * FROM address WHERE user_id = ? AND current = True
(2 rows)

What to optimize

A general rule of thumb is that most of your very common queries that return one record or a small set of records should return in ~1 ms. In some cases there may be queries that regularly run in 4-5 ms, but in most cases ~1 ms or less is the ideal.

To pick where to begin I usually attempt to strike some balance between total time and long average time. In this case I’d probably start with the second: on the first I could likely shave off an order of magnitude, but on the second I’m hopeful to shave off two orders of magnitude, reducing the time spent on that query from a cumulative 220 minutes down to about 2 minutes.

Optimizing

From here you probably want to first read my other post on understanding the explain plan. I want to highlight some of this with a more specific case based on the second query above. That second query, run against an example data set, does use an index on user_id, and yet query times are still high. To start to get an idea of why, I would run:

EXPLAIN ANALYZE
SELECT * 
FROM address 
WHERE user_id = 245 
 AND current = True

This would yield results:

 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate (cost=4690.88..4690.88 rows=1 width=0) (actual time=519.288..519.289 rows=1 loops=1)
 -> Nested Loop (cost=0.00..4690.66 rows=433 width=0) (actual time=15.302..519.076 rows=213 loops=1)
 -> Index Scan using idx_address_userid on address (cost=0.00..232.52 rows=23 width=4) (actual time=10.143..62.822 rows=1 loops=8)
 Index Cond: (user_id = 245)
 Filter: current
 Rows Removed by Filter: 14
 Total runtime: 219.428 ms
(1 rows)

If you’ve read the other post on query plans, this shouldn’t be too overwhelming: we can see that the query is using an index as expected. The problem is that it has to fetch 15 different rows from the index and then discard the bulk of them. The number of rows discarded is shown by the line:

Rows Removed by Filter: 14

This “Rows Removed by Filter” reporting is itself one more of the many improvements in Postgres 9.2, alongside pg_stat_statements.

To further optimize this we would create a conditional or a composite index. A conditional index covers only the rows where current = true, whereas a composite index indexes both values. A conditional index is commonly more valuable when the column takes a small set of values, while a composite index wins when the values are highly variable. Creating the conditional index:

CREATE INDEX CONCURRENTLY idx_address_userid_current ON address(user_id) WHERE current = True;
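For comparison, the composite variant described above would index both columns instead of filtering on one (same table, just a different trade-off):

CREATE INDEX CONCURRENTLY idx_address_userid_current ON address(user_id, current);

The query plan below reflects the conditional index.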

We can then see the query plan is now even further improved as we’d hope:

EXPLAIN ANALYZE
SELECT * 
FROM address 
WHERE user_id = 245 
 AND current = True
 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate (cost=4690.88..4690.88 rows=1 width=0) (actual time=519.288..519.289 rows=1 loops=1)
 -> Index Scan using idx_address_userid_current on address (cost=0.00..232.52 rows=23 width=4) (actual time=10.143..62.822 rows=1 loops=8)
 Index Cond: ((user_id = 245) AND (current = True))
 Total runtime: .728 ms
(1 rows)

For further reading, give Greg Smith’s Postgres High Performance a read.

Beefing up the Python Shell to build apps faster and DRYer – Ben Plesser


Comments:"Beefing up the Python Shell to build apps faster and DRYer – Ben Plesser"

URL:http://benplesser.com/2013/01/10/beefing-up-the-python-shell-to-build-apps-faster-and-dryer/


One of the great mantras in Python is to avoid repetition. And yet, when working on a Django app or with a new third-party library, I often find myself stuck in the same pitiful cycle:

Start the Shell; encounter a bug or unexpected behavior; close the Shell; make some changes to my code; restart the Shell.  And so on….

This sucks. Not only does it end up wasting 3-4 seconds for each Shell restart on my MacBook Air (those seconds really add up over time), it also inevitably leads to soul-crushing frustration.

As an alternative to incessantly restarting the Shell, you could use the builtin Python reload() function, which reimports a given module within the program.  Unfortunately, issues still crop up:

  • reload() must be passed a module as an argument, meaning you have to import the entire module at some point in your program.
  • Typing reload(module_name) still takes too much time if done ad nauseam.
  • Django models.py modules can’t be reloaded normally due to the AppCache singleton.
  • People forget about the builtin in the heat of the moment; it just happens.

My solution (inspired by the builtin, multi-threaded Django server) was to add an auto-reloading thread to Shell_Plus.

Shell_Plus, an extremely helpful script found in the Django Extensions library, is a Django management command which spins up an embedded Shell. On startup, the script sets a number of objects in the embedded Shell’s global scope (e.g. your Django models and settings variables).

The major change here is that, before entering the mainloop of the IPython Shell, a Watchdog observer thread (another great library) is kicked off, which listens for file system events.  When a relevant event occurs, the thread automatically reloads the module into the global scope of the embedded Shell via a global dictionary.  It’s fast and completely transparent to the Shell user.
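To make that concrete, here is a minimal sketch of the auto-reload idea (not the fork’s actual code), assuming the watchdog library and Python 3’s importlib.reload; the original era would have used the Python 2 reload() builtin:

import importlib
import os
import sys

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class ReloadHandler(FileSystemEventHandler):
    def __init__(self, namespace):
        self.namespace = namespace  # e.g. the embedded Shell's globals()

    def on_modified(self, event):
        if not event.src_path.endswith('.py'):
            return
        changed = os.path.splitext(os.path.abspath(event.src_path))[0]
        # Find the already-imported module backed by the changed file and
        # push the reloaded version into the Shell's global scope.
        for name, module in list(sys.modules.items()):
            source = getattr(module, '__file__', None)
            if source and os.path.splitext(os.path.abspath(source))[0] == changed:
                self.namespace[name.rsplit('.', 1)[-1]] = importlib.reload(module)

observer = Observer()
observer.schedule(ReloadHandler(globals()), path='.', recursive=True)
observer.daemon = True  # don't keep the Shell from exiting
observer.start()  # runs in a background thread, like Django's dev-server reloader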

This rather large gist (https://gist.github.com/4499367) contains the code for the reloader thread, along with a heap of documentation.

At this point, I should mention that I decided to add some black magic to make the reloader as powerful as possible.

When a class definition is changed in a file and reloaded in the shell, all of the instances of the old version of that class inside the Shell’s global scope are dynamically assigned the reloaded class.

old_instance.__class__ = RefashionedKls

The implications of that last point are a little crazy. While inside a pdb debugger, you can add, delete, or modify class methods and immediately see the result inside the Shell session. I’ve used it to great effect while debugging and generally experimenting with code, but the danger is obvious, so beware. Again, check out the gist to see more details.
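And a toy demonstration of the instance-rebinding trick itself (all names made up):

class Greeter:  # original definition
    def hello(self):
        return 'hi'

g = Greeter()

class Greeter:  # 'reloaded' definition with a changed method
    def hello(self):
        return 'hello, world'

g.__class__ = Greeter  # the existing instance picks up the new methods
print(g.hello())       # -> 'hello, world'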

You can find my Django Extensions fork at https://github.com/Bpless/django-extensions.

Even if you are not developing a Django app specifically, you can build off this concept of auto-reloading embedded Shells for any Python project.


Exec is a finalist at the Crunchies for the... - EXEC. Blog


Comments:"Exec is a finalist at the Crunchies for the... - EXEC. Blog"

URL:http://blog.iamexec.com/post/40184092197/exec-at-the-crunchies


Exec is a finalist at the Crunchies for the category of Fastest Rising Startup! The Crunchies are like the Oscars for the tech world. We’re so excited to be a finalist, we wrote a rap song. Hooray for startups!

After you watch the video, you can vote for us here. You can vote once a day!

Big thanks to…
Lyft for the sweet ride
Warby Parker for providing stylish glasses
Hipmunk for the dancing mascot
Dropbox for letting us borrow their Crunchie
TechCrunch for nominating us
Paul Graham for Makin’ it Rain

Credits
Daniel Kan, The “Talent”
Justin Kan, Director, Principal Photography, Snapchat Model
Amir Ghazvinian, Dreamboat
Carlos Martinez, Wolf Man
Matt Lewis, Dog Man
Karen Cheng, Freestyle Dancer, Choreography
Emma Jones, Lego Woman
Chad Etzel, Saxophone Player
Louis Albanese, Chief Guitarist Chef
Finbarr Taylor, Scotsman
Tristan Zier, Smooth Sunglasses
Meg Kearns, Background Dancer
Neil Ayton, Off-balance Background Dancer
Julio Artiga, Motorcycle Pusher
Angel, Lyft Driver
Dan Wheeler, Dropbox Security
Phil, Hipmunk Mascot
Paul Graham, Mr. Make It Rain On ‘Em


You Don't Need the DOM Ready Event


Comments:"You Don't Need the DOM Ready Event"

URL:http://thanpol.as/javascript/you-dont-need-dom-ready/


It usually takes a long time for the DOM ready event to fire. During this time, many parts of a webpage are inactive as they wait for JavaScript to kick in and initialize them. This delay is significant: it makes a rich web application available more slowly, creates a bad user experience, serves no design pattern and is, really, not needed…

Why wait for the DOM Ready Event?

First off, when I talk about the DOM ready event, I include every other kind of ready-ish event fired by browsers: window.onload, readystatechange, jQuery's ready event and all the other variants.

So, it is very common today for Javascript applications to wait for the DOM ready event before they start executing their payload. The reasoning behind this is to have a fully rendered and ready to go DOM object.

As per jQuery’s documentation:

The handler passed to .ready() is guaranteed to be executed after the DOM is ready, so this is usually the best place to attach all other event handlers and run other jQuery code.

Why you don’t need the DOM Ready Event

It all boils down to the order in which elements are positioned in the final document that is published and served by a web server. Understanding the importance of this fact enables web applications to load faster and provide the fastest possible UX.

Relying on DOM ready also implies that the script elements are in the document HEAD. As per the HTTP/1.1 spec, browsers should download no more than two components in parallel per hostname; that is why we use CDNs, or generally multiple hostnames, for our static content. However, while a script is loading, the browser will not start any other downloads, even on different hostnames!

Order Matters!

There is no reason at all to have any script tags in the HEAD, nor even at the top of the BODY tag. A script's place is at the bottom of the document, right before the closing BODY tag. The exception is scripts that need to perform a document.write, like ad scripts. Pretty much everything else can easily be moved to the bottom.

Script elements execute immediately, as soon as the parser reaches them. So if they are positioned at the end of the document, the document has already been parsed and rendered by the time they are invoked, and all the nodes exist in the document. They are therefore immediately accessible to the JavaScript application.

The following flow is described in the overview of the parsing model at the W3C:

Tokens are handled by the “Tokeniser”; each token is an element parsed out of the text document. The DOM rendering process is reentrant because of document.write() and the other DOM manipulation methods a script can execute.

Which in code means that:

<span id="spanOne"></span>

<script>
var one = document.getElementById('spanOne');
var two = document.getElementById('spanTwo');

// span one was defined before the script, so it's
// available and can be manipulated
one.innerHTML = 'Gangnam';

// span two is defined after the script, so the lookup
// above returned null and this assignment throws
try {
  two.innerHTML = ' Style';
} catch (e) {
  console.log('Error, span two not defined', e);
}
</script>

<span id="spanTwo"></span>

A more complete example

Consider this sample document:

<!DOCTYPE html>
<html>
<head lang="en">
  <script type="text/javascript" src="initLoggers.js"></script>
</head>
<body>
  <div id="main-content"></div>
  <div id="logger"></div>

  <script type="text/javascript">
    log('Inline JS at bottom of BODY. Loading jQuery...');
  </script>
  <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
  <script type="text/javascript" src="ourApplication.js"></script>
</body>
</html>

The first script loaded, initLoggers.js, defines some helper functions for measuring the time at which events occur, relative to when the page started loading. We include this file in the HEAD to illustrate the flow of execution and the time differences along the way.

This is the initLoggers.js script:

// get the current time difference between page
// load and when this func was invoked
function getTimeDiff() {
  return new Date().getTime() - performance.timing.navigationStart;
}

var $log, jqLoaded = false;

function log(message) {
  if (jqLoaded) {
    $log = $log || $('#logger');
    $log.append('<p><b>' + getTimeDiff() + '</b>ms :: ' + message);
  }
  if (window.console) {
    console.log(getTimeDiff() + 'ms :: ' + message);
    if (console.timeStamp) {
      console.timeStamp(message);
    }
  }
}

log('On HEAD, starting...');

Notice the use of performance inside getTimeDiff(); it is a pretty useful debugging object. Try it in your console and explore the methods and properties of navigation timing available in your browser.

So after jQuery loads, the main application script ourApplication.js is loaded:

log('jQuery loaded.');
jqLoaded = true;

// Runs only once the DOM ready event finally fires.
$(document).ready(function() {
  log('DOM Ready fired');
  $('#main-content').append('Style!');
});

// Runs immediately, as soon as this script is evaluated.
log('Inline JS appending content...', true);
$('#main-content').append('Gangnam ');

As we mentioned, script elements block until they load, and once loaded they execute immediately. The last two lines of ourApplication.js perform an inline DOM manipulation; as expected, the manipulation happens right there, synchronously. Soon after, the DOM Ready event fires and executes the payload inside the ready() handler.

When this page runs, this is what we see in the console:

1017ms :: On HEAD, starting...
1025ms :: Inline JS at bottom of BODY. Loading jQuery...
1083ms :: jQuery loaded.
1086ms :: Inline JS appending content...
1099ms :: DOM Ready fired

The difference of 13ms between the inline JS and the DOM Ready execution may not look like much, but keep in mind this is an empty page. Running similar timing scripts in a moderately loaded document in a development state yields these results:

290ms :: On HEAD, starting...
478ms :: Stylesheets loaded
488ms :: Inline at bottom of BODY, start loading jQuery...
587ms :: jQuery loaded, creating on DOM.Ready listener...
602ms :: First bootstrap JS file loaded, our UI can start
1525ms :: All inline scripts finished loading.
1538ms :: DOM Ready fired

The page was loaded from localhost, so the HEAD timing is fast. Because the page is in a development state, all assets are loaded individually in the document, meaning multiple stylesheet and JavaScript files are requested from the server.

In this case you can see the significant difference between when the first bootstrap JS file was evaluated and invoked (602ms) and when DOM Ready finally fired (1,538ms).

Faster page rendering, a faster point at which the page becomes usable, faster page loading, a better user experience. It’s time to let go of the DOM Ready event.

Have some fun with this plnkr where you can find the code for the examples used in this post.


Wayback Machine: Now with 240,000,000,000 URLs | Internet Archive Blogs


Comments:"Wayback Machine: Now with 240,000,000,000 URLs | Internet Archive Blogs"

URL:http://blog.archive.org/2013/01/09/updated-wayback/


Today we updated the Wayback Machine with much more data and some code improvements. Now we cover from late 1996 to December 9, 2012, so you can surf the web as it was up until a month ago. Also, we have gone from 150,000,000,000 URLs to 240,000,000,000 URLs, a total of about 5 petabytes of data. (Want a humorous description of a petabyte? Start at 28:55.) This database is queried over 1,000 times a second by over 500,000 people a day, helping make archive.org the 250th most popular website.

Over the past year we archived tons of pages about the United States 2012 presidential election.  You can revisit the New York Times live coverage page from election day, the campaign sites of Republican hopefuls like Newt Gingrich and Ron Paul, and mini-scandals like Romney’s car elevator or using aspirin as contraceptives.  The Wayback record of the 2008 election was recently used by the Sunlight Foundation to contrast how Obama’s team dealt with disclosing inauguration donors then vs. now, so hopefully the 2012 election content will prove just as useful in the future.

The prolific volunteers of Archive Team spent a lot of time this year archiving web sites on the verge of disappearing and then contributing those records to Internet Archive.  City of Heroes (including the boards with years of posts), Fortune City and Splinder were all saved from the proverbial wood chipper.

The updated version does have at least one known issue – there is a small amount of older content missing from the index, and it will take us another month or two to sort out that problem.  In the meantime, you can still visit the previous version of the Wayback with that content.

We would like to thank the following for all their efforts in making the updated Wayback Machine:

  • Andy Bezella
  • Aaron Binns
  • Hank Bromley
  • Kris Carpenter
  • Dominic Dela Cruz
  • Vinay Goel
  • Jake Johnson
  • Brewster Kahle
  • Jeff Kaplan
  • Ilya Kreymer
  • Raj Kumar
  • John Lekashman
  • Noah Levitt
  • Adam Miller
  • Gordon Mohr
  • Ralf Muehlen
  • Kenji Nagahashi
  • Alexis Rossi
  • Jim Shankland
  • Sam Stoller
  • Brad Tofel
  • Travis Wellman