
Django 1.5 release candidate | Weblog | Django


URL: https://www.djangoproject.com/weblog/2013/jan/04/15-rc-1/


As part of the Django 1.5 release process, today we've released Django 1.5 release candidate 1, a preview/testing package for Django 1.5. As with all pre-release packages, this is not for production use, but if you'd like to try out some of the new goodies coming in 1.5, or if you'd like to pitch in and help us fix bugs before the final 1.5 release, feel free to grab a copy and give it a spin.

In particular, we ask users of this release candidate to watch for and report any major bugs they encounter; if no release-blocking bugs are reported, Django 1.5 will be released in approximately one week.

You can get a copy of the 1.5 release candidate from our downloads page, and we recommend you read the release notes. Also, for the security conscious, signed MD5 and SHA1 checksums of the 1.5 release candidate are available.

Also, please note that Django 1.5 now requires a minimum of Python 2.6; Python 2.5 is no longer supported. Python 3.x releases, starting with Python 3.2, are experimentally supported in this release. For more information on Python 3 support, and the testing Django 1.5 will still need, see this blog post.

Posted by James Bennett on January 4, 2013


Rant: Backbone, Angular, Meteor, Derby


URL: https://gist.github.com/4454814


Disclaimer: This post is Meteor & Backbone beef. Both Meteor and Backbone are absolute genius, and far beyond anything I could dream to create. But IMO there are better tools. Prepare yourselves *gulp*, I need to get this off my chest.

First, Backbone. Why, people? It revolutionized JavaScript, did wonderful things for the world, and served its purpose well. But now we have better tools, so let's move on. It's like Gentoo users proselytizing Gentoo to the masses, perpetuating it as the most common distro, when all this time Ubuntu would have saved everyone countless hours. Not everyone wants to build everything from scratch - use Angular (or Ember, Knockout, etc.).

Ok, Meteor v Derby. Meteor doesn't use NPM, and has no REST support. Am I wrong? That's huge! That means you can't have an API (read: mobile app) unless you write a second server accessing that MongoDB store. Derby is the exact same thing as Meteor, plus REST and NPM. So why do people use Meteor? For legitimate reasons: it has a better public face, with more PR & press, tutorials, and resources. Derby has no press, behind-the-curve documentation & major version releases, and always-down live examples (see HabitRPG, PhishVids, and Bibliaovaso for functioning production examples). However, the codebase is impeccable and well-maintained, the architecture a work of genius, and its future bright. But to use Derby you have to use HEAD, and survive the sparse docs. Learn it and you'll go further than you'd have gotten with Meteor. Derby should get its act together, and Meteor people should migrate.

Angular is awesome. That is all. Derby gets you further for the web app because you get real-time + data persistence automatically. However, as Angular is a client MVC, you can detach it from the server if you want to recycle your code on PhoneGap or App.js. So here is my recommendation:

{IF} The web app is the most important part, and you want it fast. The mobile app will come later only after the web app proves itself.

  • First Derby
  • Then Angular + PhoneGap, exposing your backend via REST (Derby static routes, Express) - see the sketch after this list

{ELSE} You want mobile & web at roughly the same time, they’re equally important

  • Angular for both Web & PhoneGap, + Server
  • Server = Express (+ Socket.io / Sock.js for real-time), or even Parse / Firebase
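
For the REST piece of either setup, here is a minimal sketch of what exposing your data through Express might look like. This is plain Node + Express (3.x-era API assumed); the /api/habits route and the in-memory array are illustrative stand-ins for whatever your real MongoDB/Derby-backed store is, not anything these frameworks provide:

// Minimal Express REST sketch - route and data names are illustrative.
var express = require('express');
var app = express();

app.use(express.bodyParser()); // Express 3.x built-in body parsing

// In-memory stand-in for the real MongoDB/Derby-backed store.
var habits = [];

// Read endpoint: the same data the web app renders.
app.get('/api/habits', function (req, res) {
  res.json(habits);
});

// Write endpoint: what an Angular + PhoneGap client would POST to.
app.post('/api/habits', function (req, res) {
  habits.push(req.body);
  res.json(201, req.body); // Express 3.x signature: status code first
});

app.listen(3000);

Point the PhoneGap-wrapped Angular app at those endpoints and the mobile client reuses the same backend as the web app.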

FWIW, Meteor and Backbone rock - I’d be promoting the hell out of them if Derby & Angular didn’t exist. But they do, so I </rant>.

Note: original post here, but my Drupal server has been miserable lately and I'm currently migrating. Go there to post comments.

Disney announces Wreck-It Ralph will arrive for download before DVD, Blu-ray


URL: http://www.engadget.com/2013/01/04/disney-wreck-it-ralph-early-digital-release/


WRECK-IT RALPH - Walt Disney Animation Studios announces the debut of the hit arcade-game-hopping adventure "Wreck-It Ralph," marking a Disney first with the early release of the HD Digital and HD Digital 3D versions on February 12, 2013, followed by the 4-Disc Blu-ray Combo Pack, 2-Disc Blu-ray Combo Pack, DVD, SD Digital and On-Demand release on March 5, 2013.

From Walt Disney Animation Studios, "Wreck-It Ralph" takes viewers on a hilarious journey. For decades, Ralph (voice of John C. Reilly) has played the bad guy in his popular video game. In a bold move, he embarks on an action-packed adventure and sets out to prove to everyone that he is a true hero with a big heart. As he explores exciting new worlds, he teams up with some unlikely new friends including feisty misfit Vanellope von Schweetz (voice of Sarah Silverman). The film is directed by Emmy®-winner Rich Moore.

Featuring an all-star voice cast including Jack McBrayer as the voice of Fix It Felix, Jr. and Jane Lynch as the voice of Sgt. Calhoun, plus breakthrough bonus features that take viewers even deeper into the world of video games, Disney's "Wreck-It Ralph" has something for every player. Over an hour of all-new bonus material is featured on the Digital and Blu-ray Combo Pack, including deleted and alternate scenes, the theatrical short "Paperman," plus much more.

The home entertainment debut of "Wreck-It Ralph" will be available in multiple ways, containing exciting all-new bonus features that extend the fun-filled movie experience.

Bonus Materials Overview for These Products:
HD Digital
SD Digital
4-Disc Blu-ray Combo Pack (Blu-ray 3D + Blu-ray + DVD + Digital Copy)
2-Disc Blu-ray Combo Pack (Blu-ray + DVD)

Includes:
* Bit by Bit: Creating the Worlds of "Wreck-It Ralph" – Fans of the film will get a look at five new worlds created for "Wreck-It Ralph." The short takes viewers into Game Central Station with the artists who brought Sugar Rush, Hero's Duty and Fix It Felix Jr. to life.

* Alternate & Deleted Scenes – Four separate scenes are highlighted with an introduction and optional audio commentary from director Rich Moore.

* Video Game Commercials – Viewers can check out the commercials created for the video games featured in the film - Fix It Felix Jr., Sugar Rush, Hero's Duty and Fix It Felix Hammer.

* "Paperman" – This animated short film played in theaters before "Wreck It Ralph." It tells the story of a young man in an office who sees the girl of his dreams in a skyscraper window across the street. But how can he get her attention?

Blu-ray Exclusive Bonus Materials

Includes:
* Disney Intermission: The Gamer's Guide to "Wreck-It Ralph" – When the film is paused, host Chris Hardwick appears on screen to guide viewers through a series of 10 video segments offering an inside look at the many video game references, Disney references and other hidden surprises featured in the film.

Suggested Retail Prices:

4-Disc Blu-ray Combo Pack = (Blu-ray 3D + Blu-ray + DVD + Digital Copy)
$49.99 US and $56.99 Canada

2-Disc Blu-ray Combo Pack = (Blu-ray + DVD)
$39.99 US and $46.99 Canada

DVD = $29.99 US and $35.99 Canada

Digital and On-Demand = Consumers should check with their television provider or preferred digital retailer for pricing and additional information

Groklaw - The USPTO Would Like to Partner with the Software Community ... Wait. What? Really? ~pj


URL: http://www.groklaw.net/article.php?story=20130104012214868


The USPTO Would Like to Partner with the Software Community ... Wait. What? Really? ~pj
Friday, January 04 2013 @ 03:10 AM EST

There is a notice in the Federal Register that the USPTO would like to form a partnership with the software community to figure out how to "enhance" the quality of software patents. To that end, they are looking for comments, and there will be two roundtable events sponsored by the USPTO, one in Silicon Valley and one in New York, both in February:

Each roundtable event will provide a forum for an informal and interactive discussion of topics relating to patents that are particularly relevant to the software community. While public attendees will have the opportunity to provide their individual input, group consensus advice will not be sought.... The first topic relates to how to improve clarity of claim boundaries that define the scope of patent protection for claims that use functional language.

I know the USPTO doesn't want to hear that software and patents totally need to get a divorce, but since most software developers believe that, maybe somebody should at least mention it to them, if only as a future topic for discussion. Most developers I know believe software is unpatentable subject matter.

It's obvious the USPTO realizes there is serious unhappiness among software developers, and they'd like to improve things. Software developers are the folks most immediately and directly affected by the software patents the USPTO issues, and it's getting to the point that no one can code anything without potentially getting sued. I don't wish to be cynical, though, as that's a useless thing. So maybe we should look at it as an opportunity to at least be heard. It's progress that they even thought about having a dialogue with developers, if you look at it that way.

I'm sure companies with lots of patents will be participating. So some of you should probably try to attend too, don't you think? At least send in thoughtful, respectful but clear and specific comments. Large companies with patent portfolios they treasure and don't want to lose can't represent the interests of individual developers or the FOSS community, those most seriously damaged by toxic software patents. And now that patent trolls are targeting individual apps developers and small businesses that simply use technology like scanners and email, somebody needs to listen to what those of us who are not IBM or Microsoft or Google are enduring. And heaven only knows they are going through plenty too. But my point is there are more of you than there are of them.

If you do want to attend, you have to register by February 4th - it's free, but seating is limited, and it's first-come, first-served:

To register, please send an email message to SoftwareRoundtable2013@uspto.gov and provide the following information: (1) Your name, title, and if applicable, company or organization, address, phone number, and email address; (2) which roundtable event you wish to attend (Silicon Valley or New York City); and (3) if you wish to make an oral presentation at the event, the specific topic or issue to be addressed and the approximate desired length of your presentation.

For sure many of you have ideas to express on the first topic. The deadline to send in written comments for consideration is March 15, and you can do it by email (SoftwareRoundtable2013@uspto.gov) or by snail mail, but they express a strong preference for email. That is for all three topics, not just the first one:

For these initial roundtable events, this notice sets forth several topics to begin the Software Partnership discussion. The first topic relates to how to improve clarity of claim boundaries that define the scope of patent protection for claims that use functional language. The second topic requests that the public identify additional topics for future discussion by the Software Partnership. The third topic relates to a forthcoming Request for Comments on Preparation of Patent Applications and offers an opportunity for oral presentations on the Request for Comments at the Silicon Valley and New York City roundtable events. Written comments are requested in response to the first two discussion topics. Written comments on the third discussion topic must be submitted as directed in the forthcoming Request for Comments on Preparation of Patent Applications.

That's why the deadline for commenting is after the roundtables, but time your comments based on what issue you are addressing, I'd say.

Comments will be posted online, so keep that in mind in terms of your own privacy interests, and they request you *not* include your address or phone number if you don't want it made public. And why would you want that, unless you are representing a company and have that address to use?

If you wish to present at either event, you have to send your materials as Microsoft PowerPoint or Microsoft Word.

Lordy, I'd like to give a presentation about the annoyance the USPTO creates by pushing proprietary requirements on us. Don't they realize that most people use mobile devices now, and most of us don't use Microsoft at all for anything any more? It's an Apple-Android world.

Oh, and the events will be webcast, so we can all watch and see how it goes.

Here's a bit more detail on the first topic:

Software-related patents pose unique challenges from both an examination and an enforcement perspective. One of the most significant issues with software inventions is identifying the scope of coverage of the patent claims, which define the boundaries of the patent property right. Software by its nature is operation-based and is typically embodied in the form of rules, operations, algorithms or the like. Unlike hardware inventions, the elements of software are often defined using functional language. While it is permissible to use functional language in patent claims, the boundaries of the functional claim element must be discernible. Without clear boundaries, patent examiners cannot effectively ensure that the claims define over the prior art, and the public is not adequately notified of the scope of the patent rights. Compliance with 35 U.S.C. 112(b) (second paragraph prior to enactment of the Leahy-Smith America Invents Act (AIA)) ensures that a claim is definite.

There are several ways to draft a claim effectively using functional language and comply with section 112(b). One way is to modify the functional language with structure that can perform the recited function. Another way is to invoke 35 U.S.C. 112(f) (sixth paragraph pre-AIA) and employ so-called ``means-plus-function'' language. Under section 112(f), an element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material or acts in support thereof, and shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. As is often the case with software-related claims, an issue can arise as to whether sufficient structure is present in the claim or in the specification, when section 112(f) is invoked, in order to satisfy the requirements of section 112(b) requiring clearly defined claim boundaries. Defining the structure can be critical to setting clear claim boundaries....

Topic 1: Establishing Clear Boundaries for Claims That Use Functional Language

The USPTO seeks comments on how to more effectively ensure that the boundaries of a claim are clear so that the public can understand what subject matter is protected by the patent claim and the patent examiner can identify and apply the most pertinent prior art. Specifically, comments are sought on the following questions. It is requested that, where possible, specific claim examples and supporting disclosure be provided to illustrate the points made.

1. When means-plus-function style claiming under 35 U.S.C. 112(f) is used in software-related claims, indefinite claims can be divided into two distinct groups: claims where the specification discloses no corresponding structure; and claims where the specification discloses structure but that structure is inadequate. In order to specify adequate structure and comply with 35 U.S.C. 112(b), an algorithm must be expressed in sufficient detail to provide means to accomplish the claimed function. In general, are the requirements of 35 U.S.C. 112(b) for providing corresponding structure to perform the claimed function typically being complied with by applicants and are such requirements being applied properly during examination? In particular: (a) Do supporting disclosures adequately define any structure corresponding to the claimed function? (b) If some structure is provided, what should constitute sufficient `structural' support? (c) What level of detail of algorithm should be required to meet the sufficient structure requirement?

2. In software-related claims that do not invoke 35 U.S.C. 112(f) but do recite functional language, what would constitute sufficient definiteness under 35 U.S.C. 112(b) in order for the claim boundaries to be clear? In particular: (a) Is it necessary for the claim element to also recite structure sufficiently specific for performing the function? (b) If not, what structural disclosure is necessary in the specification to clearly link that structure to the recited function and to ensure that the bounds of the invention are sufficiently demarcated?

3. Should claims that recite a computer for performing certain functions or configured to perform certain functions be treated as invoking 35 U.S.C. 112(f) although the elements are not set forth in conventional means-plus-function format?

Here's 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.— An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

They think software developers know what that is saying? I don't know from the words alone, and I'm a paralegal. But here's a 2011 Federal Register notice on what the USPTO said it means:

3. Computer-Implemented Means-Plus-Function Limitations: For a computer-implemented means-plus-function claim limitation invoking § 112, ¶ 6, the corresponding structure is required to be more than simply a general purpose computer or microprocessor. [96] To claim a means for performing a particular computer-implemented function and then to disclose only a general purpose computer as the structure designed to perform that function amounts to pure functional claiming. [97] The structure corresponding to a § 112, ¶ 6 claim limitation for a computer-implemented function must include the algorithm needed to transform the general purpose computer or microprocessor disclosed in the specification. [98] The corresponding structure is not simply a general purpose computer by itself but the special purpose computer as programmed to perform the disclosed algorithm. [99] Thus, the specification must sufficiently disclose an algorithm to transform a general purpose microprocessor to the special purpose computer. [100] An algorithm is defined, for example, as "a finite sequence of steps for solving a logical or mathematical problem or performing a task." [101] Applicant may express the algorithm in any understandable terms including as a mathematical formula, in prose, in a flow chart, or "in any other manner that provides sufficient structure." [102]

__________
[96] Aristocrat Techs. Australia Pty Ltd. v. Int'l Game Tech., 521 F.3d 1328, 1333 (Fed. Cir. 2008).
[97] Id.
[98] Id.; Finisar Corp. v. DirecTV Group, Inc., 523 F.3d 1323, 1340 (Fed. Cir. 2008); WMS Gaming, Inc. v. Int'l Game Tech., 184 F.3d 1339, 1349 (Fed. Cir. 1999).
[99] Aristocrat, 521 F.3d at 1333.
[100] Id. at 1338.
[101] Microsoft Computer Dictionary, Microsoft Press, 5th edition, 2002.
[102] Finisar, 523 F.3d at 1340; see also Intel Corp. v. VIA Techs., Inc., 319 F.3d 1357, 1366 (Fed. Cir. 2003); In re Dossel, 115 F.3d 942, 946-47 (1997); MPEP § 2181.

That's how they explained it in 2011. But let's be real.
With the US Supreme Court and the Federal Circuit playing ping pong over where the line should be, nobody knows where it really is any more. You may have noticed that in the amicus briefs being filed in CLS Bank v. Alice, the case now before the Federal Circuit about when, if ever, software should be patentable.

Here's what I know, though. Just saying, "with a computer" shouldn't be enough.

Note that there is also a request for comments on what would be best practices when preparing patent applications as they relate to software. That's at the very end of the notice, Topic 3:

Oral comments are requested on the advantages and disadvantages of applicants employing the following practices when preparing patent applications as they relate to software claims. Expressly identifying clauses within particular claim limitations for which the inventor intends to invoke 35 U.S.C. 112(f) and pointing out where in the specification corresponding structures, materials, or acts are disclosed that are linked to the identified 35 U.S.C. 112(f) claim limitations; and Using textual and graphical notation systems known in the art to disclose algorithms in support of computer-implemented claim limitations, such as C-like pseudo-code or XML-like schemas for textual notation and Unified Modeling Language (UML) for graphical notation.

If you understand that, go for it. All I know is saying, "with a computer" or "on a computer" should never be enough. And you should have to provide source code. No excuses. If the point is that the public is supposed to get something out of patent law beyond higher prices, and if it's supposed to be specific enough that someone skilled in the art can know how to duplicate it, surely source code is required. Developers don't speak legalese. They speak code. So if they're supposed to understand and benefit, you need to speak their language. And if a company is too paranoid about its precious software secrets, then patents are not appropriate. Let them use trade secret protection, because otherwise the public is being robbed of its end of the patent law bargain.

With that introduction, here is the complete notice from the Federal Register, also available as PDF:

******************

[Federal Register Volume 78, Number 2 (Thursday, January 3, 2013)]

[Notices]
[Pages 292-295]
From the Federal Register Online via the Government Printing Office
[www.gpo.gov]
[FR Doc No: 2012-31594]

-----------------------

DEPARTMENT OF COMMERCE

United States Patent and Trademark Office

[Docket No. PTO-P-2012-0052]

Request for Comments and Notice of Roundtable Events for Partnership for Enhancement of Quality of Software-Related Patents

AGENCY: United States Patent and Trademark Office, Commerce.

ACTION: Request for comments. Notice of meetings.

----------------------------

SUMMARY:

The United States Patent and Trademark Office (USPTO) seeks to form a partnership with the software community to enhance the quality of software-related patents (Software Partnership). Members of the public are invited to participate. The Software Partnership will be an opportunity to bring stakeholders together through a series of roundtable discussions to share ideas, feedback, experiences, and insights on software-related patents. To commence the Software Partnership and to provide increased opportunities for all to participate, the USPTO is sponsoring two roundtable events with identical agendas, one in Silicon Valley, and the other in New York City. Each roundtable event will provide a forum for an informal and interactive discussion of topics relating to patents that are particularly relevant to the software community. While public attendees will have the opportunity to provide their individual input, group consensus advice will not be sought.

For these initial roundtable events, this notice sets forth several topics to begin the Software Partnership discussion. The first topic relates to how to improve clarity of claim boundaries that define the scope of patent protection for claims that use functional language. The second topic requests that the public identify additional topics for future discussion by the Software Partnership. The third topic relates to a forthcoming Request for Comments on Preparation of Patent Applications and offers an opportunity for oral presentations on the Request for Comments at the Silicon Valley and New York City roundtable events. Written comments are requested in response to the first two discussion topics. Written comments on the third discussion topic must be submitted as directed in the forthcoming Request for Comments on Preparation of Patent Applications.

DATES: Events: The Silicon Valley event will be held on Tuesday, February 12, 2013, beginning at 9 a.m. Pacific Standard Time (PST) and ending at 12 p.m. PST. The New York City event will be held on Wednesday, February 27, 2013, beginning at 9 a.m. Eastern Standard Time (e.s.t.) and ending at 12 p.m. e.s.t.

Comments: To be ensured of consideration, written comments must be received on or before March 15, 2013. No public hearing will be held.

Registration: Registration for both roundtable events is requested by February 4, 2013.

ADDRESSES: Events: The Silicon Valley event will be held at: Stanford University, Paul Brest Hall, 555 Salvatierra Walk, Stanford, CA 94305-2087.

The New York City event will be held at: New York University, Henry Kaufman Management Center, Faculty Lounge, Room 11-185, 44 West 4th St., New York, NY 10012.

Comments: Written comments should be sent by electronic mail addressed to SoftwareRoundtable2013@uspto.gov. Comments may also be submitted by mail addressed to: Mail Stop Comments--Patents, Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450, marked to the attention of Seema Rao, Director Technology Center 2100. Although comments may be submitted by mail, the USPTO prefers to receive comments via the Internet.

The comments will be available for public inspection at the Office of the Commissioner for Patents, located in Madison East, Tenth Floor, 600 Dulany Street, Alexandria, Virginia, and will be available via the USPTO Internet Web site at http://www.uspto.gov. Because comments will be available for public inspection, information that is not desired to be made public, such as an address or phone number, should not be included in the comments. Parties who would like to rely on confidential information to illustrate a point are requested to summarize or otherwise submit the information in a way that will permit its public disclosure.

Registration: Two separate roundtable events will occur, with the first in Silicon Valley and the second event in New York City. Registration is required, and early registration is recommended because seating is limited. There is no fee to register for the roundtable events, and registration will be on a first-come, first-served basis. Registration on the day of the event will be permitted on a space-available basis beginning 30 minutes before the event.

To register, please send an email message to SoftwareRoundtable2013@uspto.gov and provide the following information: (1) Your name, title, and if applicable, company or organization, address, phone number, and email address; (2) which roundtable event you wish to attend (Silicon Valley or New York City); and (3) if you wish to make an oral presentation at the event, the specific topic or issue to be addressed and the approximate desired length of your presentation. Each attendee, even if from the same organization, must register separately.

The USPTO will attempt to accommodate all persons who wish to make a presentation at the roundtable events. After reviewing the list of speakers, the USPTO will contact each speaker prior to the event with the amount of time available and the approximate time that the speaker's presentation is scheduled to begin. Speakers must then send the final electronic copies of their presentations in Microsoft PowerPoint or Microsoft Word to SoftwareRoundtable2013@uspto.gov by February 1, 2013, so that the presentation can be displayed at the events.

The USPTO plans to make the roundtable events available via Web cast. Web cast information will be available on the USPTO's Internet Web site before the events. The written comments and list of the event participants and their affiliations will be posted on the USPTO's Internet Web site at www.uspto.gov.

If you need special accommodations due to a disability, please inform the contact persons identified below.

FOR FURTHER INFORMATION CONTACT: Seema Rao, Director Technology Center 2100, by telephone at 571-272-3174, or by electronic mail message at seema.rao@uspto.gov or Matthew J. Sked, Legal Advisor, by telephone at (571) 272-7627, or by electronic mail message at matthew.sked@uspto.gov.

SUPPLEMENTARY INFORMATION:

I. Purpose of Notice: This notice is directed to announcing the Software Partnership which is a cooperative effort between the USPTO and the software community to explore ways to enhance the quality of software-related patents. The Software Partnership will commence with the two bi-coastal roundtable events. The initial topics selected for comment and discussion have been chosen based on input the USPTO has received regarding software-related patents. The input has been gleaned from public commentary on patent quality, dialogue with stakeholders that have requested that the USPTO take a closer look at the quality of software-related patents, and from insight based on court cases in which software-related patents have been the subject of litigation. The public is invited to provide comments on these initial topics and to identify future topics for discussion.

II. Background on Initiative to Enhance Quality of Software-Related Patents: The USPTO is continuously seeking ways to improve the quality of patents. A quality patent is defined, for purposes of this notice, as a patent: (a) For which the record is clear that the application has received a thorough and complete examination, addressing all issues on the record, all examination having been done in a manner lending confidence to the public and patent owner that the resulting patent is most likely valid; (b) for which the protection granted is of proper scope; and (c) which provides sufficiently clear notice to the public as to what is protected by the claims.

Software-related patents pose unique challenges from both an examination and an enforcement perspective. One of the most significant issues with software inventions is identifying the scope of coverage of the patent claims, which define the boundaries of the patent property right. Software by its nature is operation-based and is typically embodied in the form of rules, operations, algorithms or the like. Unlike hardware inventions, the elements of software are often defined using functional language. While it is permissible to use functional language in patent claims, the boundaries of the functional claim element must be discernible. Without clear boundaries, patent examiners cannot effectively ensure that the claims define over the prior art, and the public is not adequately notified of the scope of the patent rights. Compliance with 35 U.S.C. 112(b) (second paragraph prior to enactment of the Leahy-Smith America Invents Act (AIA)) ensures that a claim is definite.

There are several ways to draft a claim effectively using functional language and comply with section 112(b). One way is to modify the functional language with structure that can perform the recited function. Another way is to invoke 35 U.S.C. 112(f) (sixth paragraph pre-AIA) and employ so-called ``means-plus-function'' language. Under section 112(f), an element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material or acts in support thereof, and shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. As is often the case with software-related claims, an issue can arise as to whether sufficient structure is present in the claim or in the specification, when section 112(f) is invoked, in order to satisfy the requirements of section 112(b) requiring clearly defined claim boundaries. Defining the structure can be critical to setting clear claim boundaries.

III. Topics for Public Comment and Discussion at the Roundtable Events: The USPTO is seeking input on the following topics relating to enhancing the quality of software-related patents. These initial topics are intended to be the first of many topics to be explored in a series of roundtables that may ultimately be used for USPTO quality initiatives, public education or examiner training. First, written and oral comments are sought on input regarding improving the clarity of claim boundaries for software-related claims that use functional language by focusing on 35 U.S.C. 112 (b) and (f) during prosecution of patent applications. Second, written and oral comments are sought on future topics for the Software Partnership to address. Third, oral comments are sought on the forthcoming Request for Comments on Preparation of Patent Applications to the extent that the topics of that notice particularly pertain to software-related patents.

The initial topics for which the USPTO is requesting written and, if desired, oral comments are as follows:

Topic 1: Establishing Clear Boundaries for Claims That Use Functional Language

The USPTO seeks comments on how to more effectively ensure that the boundaries of a claim are clear so that the public can understand what subject matter is protected by the patent claim and the patent examiner can identify and apply the most pertinent prior art. Specifically, comments are sought on the following questions. It is requested that, where possible, specific claim examples and supporting disclosure be provided to illustrate the points made.

1. When means-plus-function style claiming under 35 U.S.C. 112(f) is used in software-related claims, indefinite claims can be divided into two distinct groups: claims where the specification discloses no corresponding structure; and claims where the specification discloses structure but that structure is inadequate. In order to specify adequate structure and comply with 35 U.S.C. 112(b), an algorithm must be expressed in sufficient detail to provide means to accomplish the claimed function. In general, are the requirements of 35 U.S.C. 112(b) for providing corresponding structure to perform the claimed function typically being complied with by applicants and are such requirements being applied properly during examination? In particular:

(a) Do supporting disclosures adequately define any structure corresponding to the claimed function?

(b) If some structure is provided, what should constitute sufficient `structural' support?

(c) What level of detail of algorithm should be required to meet the sufficient structure requirement?

2. In software-related claims that do not invoke 35 U.S.C. 112(f) but do recite functional language, what would constitute sufficient definiteness under 35 U.S.C. 112(b) in order for the claim boundaries to be clear? In particular:

(a) Is it necessary for the claim element to also recite structure sufficiently specific for performing the function?

(b) If not, what structural disclosure is necessary in the specification to clearly link that structure to the recited function and to ensure that the bounds of the invention are sufficiently demarcated?

3. Should claims that recite a computer for performing certain functions or configured to perform certain functions be treated as invoking 35 U.S.C. 112(f) although the elements are not set forth in conventional means-plus-function format?

Topic 2: Future Discussion Topics for the Software Partnership

The USPTO is seeking public input on topics related to enhancing the quality of software-related patents to be discussed at future Software Partnership events. The topics will be used in an effort to extend and expand the dialogue between the public and the USPTO regarding enhancing quality of software-related patents. The Software Partnership is intended to provide on-going, interactive opportunities and a forum for engagement with the USPTO and the public on software-related patents. Therefore, to plan future events, the USPTO seeks input on which topics, and in what order of priority, are of most interest to the public. Input gathered from these events may be used as the basis for internal training efforts and quality initiatives. One potential topic for future discussion is how determinations of obviousness or non-obviousness of software inventions can be improved. Another potential topic is how to provide the best prior art resources for examiners beyond the body of U.S. Patents and U.S. Patent Publications. Additional topics are welcomed.

Another topic for which the USPTO is requesting oral comment at the roundtable events is as follows:

Topic 3: Oral Presentations on Preparation of Patent Applications

In the near future, the USPTO will issue a Request for Comments on Preparation of Patent Applications. The purpose of this forthcoming Request for Comments is to seek public input on whether certain practices could or should be used during the preparation of an application to place the application in the best possible condition for examination and whether the use of these practices would assist the public in determining the scope of the claims as well as the meaning of the claim terms in the specification. To ensure proper consideration, written comments to the forthcoming Request for Comments should only be submitted in response to that notice to QualityApplications_Comments@uspto.gov. However, registrants may make oral presentations at the Silicon Valley and New York City roundtable events on the topics related to the forthcoming Request for Comments to the extent that the topics pertain to software-related inventions. Note particularly two questions from the forthcoming Request for Comments, which are previewed below. Oral comments are requested on the advantages and disadvantages of applicants employing the following practices when preparing patent applications as they relate to software claims.

Expressly identifying clauses within particular claim limitations for which the inventor intends to invoke 35 U.S.C. 112(f) and pointing out where in the specification corresponding structures, materials, or acts are disclosed that are linked to the identified 35 U.S.C. 112(f) claim limitations; and

Using textual and graphical notation systems known in the art to disclose algorithms in support of computer-implemented claim limitations, such as C-like pseudo-code or XML-like schemas for textual notation and Unified Modeling Language (UML) for graphical notation.

Dated: December 27, 2012.

David J. Kappos,
Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office.
[FR Doc. 2012-31594 Filed 1-2-13; 12:09 pm]




Authored by: SpaceLifeForm on Friday, January 04 2013 @ 04:14 AM EST
That tells you right there that it is
all a sham, and will be an exercise by
the darkside into determining who they
can control or buy off. Complacent parties
will be welcomed, so they can spin the charade
to the public that the sham is in the public interest.

---

You are being MICROattacked, from various angles, in a SOFT manner.


Authored by: Anonymous on Friday, January 04 2013 @ 04:28 AM EST
The UK patent office attempted a similar but smaller consultation when the European directive was being debated. The outcome was not to the UK patent office's liking:
http://www.zdnet.com/patent-campaigners-make-government-breakthrough-3039181169/

Despite this outcome, it did not stop the UK government from agreeing the directive - fortunately it got thrown out by the European Parliament.

Given the strength of business interest lobbying in the US, I suspect your
outcome will be an even more relaxed approach - good luck.


Authored by: myNym on Friday, January 04 2013 @ 04:31 AM EST
Uh. Don't issue them. Software patents are illegal.

(Or should be. I am not a lawyer.)


Authored by: Anonymous on Friday, January 04 2013 @ 04:44 AM EST
I wonder how they'd react if everyone on the anti-software-patent side sent
their presentations in as ISO/IEC 26300:2006/Amd 1:2012 format? (That's
OpenDocument's ISO number according to Wikipedia)


Authored by: Anonymous on Friday, January 04 2013 @ 04:49 AM EST
The question I would have addressed is whether software patents describe a new
machine or a new process.

If the idea is that a "computer + program" creates a new machine, it
should be recognized that a "computer + data" also produces a new
machine. Should not then (to be consistent) "data" be considered
patentable subject matter?

If it is the 'process' of the computer reading the software that is being
patented, then why are the software producers being held liable for infringement?
They are not performing the process, they are merely producing instructions on
how the process should be performed.


Authored by: feldegast on Friday, January 04 2013 @ 05:28 AM EST
It should be fairly straightforward for the Groklaw community to write a single submission including all the points we have gathered to date; using PolR's excellent articles as a starting point would be a significant start...

---
IANAL
My posts are ©2004-2013 and released under the Creative Commons License
Attribution-Noncommercial 2.0
P.J. has permission for commercial use.


Authored by: Anonymous on Friday, January 04 2013 @ 06:39 AM EST
"Most developers I know believe software is unpatentable subject
matter."

I think this is as much a comment on the type of developers that you know as it
is on patentability.

As a long-term commercial software developer, very few of the developers of
proprietary software that I've worked with over the years have any idea what I'm
talking about when I object to the concept of software patents, and all of the
companies have had active programs in place to encourage developers to submit
patent ideas, with significant bonus payments in place as carrots.

I have never personally become involved in the patent rat-race, but know many
colleagues that have, and many of the quite laughable concepts that have been
put forward as patentable material have proceeded to get patents.


Authored by: Ian Al on Friday, January 04 2013 @ 07:19 AM EST
Software by its nature is operation-based and is typically embodied in the form of rules, operations, algorithms or the like. Unlike hardware inventions, the elements of software are often defined using functional language. While it is permissible to use functional language in patent claims, the boundaries of the functional claim element must be discernible... Compliance with 35 U.S.C. 112(b) (second paragraph prior to enactment of the Leahy-Smith America Invents Act (AIA)) ensures that a claim is definite.

The Supreme Court in Mayo:

[T]he Government argues that virtually any step beyond a statement of a law of nature itself should transform an unpatentable law of nature into a potentially patentable application sufficient to satisfy §101's demands. Brief for United States as Amicus Curiae. The Government does not necessarily believe that claims that (like the claims before us) extend just minimally beyond a law of nature should receive patents. But in its view, other statutory provisions—those that insist that a claimed process be novel, 35 U. S. C. §102, that it not be "obvious in light of prior art," §103, and that it be "full[y], clear[ly], concise[ly], and exact[ly]" described, §112—can perform this screening function. In particular, it argues that these claims likely fail for lack of novelty under §102.

This approach, however, would make the "law of nature" exception to §101 patentability a dead letter. The approach is therefore not consistent with prior law. The relevant cases rest their holdings upon section 101, not later sections. Bilski, Diehr, Flook, Benson. See also H. R. Rep. No. 1923, ("A person may have 'invented' a machine or a manufacture, which may include any thing under the sun that is made by man, but it is not necessarily patentable under section 101 unless the conditions of the title are fulfilled" (emphasis added)). We recognize that, in evaluating the significance of additional steps, the §101 patent-eligibility inquiry and, say, the §102 novelty inquiry might sometimes overlap. But that need not always be so. And to shift the patent eligibility inquiry entirely to these later sections risks creating significantly greater legal uncertainty, while assuming that those sections can do work that they are not equipped to do.

§101 says that 'any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof' is patentable subject matter. This means that inventions 'typically embodied in the form of rules, operations, algorithms or the like' are not statutory subject matter.

The Supreme Court has, many times, pointed out that it is a waste of time and money considering issues of 35 U.S.C. 112(b) if 'topics relating to patents that are particularly relevant to the software community' are easily addressed by considering §101 first.

---
Regards
Ian Al
Software Patents: It's the disclosed functions in the patent, stupid!


Authored by: Anonymous on Friday, January 04 2013 @ 07:37 AM EST
"I know the USPTO doesn't want to hear that software and
patents totally need to get a divorce, but since most
software developers believe that, maybe somebody should at
least mention it to them, if only as a future topic for
discussion."

At the risk of violating something akin to Godwin's law, I
cannot help but think of how the situation resembles the
U.S. dispute over slavery from the 1820s to the Civil War
(note that I am in no way suggesting that software patents
are remotely as wrong as slavery).

Just as the USPTO doesn't want to hear that software patents
should be abolished, it was not considered appropriate or
realistic to mention the abolition of slavery.
"Abolitionist" was a dirty word, much like "communist" or
"radical" in the 1950s. Anyone who suggested that there was
any moral problem with racial enslavement was considered to
be inciting terrorism, particularly after the Nat Turner
rebellion. Abolition could not be discussed on the floor of
the U.S. Congress, nor could anti-slavery literature be sent
through the mail. Great concern was held for the economic
rights of slaveholders, who after all had invested huge sums
of money to acquire their "property". Anti-slavery
discussion was limited to whether the "peculiar institution"
should be allowed to spread into new states and territories,
and how to maintain the "proper" balance of political power
between free and slave states. Until nearly the end of the
Civil War, elimination of slavery was politically off the
table even for the Union.

So I think we are in a miniature version of this in the
dispute over software patents. The USPTO may be willing to
talk about minor issues at the edges, but the core issue is
that software patents are simply wrong - all of them, with
no exceptions. It is a fatally flawed idea. As RMS put it,
if someone independently programs a solution to a software
problem, when should anyone else ever be allowed to prevent
that? The answer is never, not under any circumstances. The
monied interests may not want that issue to be raised, but
that really is the issue.


Authored by: Anonymous on Friday, January 04 2013 @ 10:00 AM EST
The comment that source code should be required, I feel, is
slightly overboard. While software patents are WAY to
generally specified, detail pseudo code showing in detail
(sorry for the department of redundancy department speak)
the algorithm should be sufficient. (Of course, specifying
an algorithm makes it plain that it IS an algorithm, and not
patentable.)
After all, if the patent covers the idea, then the
particular language used to implement the idea is
irrelevant, hence the actual source code (a particular
language implementation) is also irrelevant.
Disclaimers: I have software patents (done as a defensive
move), and do not believe any algorithm (implemented in
software or otherwise) should be patentable.


Authored by: Anonymous on Friday, January 04 2013 @ 10:33 AM EST
And you should have to provide source code. No excuses. If the point is that the public is supposed to get something out of patent law beyond higher prices, and if it's supposed to be specific enough that someone skilled in the art can know how to duplicate it, surely source code is required.

Requiring source code is akin to requiring the implementation, but if a physical patented device requires a nut and bolt of a certain size and I don't have one, but use a substitute which works perfectly fine, then am I in breach of the patent or not? Similarly, if the supplied source code was Z8000 assembler and I program the "invention" in 68000 assembler, have I infringed the patent or not?

But source code is protected by Copyright, so why the need for a patent as well?

The requirement should be that the algorithm is clearly written out so that any programmer could follow it and code it in whatever language they like (and get it to perform in the same way), just as the USPTO requires:

The specification must include a written description of the invention and of the manner and process of making and using it, and is required to be in such full, clear, concise, and exact terms as to enable any person skilled in the technological area to which the invention pertains, or with which it is most nearly connected, to make and use the same.

Very few software patents actually do this: full, clear, concise and exact terms would be the algorithm; they are woolly and very broad. For example, Euclid's algorithm to find the highest common factor of two numbers:

Currently it would be something like:

Claim 1. A method whereby two numbers are input and their highest common factor output.
Claim 2. It is ascertained that the difference between the quotient of the division of the dividend by the divisor and the same, ignoring partial results multiplied by the divisor and the quotient is zero or not.
Claim 3. Claim 2 is further extended by use of such ascertainment to a further decision being made to modify the numbers so that ascertainment of a second level can be made as to that which was originally sought.
Claim 4. By use of claims 2 and 3 the output will be of a desired result.

Along with more waffle that may, or may more likely not, actually describe how to do it; whereas what should be required is something like:

1. Find the remainder of the first number divided by the second
2. If the remainder is not zero, make the second number the first number and the remainder the second number and repeat from step 1
3. The highest common factor is the second number.

Alternatively, I could write it as:

1. Let the two numbers be N1 and N2
2. Find the remainder R when N1 is divided by N2
3. If R is not zero, let N1 be N2 and N2 be R and repeat from step 2
4. The Highest Common Factor is N2

Either of those tells anyone who wants to program Euclid's algorithm exactly how to do it: the actual lines of code are left up to the programmer; for example, in C:

/* Euclid's algorithm: replace (n1, n2) by (n2, remainder) until the remainder is zero. */
int hcf(int n1, int n2)
{
    int r;
    for (; (r = n1 % n2) != 0; n2 = r)
        n1 = n2;
    return n2;
}

In BASIC, Pascal or Fortran the code would be different, e.g. BBC BASIC:

10 DEF FN_hcf(n1, n2)
20 LOCAL r
30 REPEAT
40 r = n1 MOD n2: n1 = n2 : n2 = r
50 UNTIL r = 0
60 = n1

Or in SuperBASIC on a QL:

10 DEFine FuNction hcf(n1, n2)
20 LOCal lp, r
30 REPeat lp
40 r = n1 MOD n2: IF r = 0: EXIT lp
50 n1 = n2, n2 = r
60 END REPeat lp
70 RETurn n2
80 END DEFine

All different source codes, all designed for different environments, but all execute the same algorithm*. If one source code was included, which one would it be? Also, would the other source codes be non-infringing?

So while I agree that source code could be included, the actual algorithm that the source code executes needs to be clearly written (as the USPTO supposedly requires) - perhaps in a standard pseudocode?

*There are subtle differences between the versions due to the size of integers that the modulus operator can utilise; the code could be written to trap for things like one (or both) numbers being zero or negative if it was used in an environment where rational inputs (i.e. both numbers greater than zero) cannot be guaranteed.


Authored by: 351-4V on Friday, January 04 2013 @ 12:11 PM EST
maybe somebody should at least mention it to them

P.J. is right here, of course. I'll go further and propose that the only correct course of action is to insist upon an end to software patents with no further discussion. Too extreme, you say? While I normally agree that dialog and the compromise resulting from cooperation are usually for the best, I must take exception in this case.

If we agree to "partner" with them and commence a discussion of how to fix software patents, we have already compromised and allowed their argument in favor of software patents. The moment we open this dialog, we concede our main point and gain nothing in return. This is not compromise, this is capitulation.

No doubt there will be many well-intentioned people of high repute that will engage the USPTO in a discussion of how to fix a software patent and I wish them well. But do not be surprised when this spirit of cooperation and compromise is spun in a press release that claims "Major software authors see need for patents". This will not happen only because of malice, but it will happen honestly as well because if you are talking with someone about fixing something, you are implicitly supporting the thing's existence.

I do support discussion and dialog with the USPTO but that discussion should be limited to how to dismantle the software patents that exist at the present time and how to create a developer environment free of software patents altogether.


Authored by: Anonymous on Friday, January 04 2013 @ 12:22 PM EST
I think that the argument that software should simply not be patentable should be made. Get it on the record. But we also have to realistically recognize that such an argument will not be accepted. So we need to do like we've seen a number of legal teams do: we make multiple arguments. If one is not accepted, we hope that the next one is.

If software patents are not going to be completely prohibited, how do we limit the damage? One way is by making them less vague, less "covering everything under the sun". How do we do that?

One way is by using the phrase "one ordinarily skilled in the art". My proposal is that the USPTO hire a bunch of programmers who are "ordinarily skilled in the art" - say, three to five years experience. (Five years is where you start getting into "senior software engineer" territory.) Each software patent application is handed to three of the USPTO's software guys. Each one makes a simple decision: Yes, given the information in the application, I know how to implement that; or else No, I don't know how to implement that because it's too vague. If the majority votes No, the application is rejected because it does not contain sufficient detail to enable one ordinarily skilled in the art to practice the invention.

They might also, at the same time, make a Yes or No vote on obviousness.

The point here is to get actual software people rather than just patent examiners looking at the applications, and weeding out the junk that never should have been patented. Weed out the vague patents that don't claim anything concrete. Weed out the obvious stuff. We'd have a lot fewer problem patents. (We'd still have software patents, which you may consider to be a problem in and of itself, but I don't think we're going to win that battle this year.)

MSS2


Authored by: Anonymous on Friday, January 04 2013 @ 01:10 PM EST
1) It's easier to try something better if you can go back.

2) Find out that it's easier to run without shackles.

3) Don't attack the portfolio monsters, soothe them instead.

4) Keep old patents intact. They'll be worthless soon anyway.


Authored by: Anonymous on Friday, January 04 2013 @ 01:42 PM EST

It seems to me that the USPTO would like to make patents more specific and more restricted, so that the full width of a process can be covered by more patents. More patents covering a specific area = more revenue for the patent office.


Authored by: Anonymous on Friday, January 04 2013 @ 02:23 PM EST
Groklaw is an amazing blog and I read it with regularity, but when I read things like this, I know you've hit your collective blind spot: "I know the USPTO doesn't want to hear that software and patents totally need to get a divorce, but since most software developers believe that, maybe somebody should at least mention it to them, if only as a future topic for discussion."

For your information, my dear lawyers, it is CONGRESS and the COURTS that decide what subject matter is considered patentable. The administration simply tries to do the best they can to implement these overarching policies. They can't change them. They'd be sued if they did.

Seriously, I understand how logic can be subverted by high emotions, but that was embarrassing.


Authored by: Anonymous on Friday, January 04 2013 @ 03:02 PM EST
This "round table" will be stuffed with highly paid lobbyists and
lawyers from large corporations and patent trolls.

Joe Average Programmer will have to stand at the sideline and watch how the big
boys spin their story that software patents are absolutely needed to ensure
America's leadership and glory, and that being against software patents is
unamerican.


Authored by: OpenSourceFTW on Friday, January 04 2013 @ 03:20 PM EST
We definitely need to make sure someone brings up throwing out SW patents wholesale.

They are unlikely to listen to it, but at least it will be on record.

There should also be a more moderate approach, but not too moderate. RMS's suggestion on barring patents on general purpose computers is an excellent start to clearing up the mess.

I feel that attempting to reason with them about math and algorithms is not going to work well, so give them economic reasons as well. Show them how SW development is so stifled by patents that any developer can be sued over just about anything. Multiple times. Without warning. And that the developer can do exactly NOTHING to avoid it except not write software. Searching a patent database is no help at all. In fact, if they find out the developer happened to glance at their patent, boom, treble damages.

Something needs to be said about how too many SW patents are so broad that they cover huge segments of computing, even overlapping with other SW patents.

Many SW patents are so vague that even if you know about them in advance, there is not much you can do to avoid infringing upon them. No implementation is described.

The entire point of a patent is to encourage public publishing of an invention to benefit the whole country. The incentive is the temporary monopoly granted to practice the patent, after which the invention is available to all. SW patents are written in such a way as to subvert this, being convoluted, vague, and broad. The monopoly part is used (and abused) alright, but what happened to the public publishing that is the entire point of the whole process?

If patents are no longer about publishing an invention, then why have them at all?

Also, the technology industry moves at warp speed. A SW patent's term is a lifetime in computing terms, so patent terms at least need to be much shorter.


Authored by: YurtGuppy on Friday, January 04 2013 @ 03:30 PM EST

"..an algorithm must be expressed in sufficient detail to provide means to
accomplish the claimed function."

---
a small fish in an even smaller pond


Caffeinated Seas Found off U.S. Pacific Northwest


Comments:"Caffeinated Seas Found off U.S. Pacific Northwest"

URL:http://news.nationalgeographic.com/news/2012/07/120730-caffeinated-seas-pacific-northwest-caffeine-coffee-science/


The Pacific Northwest may be the epicenter of U.S. coffee culture, and now a new study shows the region's elevated caffeine levels don't stop at the shoreline.

The discovery of caffeine pollution in the Pacific Ocean off Oregon is further evidence that contaminants in human waste are entering natural water systems, with unknown consequences for wildlife and humans alike, experts say.

(Read National Geographic magazine's "Caffeine: What's the Buzz?")

Scientists sampled both "potentially polluted" sites—near sewage-treatment plants, larger communities, and river mouths—and more remote waters, for example near a state park.

Surprisingly, caffeine levels off the potentially polluted areas were below the detectable limit, about 9 nanograms per liter. The wilder coastlines were comparatively highly caffeinated, at about 45 nanograms per liter.

"Our hypothesis from these results is that the bigger source of contamination here is probably on-site waste disposal systems like septic systems," said study co-author Elise Granek.

The difference may be due to more stringent monitoring in more developed areas.

"Wastewater-treatment plants, for the most part, have to do regular monitoring to ensure they are within certain limits," added Granek, a Portland State University marine ecologist. Granek noted, though, that caffeine is unregulated, and so is not specifically monitored.

By contrast, for on-site waste-disposal systems, "there is frequently not much monitoring going on."

The big sewage plants may also be at an advantage because Oregon cities are relatively small. The plants don't have to process the sheer volume of waste associated with a major city such as Boston, which one study has found to be pumping fairly high levels of caffeine into its harbor.

(Related: "Cocaine, Spices, Hormones Found in Drinking Water.")

"Contaminant Soup" Has Unknown Impacts

Hydrologist Dana Kolpin welcomed the new research, saying caffeine concentrations in water have been documented before but more often in freshwater than marine environments.

"Caffeine is pretty darn ubiquitous, and there is growing evidence that this and other understudied contaminants are out there,"  said Kolpin, of the USGS's Toxic Substances Hydrology Program in Iowa City, Iowa.

In our waste "there is a whole universe of potential contaminants including pharmaceuticals, hormones, personal-care products like detergents or fragrances, even artificial sweeteners."

Caffeine is something of a canary in a coal mine for elevated levels of human contaminants in water, said Kolpin, who wasn't part of the new study.

In other words, if caffeine's in the water, chances are there are other contaminants too.

"What does this mean?" he asked. "Aquatic organisms are getting hit with a soup of low-level contaminants.

"Are there environmental or human-health consequences from exposure to these compounds or different mixtures of compounds? Obviously that's the million-dollar question."

(Infographic: How Coffee Changed America.)

Caffeine and Cellular Stress in Animals

Caffeine has been documented in waters around the world, including Boston Harbor, Puget Sound, the Mediterranean, and the North Sea. It might persist for up to 30 days in marine waters, study co-author Granek noted.

But the stimulant's impact on natural ecosystems is unknown. Nonlethal effects may be invisible but could have repercussions up and down the food chain and from generation to generation.

Granek and colleagues have shown in lab experiments that caffeine at the levels found offshore does affect intertidal mussels, causing them to produce specialized proteins in response to environmental stress.

The levels found in the remote study areas, for example, "did cause these mussels to exhibit cellular stress," she said. "If we expose them to higher concentrations or longer terms, do we see changes in growth rates, or changes in reproductive output?" The team hopes to find out with future experiments.

Kolpin said some studies of other contaminants have shown more drastic effects, including one at a remote Ontario Lake, which concluded that estrogen from birth control pills can cause wild fish populations to collapse.

"With caffeine, we're not yet sure about its environmental effects," he said. "But it's a very nice tracer, even if it doesn't have a large effect, because in most parts of the world, you know that this is coming from a human waste source."

The Pacific Northwest caffeine research was published in the July 2012 edition of the Marine Pollution Bulletin.

A year without caffeine, part 1 | Bryan Alexander


Comments:"A year without caffeine, part 1 | Bryan Alexander"

URL:http://bryanalexander.org/2013/01/04/a-year-without-caffeine/


I used to drink more caffeine than you do.

Note the bared teeth, the delighted eyes, and the paw wrapped protectively around that frail cup.

That is almost certainly true.  From my college days (1985ff) through the end of 2011, my caffeine consumption was extravagant.  Epic.  Other people speak of the number of cups of coffee they drink per day; they are pikers.  For me, the number was pots.  Two, three, or four pots between grimly waking up and falling asleep.  From the first thing in the morning through multiple after-dinner imbibings, the blessed black bean brew was my brain’s constant companion.

Ah, Jolt.

Along with coffee, soda was my parallel caffeine delivery system.  I still recall the glorious days of Jolt Cola (more sugar, and twice the caffeine!), two-liters of which saw me through my sophomore and junior years.  Coke was too basic for me, but doable when nothing else was available.  Mello Yellow was fine, but hard to obtain.  Although it had a splendidly lying name: lots of caffeine, so nothing mellow; green, not yellow color.

Mountain Dew was my drink of choice, sweet and fiercely caffeinated.  One year my housemates and I purchased enough Mountain Dew cans in bulk to make a six foot tall stack.  It nearly replaced water for us. I was quaffing a can with breakfast, bottles during the day, cups in the evening, plus a final can in bed, just to relax.

Gunpowder tea, preferably.

Other caffeine mechanisms also supplied my needs.  Chocolate, especially chocolate-covered espresso beans, helped.  Black tea sometimes sufficed when I was among Brits, or just wanted the taste.  Hot chocolate was fine in winter.  But Turkish coffee, ah, that was the sublime caffeine delivery system.  I fell in love with the potent stuff in Bosnia during the 1990s war, and sought it out ever afterwards.  I visited an academic in Mostar whose house had taken a hit from a shell or missile.  In its ruins, on a half-shattered gas-powered stove, the prof and his wife brewed Turkish coffee every day.  I recognized my fellows, members of the worldwide society of caffeine devotees.  That concentrated bolt of coffee was like neutronium, or anti-Kryptonite for Superman, an outrageously heavy distillate for my gleeful brain.

The ur-coffee.

I could also combine caffeination systems.  During a long drive I’d load up with Mountain Dew and a giant cup of coffee.  After a couple of hours I’d stop to replenish those sources, buy some Water Joe, then add a couple of doses of Stok to the steaming coffee.  (At home my wife forbade me from brewing coffee with Water Joe, lest my chest simply explode.)

Once a chemist friend gave me a small container of pure caffeine.  She warned me not to just snarf the white powder straight down, so I took to dabbing a finger in it.  That was peppy.

Too Much Coffee Man, my ideal superhero.

Why did I drink so much caffeine?  It wasn’t simply chemical or behavioral addiction.  My habit began in college as a way of providing enough energy to do both my studies and jobs.  I took heavy courseloads (double or triple majoring), while working.  After graduation that overload of work never went away.  I did my M.A. in a single year, while working at a bookstore.  For my PhD I was teaching nearly the entire time.  As a professor I taught four (4) classes per term, while conducting research, plus doing lots of service (committees, technology, advising, etc), plus consulting on the side.  I also married, and we had two children.  With such long days (and nights), the caffeine was essential.

After a while caffeine no longer provided stimulus.  Instead it became a way of recovering some basic energy level from a pit of exhaustion.  A strong dose stopped making me sparky and manic, as far as I can tell, but powered me up just enough to get things done over the course of a very long, but well fueled, day.  I suppose this is another way of saying “maintenance level”.

I would have continued along this glorious, bean-strewn path until my body failed, and it nearly did.  Throughout 2010 and 2011 I suffered frequent bouts of gut pain.  This wasn’t indigestion, but fiery shocks, enough to wake me up at night or knock me off track during the day.  The pains increased in frequency, duration, and intensity, ultimately coming several times a day, and leading to regular nausea.  Besides being painful and disgusting, these attacks were debilitating.  Ultimately I decided to seek medical advice.  Well, “I decided” really means “I gave into my wife’s patient, well-informed concern”.

On December 22, 2011, we arrived at the family clinic we saw for most medical questions.  I described my symptoms to the doctor, who looked concerned.  He asked me to describe my caffeine intake, and his facial expressions were quite entertaining.  He demonstrated incredulity, dismay, outrage, amusement, followed by iron determination.  When I finished, the doc laid it out for me.

“Either I hospitalize you tomorrow, or you go cold turkey on caffeine.  Immediately.”

To be continued.

(photos by Cogdog, 7 Bits of Truth, Akuppa, Nate Steiner, Wikipedia)


[APK] Seeder 1.1 entropy generator to provide significant lag reduction - xda-developers


Comments:" [APK] Seeder 1.1 entropy generator to provide significant lag reduction - xda-developers"

URL:http://forum.xda-developers.com/showthread.php?t=1987032&nocache=0



12th November 2012, 08:18 AM

(Last edited by lambgx02; Today at 05:27 PM.)

Senior Member - OP

Posts: 151

Join Date: Jul 2008

Location: Montreal

Hey everyone,

So, I was experiencing significant lag as we all do from time to time, and decided I was going to get to the bottom of it.

After tracing and debugging for hours, I discovered the source of 90% of Android's lag. In a word, entropy (or lack thereof).

On some versions of Android, the JVM (and other components) read large amounts of random data from the blocking /dev/random device.

Random data is used for all kinds of stuff.. UUID generation, session keys, SSL.. when we run out of entropy, the process blocks. That manifests itself as lag. The process cannot continue until the kernel generates more high quality random data.

So, I cross-compiled rngd, and used it to feed /dev/urandom into /dev/random at 1 second intervals.

Result? Significant lag reduction!

Chrome, maps, and other heavy applications switch instantaneously, and map tiles populate as fast as I can scroll. You know how sometimes when you hit the home button, it takes 5-10 seconds for the home screen to repopulate? Yeah. Blocking on read of /dev/random. Problem solved. But don't take my word for it .. give it a shot!
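To make the mechanism concrete, here is a minimal sketch of such a feeder loop in Haskell - an illustration only, not the actual rngd binary. Note that a plain write into /dev/random mixes bytes into the pool but does not credit the kernel's entropy estimate; the real rngd does that via the RNDADDENTROPY ioctl.

import qualified Data.ByteString as BS
import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import System.IO

-- Copy a chunk of pseudorandom bytes from /dev/urandom into /dev/random
-- once per second, so that readers of /dev/random stop blocking.
main :: IO ()
main =
    withBinaryFile "/dev/urandom" ReadMode  $ \src ->
    withBinaryFile "/dev/random"  WriteMode $ \dst ->
    forever $ do
        chunk <- BS.hGet src 512   -- grab 512 pseudorandom bytes
        BS.hPut dst chunk          -- stir them into the blocking pool
        hFlush dst
        threadDelay 1000000        -- wake once per second, like the 1s interval above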

Update!
Version 1.1 attached. This version uses the release signature, so you will need to uninstall the old XDA version first!

This version fixes the issue some users were seeing on later Jellybean ROMs, where the UI would misreport the RNG service status.

Note that this APK is actually compatible with all Android versions, and all (armel) devices. It's not at all specific to the Captivate Glide.

Caveats

  • There is a (theoretical) security risk, in that seeding /dev/random with /dev/urandom decreases the quality of the random data. In practice, the odds of this being cryptographically exploited are far lower than the odds of someone attacking the OS itself (a much simpler challenge).
  • This may adversely affect battery life, since it wakes every second. It does not hold a wakelock, so it shouldn't have a big impact, but let me know if you think it's causing problems. I can add a blocking read to the code so that it only executes while the screen is on. On the other hand, many of us attribute lag to lacking CPU power. Since this hack eliminates almost all lag, there is less of a need to overclock, potentially reducing battery consumption.

If you try it, let me know how it goes.

ROM builders - feel free to integrate this into your ROMs (either the .apk / application, or just the rngd binary called from init.d)!

If anyone's interested, I've launched a paid app on the Play store for non-xda users. As I add features I'll post the new versions here as a thanks to you guys (and xda community at large for being such a great resource). But if anyone's interested in the market's auto-update feature, just thought I'd mention it.

Cheers!

The Following 1,041 Users Say Thank You to lambgx02 For This Useful Post

12th November 2012, 05:23 PM

Senior Member - OP

Posts: 151

Join Date: Jul 2008

Location: Montreal

Quote:

Will this work for cwmr 6

Not yet. If a few people try it and report positive results, I'll make a flashable image. Stay tuned. I updated the first post with instructions. Please be careful, though! Let me know if you need more detail.

12th November 2012, 05:49 PM

Member

Posts: 65

Join Date: Sep 2012

Quote:

I updated the first post with instructions. Please be careful, though! Let me know if you need more detail.

I got troubles. Using Terminal Emulator I got an error message when I type the 3rd line ("cp /mnt/sdcard/rngd /system/xbin"), it says: "sh: cp: not found"

12th November 2012, 07:00 PM

Senior Member - OP

Posts: 151

Join Date: Jul 2008

Location: Montreal

Quote:

I got troubles. Using Terminal Emulator I got an error message when I type the 3rd line ("cp /mnt/sdcard/rngd /system/xbin"), it says: "sh: cp: not found"

Where did you transfer rngd to on your phone? Have to make sure the source path matches.

12th November 2012, 07:15 PM

Member

Posts: 65

Join Date: Sep 2012

Quote:

Where did you transfer rngd to on your phone? Have to make sure the source path matches.

It does match, that's why I'm confused.. :\ which terminal do you use?

12th November 2012, 07:18 PM

(Last edited by Zero Computing; 12th November 2012 at 07:50 PM.)

Member

Posts: 49

Join Date: Sep 2012

Will test this later, for sure! If all goes well, may I request permissions to include this with the MIUI build I will be learning to make and attempting to produce?

edit: My phone wasn't particularly laggy before except when playing games, but there is a noticeable difference after executing this binary. Noticed a few small hangs but unsure if they are related to this binary.

12th November 2012, 08:06 PM

(Last edited by thegreatergood; 12th November 2012 at 08:25 PM.)

Senior Member

Posts: 409

Join Date: Sep 2012


 

I've tested it ... integrated it into my rom and installed ... there was no lag even right after its first boot ... it's incredibly smooth ... though I too noticed small hangs ... though I attributed this to the device getting ahead of itself ....

Sent from my SGH-I927 using xda premium




Nuclear weapon statistics using monoids, groups, and modules in Haskell


Comments:"Nuclear weapon statistics using monoids, groups, and modules in Haskell"

URL:http://izbicki.me/blog/nuclear-weapon-statistics-using-monoids-groups-and-modules-in-haskell


The Bulletin of the Atomic Scientists tracks the nuclear capabilities of every country. We’re going to use their data to demonstrate Haskell’s HLearn library and the usefulness of abstract algebra to statistics. Specifically, we’ll see that the categorical distribution and kernel density estimates have monoid, group, and module algebraic structures. We’ll explain what this crazy lingo even means, then take advantage of these structures to efficiently answer real-world statistical questions about nuclear war. It’ll be a WOPR!

Before we get into the math, we’ll need to review the basics of nuclear politics.

The nuclear Non-Proliferation Treaty (NPT) is the main treaty governing nuclear weapons. Basically, it says that there are five countries that are “allowed” to have nukes: the USA, UK, France, Russia, and China. “Allowed” is in quotes because the treaty specifies that these countries must eventually get rid of their nuclear weapons at some future, unspecified date. When another country, for example Iran, signs the NPT, they are agreeing to not develop nuclear weapons. What they get in exchange is help from the 5 nuclear weapons states in developing their own civilian nuclear power programs. (Iran has the legitimate complaint that Western countries are actively trying to stop its civilian nuclear program when they’re supposed to be helping it, but that’s a whole ‘nother can of worms.)

The Nuclear Notebook tracks the nuclear capabilities of all these countries.  The most-current estimates are from mid-2012.  Here’s a summary:

Country  Delivery Method  Warhead  Yield (kt)  # Deployed
USA      ICBM             W78      335         250
USA      ICBM             W87      300         250
USA      SLBM             W76      100         468
USA      SLBM             W76-1    100         300
USA      SLBM             W88      455         384
USA      Bomber           W80      150         200
USA      Bomber           B61      340         50
USA      Bomber           B83      1200        50
UK       SLBM             W76      100         225
France   SLBM             TN75     100         150
France   Bomber           TN81     300         150
Russia   ICBM             RS-20V   800         500
Russia   ICBM             RS-18    400         288
Russia   ICBM             RS-12M   800         135
Russia   ICBM             RS-12M2  800         56
Russia   ICBM             RS-12M1  800         18
Russia   ICBM             RS-24    100         90
Russia   SLBM             RSM-50   50          144
Russia   SLBM             RSM-54   100         384
Russia   Bomber           AS-15    200         820
China    ICBM             DF-3A    3300        16
China    ICBM             DF-4     3300        12
China    ICBM             DF-5A    5000        20
China    ICBM             DF-21    300         60
China    ICBM             DF-31    300         20
China    ICBM             DF-31A   300         20
China    Bomber           H-6      3100        20

 

I’ve consolidated all this data into the file nukes-list.csv, which we will analyze in this post.  If you want to try out this code for yourself (or the homework question at the end), you’ll need to download it.  Every line in the file corresponds to a single nuclear warhead, not delivery method.  Warheads are the parts that go boom!  Bombers, ICBMs, and SSBN/SLBMs are the delivery method.

There are three things to note about this data.  First, it’s only estimates based on public sources.  In particular, it probably overestimates the Russian nuclear forces. Other estimates are considerably lower.  Second, we will only be considering deployed, strategic warheads.  Basically, this means the “really big nukes that are currently aimed at another country.”  There are thousands more tactical warheads, and warheads in reserve stockpiles waiting to be disassembled.  For simplicity—and because these nukes don’t significantly affect strategic planning—we won’t be considering them here.   Finally, there are 4 countries who are not members of the NPT but have nuclear weapons: Israel, Pakistan, India, and North Korea.  We will be ignoring them here because their inventories are relatively small, and most of their weapons would not be considered strategic.

Programming preliminaries

Now we’re ready to start programming. First, let’s import our libraries:

> import Control.Lens
> import Data.Csv
> import qualified Data.Vector as V
> import qualified Data.ByteString.Lazy.Char8 as BS
>
> import HLearn.Algebra
> import HLearn.Models.Distributions
> import HLearn.Gnuplot.Distributions

Next, we load our data using the Cassava package.  (You don’t need to understand how this works.)

> main = do
>     Right rawdata <- fmap (fmap V.toList . decode True) $ BS.readFile "nukes-list.csv"
>         :: IO (Either String [(String, String, String, Int)])

And we’ll use the Lens package to parse the CSV file into a series of variables containing just the values we want.  (You also don’t need to understand this.)

> let list_usa    = fmap (\row -> row^._4) $ filter (\row -> (row^._1)=="USA"   ) rawdata
> let list_uk     = fmap (\row -> row^._4) $ filter (\row -> (row^._1)=="UK"    ) rawdata
> let list_france = fmap (\row -> row^._4) $ filter (\row -> (row^._1)=="France") rawdata
> let list_russia = fmap (\row -> row^._4) $ filter (\row -> (row^._1)=="Russia") rawdata
> let list_china  = fmap (\row -> row^._4) $ filter (\row -> (row^._1)=="China" ) rawdata

NOTE: All you need to understand about the above code is what these list_country variables look like. So let’s print one:

> putStrLn $ "List of American nuclear weapon sizes = " ++ show list_usa

gives us the output:

List of American nuclear weapon sizes = fromList [335,335,335,335,335,335,335,335,335,335 ... 1200,1200,1200,1200,1200]

If we want to know how many weapons are in the American arsenal, we can take the length of the list:

> putStrLn $ "Number of American weapons = " ++ show (length list_usa)

We get that there are 1951 American deployed, strategic nuclear weapons.  If we want to know the total “blowing up” power, we take the sum of the list:

> putStrLn $ "Explosive power of American weapons = " ++ show (sum list_usa)

We get that the US has  516 megatons of deployed, strategic nuclear weapons.  That’s the equivalent of 1,033,870,000,000 pounds of TNT.

To get the total number of weapons in the world, we concatenate every country’s list of weapons and find the length:

> let list_all = list_usa ++ list_uk ++ list_france ++ list_russia ++ list_china
> putStrLn $ "Number of nukes in the whole world = " ++ show (length list_all)

Doing this for every country gives us the table:

Country  Warheads  Total explosive power (kt)
USA      1,951     516,935
UK       225       22,500
France   300       60,000
Russia   2,435     901,000
China    168       284,400
Total    5,079     1,784,835

 

Now let’s do some algebra!

Monoids and groups

In a previous post, we saw that the Gaussian distribution forms a group. This means that it has all the properties of a monoid—an empty element (mempty) that represents the distribution trained on no data, and a binary operation (mappend) that merges two distributions together—plus an inverse. This inverse lets us “subtract” two Gaussians from each other.

It turns out that many other distributions also have this group property. For example, the categorical distribution.  This distribution is used for measuring discrete data. Essentially, it assigns some probability to each “label.”  In our case, the labels are the size of the nuclear weapon, and the probability is the chance that a randomly chosen nuke will be exactly that destructive.  We train our categorical distribution using the train function:

> let cat_usa = train list_usa :: Categorical Int Double

If we plot this distribution, we’ll get a graph that looks something like:

A distribution like this is useful to war planners from other countries.  It can help them statistically determine the amount of casualties their infrastructure will take from a nuclear exchange.

Now, let’s train equivalent distributions for our other countries.

> let cat_uk     = train list_uk     :: Categorical Int Double
> let cat_france = train list_france :: Categorical Int Double
> let cat_russia = train list_russia :: Categorical Int Double
> let cat_china  = train list_china  :: Categorical Int Double

Because training the categorical distribution is a group homomorphism, we can train a distribution over all nukes by either training directly on the data:

> let cat_allA = train list_all :: Categorical Int Double

or we can merge the already generated categorical distributions:

> let cat_allB = cat_usa <> cat_uk <> cat_france <> cat_russia <> cat_china

Because of the homomorphism property, we will get the same result both ways. Since we’ve already done the calculations for each of the countries, method B will be more efficient—it won’t have to repeat work we’ve already done.  If we plot either of these distributions, we get:

The thing to notice in this plot is that most countries have a nuclear arsenal that is distributed similarly to the United States—except for China.  These Chinese ICBMs will become much more important when we discuss nuclear strategy in the last section.

But nuclear war planners don’t particularly care about this complete list of nuclear weapons.  What war planners care about is the survivable nuclear weapons—that is, weapons that won’t be blown up by a surprise nuclear attack.  Our distributions above contain nukes dropped from bombers, but these are not survivable.  They are easy to destroy.  For our purposes, we’ll call anything that’s not a bomber a survivable weapon.

We’ll use the group property of the categorical distribution to calculate the survivable weapons.  First, we create a distribution of just the unsurvivable bombers:

> let list_bomber = fmap (\row -> row^._4) $ filter (\row -> (row^._2)=="Bomber") rawdata
> let cat_bomber = train list_bomber :: Categorical Int Double

Then, we use our group inverse to subtract these unsurvivable weapons away:

> let cat_survivable = cat_allB <> (inverse cat_bomber)

Notice that we calculated this distribution indirectly—there was no possible way to combine our variables above to generate this value without using the inverse! This is the power of groups in statistics.

More distributions

The categorical distribution is not sufficient to accurately describe the distribution of nuclear weapons. This is because we don’t actually know the yield of a given warhead. Like all things, it has some manufacturing tolerances that we must consider. For example, if we detonate a 300 kt warhead, the actual explosion might be 275 kt, 350 kt, or the bomb might even “fizzle out” and have almost a 0kt explosion.

We’ll model this by using a kernel density estimator (KDE).  The KDE basically takes all our data points, assigns each one a probability distribution called a “kernel,” then sums these kernels together.  It is a very powerful and general technique for modelling distributions… and it also happens to form a group!
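Before bringing in HLearn’s version, it may help to see how small the core idea is. The following is a from-scratch sketch (the names kdeDensity and gauss are mine, not HLearn’s):

-- Evaluate a Gaussian KDE with bandwidth h, trained on samples xs, at point x.
-- Each sample contributes one "bump"; the density is their normalized sum.
kdeDensity :: Double -> [Double] -> Double -> Double
kdeDensity h xs x = sum [ gauss ((x - xi) / h) | xi <- xs ]
                  / (fromIntegral (length xs) * h)
  where
    gauss u = exp (-u * u / 2) / sqrt (2 * pi)

HLearn’s KDE adds fixed sample points, pluggable kernels, and, as we will see, the algebraic structure we care about here.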

First, let’s create the parameters for our KDE.  The bandwidth controls how wide each of the kernels is.  Bigger means wider.  I selected 20 because it made a reasonable looking density function.  The sample points are exactly what they sounds like: they are where we will sample the density from.  We can generate them using the function genSamplePoints.  Finally, the kernel is the shape of the distributions we will be summing up.  There are many supported kernels.

> let kdeparams = KDEParams
>         { bandwidth = Constant 20
>         , samplePoints = genSamplePoints
>             0    -- minimum
>             4000 -- maximum
>             4000 -- number of samples
>         , kernel = KernelBox Gaussian
>         } :: KDEParams Double

Now, we’ll train kernel density estimates on our data.  Notice that because the KDE takes parameters, we must use the train’ function instead of just train.

> let kde_usa = train' kdeparams list_usa :: KDE Double

Again, plotting just the American weapons gives:

And we train the corresponding distributions for the other countries.

> let kde_uk     = train' kdeparams list_uk     :: KDE Double
> let kde_france = train' kdeparams list_france :: KDE Double
> let kde_russia = train' kdeparams list_russia :: KDE Double
> let kde_china  = train' kdeparams list_china  :: KDE Double
>
> let kde_all = kde_usa <> kde_uk <> kde_france <> kde_russia <> kde_china

The KDE is a powerful technique, but the drawback is that it is computationally expensive—especially when a large number of sample points are used. Fortunately, all computations in the HLearn library are easily parallelizable by applying the higher order function parallel.

We can calculate the full KDE from scratch in parallel like this:

> let list_double_all = map fromIntegral list_all :: [Double]
> let kde_all_parA = (parallel (train' kdeparams)) list_double_all :: KDE Double

or we can perform a parallel reduction on the KDEs for each country like this:

> let kde_all_parB = (parallel reduce) [kde_usa, kde_uk, kde_france, kde_russia, kde_china]

And because the KDE is a homomorphism, we get the same exact thing either way.  Let’s plot the parallel version:

> plotDistribution (genPlotParams "kde_all" kde_all_parA) kde_all_parA

The parallel computation takes about 16 seconds on my Core2 Duo laptop running on 2 processors, whereas the serial computation takes about 28 seconds.

This is a considerable speedup, but we can still do better. It turns out that there is a homomorphism from the Categorical distribution to the KDE:

> let kde_fromcat_all = cat_allB $> kdeparams
> plotDistribution (genPlotParams "kde_fromcat_all" kde_fromcat_all) kde_fromcat_all

(For more information about the morphism chaining operator $>, see the HLearn documentation.) This computation takes less than a second and gets the exact same result as the much more expensive computations above.

We can express this relationship with a commutative diagram:

No matter which path we take to get to a KDE, we will get the exact same answer.  So we should always take the path that will be least computationally expensive for the data set we’re working on.

Why does this work? Well, the categorical distribution is a structure called the “free module” in disguise.

Modules and the Free Module

R-Modules (like groups, but unlike monoids) have not seen much love from functional programmers. This is a shame, because they’re quite handy. It turns out they will increase our performance dramatically in this case.

It’s not super important to know the formal definition of an R-module, but here it is anyways: An R-module is a group with an additional property: it can be “multiplied” by any element of the ring R. This is a generalization of vector spaces because R need only be a ring instead of a field. (Rings do not necessarily have multiplicative inverses.)  It’s probably easier to see what this means by an example.

Vectors are modules.  Let’s say I have a vector:

> let vec = [1,2,3,4,5] :: [Int]

I can perform scalar multiplication on that vector like this:

> let vec2 = 3 .* vec

which as you might expect results in:

[3,6,9,12,15]
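
For plain lists, this scalar action is nothing more than elementwise multiplication. Here is a minimal sketch of what .* does in this case (HLearn's actual operator comes from its module type class and is more general; treat this as intuition, not the library's definition):

-- Elementwise scalar multiplication: the module action on lists.
(.*) :: Num a => a -> [a] -> [a]
r .* xs = map (r *) xs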

Our next example is the free R-module. A “free” structure is one that obeys only the axioms of the structure and nothing else. Functional programmers are very familiar with the free monoid—it’s the list data type. The free Z-module is like a beefed up list. Instead of just storing the elements in a list, it also stores the number of times each element occurred.  (Z is shorthand for the set of integers, which form a ring but not a field.) This lets us greatly reduce the memory required to store a repetitive data set.
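
To make that concrete, here is a minimal sketch of a free-module representation: just a map from each element to the number of times it occurred, merged with unionWith (+). (HLearn's real FreeMod is more polished; all names below are mine, for illustration only.)

import qualified Data.Map as Map

-- A sketch of a free module over a ring r: elements paired with counts.
newtype FreeModSketch r a = FreeModSketch (Map.Map a r)
  deriving Show

-- Merging two free modules adds the counts pointwise.
instance (Ord a, Num r) => Semigroup (FreeModSketch r a) where
  FreeModSketch x <> FreeModSketch y = FreeModSketch (Map.unionWith (+) x y)

instance (Ord a, Num r) => Monoid (FreeModSketch r a) where
  mempty = FreeModSketch Map.empty

-- Count how many times each element occurs in a list.
list2moduleSketch :: (Ord a, Num r) => [a] -> FreeModSketch r a
list2moduleSketch xs = FreeModSketch (Map.fromListWith (+) [ (x, 1) | x <- xs ])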

In HLearn, we represent the free module over a ring r with the data type:

FreeMod r a

where a is the type of elements to be stored in the free module. We can convert our lists into free modules using the function list2module like this:

> let module_usa = list2module list_usa

But what does the free module actually look like? Let’s print it to find out:

> print module_usa

gives us:

FreeMod (fromList [(100,768),(150,200),(300,250),(335,249),(340,50),(455,384),(1200,50)])

This is much more compact! So this is the take away: The free module makes repetitive data sets easier to work with. Now, let’s convert all our country data into module form:

> let module_uk = list2module list_uk
> let module_france = list2module list_france
> let module_russia = list2module list_russia
> let module_china = list2module list_china

Because modules are also groups, we can combine them like so:

> let module_allA = module_usa <> module_uk <> module_france <> module_russia <> module_china

or, we could train them from scratch:

> let module_allB = list2module list_all

Again, because generating a free module is a homomorphism, both methods are equivalent.

Module distributions

The categorical distribution and the KDE both have this module structure. This gives us two cool properties for free.

First, we can train these distributions directly from the free module.  Because the free module is potentially much more compact than a list is, this can save both memory and time. If we run:

> let cat_module_all = train module_allB :: Categorical Int Double
> let kde_module_all = train' kdeparams module_allB :: KDE Double

Then we get the properties:

cat_module_all == cat_allB
kde_module_all == kde_all == kde_fromcat_all

Extending our commutative diagram above gives:

Again, no matter which path we take to train our KDE, we still get the same result because each of these arrows is a homomorphism.

Second, if a distribution is a module, we can weight the importance of our data points.  Let’s say we’re a general from North Korea (DPRK), and we’re planning our nuclear strategy. The US and North Korea have a very strained relationship in the nuclear department. It is much more likely that the US will try to nuke the DPRK than China will. And modules let us model this!  We can weight each country’s influence on our “nuclear threat profile” distribution like this:

> let threats_dprk = 20 .* kde_usa
>                 <> 10 .* kde_uk
>                 <>  5 .* kde_france
>                 <>  2 .* kde_russia
>                 <>  1 .* kde_china
>
> plotDistribution (genPlotParams "threats_dprk" threats_dprk) threats_dprk

Basically, we’re saying that the USA is 20x more likely to attack the DPRK than China is.  Graphically, our threat distribution is:

The maximum threat that we have to worry about is about 1300 kt, so we need to design all our nuclear bunkers to withstand this level of blast.  Nuclear war planners would use the above distribution to figure out how much infrastructure would survive a nuclear exchange.  To see how this is done, you’ll have to click the link.

On the other hand, if we’re an American general, then we might say that China is our biggest threat… who knows what they’ll do when we can’t pay all the debt we owe them!?

> let threats_usa = 1 .* kde_russia
>                <> 5 .* kde_china
>
> plotDistribution (genPlotParams "threats_usa" threats_usa) threats_usa

Graphically:

So now Chinese ICBMs are a real threat.  For American infrastructure to be secure, most of it needs to be able to withstand a ~3500 kt blast.  (Actually, Chinese nuclear policy is called the “minimum means of reprisal”—these nukes are not targeted at military installations, but major cities.  Unlike the other nuclear powers, China doesn’t hope to win a nuclear war.  Instead, its nuclear posture is designed to prevent nuclear war in the first place.  This is why China has the fewest weapons of any of these countries.  For a detailed analysis, see the book Minimum Means of Reprisal.  This means that American military infrastructure isn’t threatened by these large Chinese nukes, and really only needs to be able to withstand an 800 kt explosion to be survivable.)

By the way, since we’ve already calculated all of the kde_country variables before, these computations take virtually no time at all to compute.  Again, this is all made possible thanks to our friend abstract algebra.

Homework + next Post

If you want to try out the HLearn library for yourself, here’s a question you can try to answer: Create the DPRK and US threat distributions above, but only use survivable weapons.  Don’t include bombers in the analysis.

In our next post, we’ll go into more detail about the mathematical plumbing that makes all this possible. Then we’ll start talking about Bayesian classification and full-on machine learning. Subscribe to the RSS feed so you don’t miss out!

Why don’t you listen to Tom Lehrer’s “Song for WWIII” while you wait?

 

Felix

$
0
0

Comments:"Felix"

URL:http://felix-lang.org


The fastest scripting language on Earth.


Why do we need a new programming language?

Existing languages have too many faults to support modern requirements.

Goals.

  • high performance
  • rapid prototyping and a scripting language deployment model
  • safety and correctness
  • scalability
  • adaptability
  • platform independence

Performance

The ability to obtain high performance in a wide variety of domains is fundamental. Light-speed behaviour was a primary goal of C++ and it is for Felix too. In fact, we aim for hyper-light performance: faster than C.

Network performance matters too. This means we need platform independent asynchronous I/O. To support a large number of clients, we need lightweight threads with fast context switching.

Rapid Prototyping and scripting harness

C++ is typically very hard to build. Makefiles, macros, compiler and OS feature tests, switches, paths and a huge array of poorly integrated non-portable tools make development a nightmare compared with the ease of deploying Python or Perl scripts. IDEs fix these problems in a clean way, but only handle boilerplate workflows.

Correctness

Unfortunately dynamic typing is not amenable to reasoning about program correctness, and traditional imperative programming also presents obstacles.

We need:

  • static typing for low level safety
  • contract programming model for high level safety
  • automatic memory management without compromising performance
  • overloading for convenience
  • parametric polymorphism for data types
  • functional programming for correctness
  • imperative programming for performance
  • platform independence for portability
  • platform dependence for performance and low level programming
  • shared-memory concurrency for real-time performance
  • message passing for distributed concurrent cloud computing
  • low level optimisations for tight calculations
  • high level optimisations for overall application speed
  • extensible grammar in user space for implementing Domain Specific Sub-Languages
  • easy use of existing C and C++ libraries and code bases
  • built-in support for web development
  • built-in support for high performance computing
  • built-in support for game development
which is quite a demanding list!

How Felix meets these goals.

Felix is designed to address all these issues. It is a C++ code generator and thereby can provide compatibility with existing C and C++ code bases. We let the native C++ compiler do the hard work of low level optimisation whilst Felix does high level optimisations. The resulting code is very fast, sometimes "faster than the speed of light (C)", but can be platform independent and is simple to deploy: just distribute the source files and run them, like a scripting language.

However Felix has its own type system based on a combination of Ocaml and Haskell. Like Ocaml it provides strong support for functional programming, whilst also supporting imperative programming. The type system is strict.

First order polymorphism is core, not a bolt-on as in Java and C++. Felix also provides open overloading like C++, but only allows exact matches. It also provides Haskell style type classes as an alternative way to obtain genericity.

To overcome the syntactic impedance mismatch with the wide range of application domains, the Felix grammar is defined in user space. It can be extended by the end user to provide a suitable Domain Specific Sub-Language. The parser is GLR and the user actions are written in R5RS Scheme.

A rich set of shortcuts makes programming a breeze. Built-in regular expression support and other features provide string handling on par with Perl. Web programming is enabled by built-in asynchronous socket I/O combined with cooperatively multi-tasked fibres that would support millions of http clients if only the server could supply enough sockets. Context switching is achieved by a pointer swap, and state is maintained by a spaghetti stack.

Android SDK is now proprietary, Replicant to the rescue « Torsten's FSFE blog

$
0
0

Comments:"Android SDK is now proprietary, Replicant to the rescue « Torsten's FSFE blog"

URL:http://blogs.fsfe.org/torsten.grote/2013/01/03/android-sdk-is-now-proprietary-replicant-to-the-rescue/


By torsten.grote, on January 3rd, 2013

I just noticed that the Android SDK is now non-free software. If you go to

https://developer.android.com/sdk/index.html#download

and click on one of the files, you are presented with lengthy “Terms and Conditions” which for example say:

In order to use the SDK, you must first agree to this License Agreement. You may not use the SDK if you do not accept this License Agreement.

This sentence alone already violates freedom 0, the freedom to use the program for any purpose without restrictions.

Today, the truly Free Software version of Android called Replicant came to the rescue and released a free (as in free speech) version of the SDK.

Apparently, Google made this step to prevent fragmentation of the ecosystem. What are they going to do next? This situation is far from perfect for software freedom. Developing Android apps in freedom will only be possible once the Replicant developers catch up. It looks like Android is ceasing to be a Free Software friendly platform.

So let’s all help stop this trend so that Android remains Free. Signing up on the android discussion list is a good first step to assess the situation and plan further action.

Update: It has been pointed out by some people that the SDK Terms and Conditions are older than previously assumed. Google only requires explicit agreement now and shows the terms before download. That wasn’t the case earlier.

Life & Career Lessons – 2012 » Coder Cowboy

$
0
0

Comments:"Life & Career Lessons – 2012 » Coder Cowboy"

URL:http://www.codercowboy.com/2013/01/04/life-career-lessons-2012/


Music: Angels and Airwaves – Diary

I’m always a little late on the 2012 lists, stick with me, this’ll be worth it.

Executive Summary of 2012:

I started 2012 by taking a bit of a risk careerwise, where before I had been the enterprise huge-company guy, I ventured into mid-sized startupish culture, pushing myself out of my personal comfort zone in terms of commute and direction. I was getting into mobile (IOS) development in my spare time, but still wanted to fill in some experience gaps I’d missed in previous opportunities. I had a good 10 month run at a great ‘startup’ in downtown Austin, was laid off, felt my world was upside down for a little while there, and came out of the tailspin – righted myself and found myself with a really great career path/opportunity for 2013.

2012 wasn’t all career & code, but it was certainly too much of it, and it shows. We’ll get to the finer points on that angle as we get through the list.

So, here’s what I learned:

I learned to take risks.

When I quit my job at PayPal a few years ago, I had interviewed around a bit, but didn’t find anything quite perfect. Meanwhile, I was steadily growing more and more disenchanted with code and slow moving big companies. At some point I decided to just quit, taking the risk to take a little time off to take a break, and fall back in love with code in the process.

The day I quit, I was scared to tell my wife what I’d done, and I was shocked when she gave me one of the best hugs of my life and simply told me “What took you so long?”. – Only days later she would not stop telling me that there was an immediate change in my mood, my sense of peace, and my happiness.

The thing that happened when I quit PayPal was, I took my destiny into my own hands. I decided firmly that PayPal wasn’t it for me, and that the next great thing was out there somewhere.

I spent that next year consulting on a cloud infrastructure product at Dell, making some really great friends in the process, and learning all about datacenter innards that I’d never had a chance to learn about back at PayPal.

When the Dell gig was up a year later, I interviewed around again, and found myself pushing my personal envelope of comfort and safety even further – trying a startup-like mid-size company with an open-atmosphere office layout and 90% all-star employees. I wasn’t a cultural fit for the new place, but I was surprised to find myself very comfortable in my own skin – despite philosophical clashes with the culture at large.

The point is, the downside to risk is the unknown, but the upside, if you can handle it, is also the unknown. Each time I switched jobs I learned a little more about what’s always the same in software, and what can be, and should be different. Each time I jumped from one ship to the next, I accepted a little bit more risk and discomfort, and learned a lot about myself, because the opportunity ahead of me always offered 1,000 new variables.

While I was at that ‘startup’ with the all-stars, I learned quite a few valuable lessons, such as:

I learned that sales guys really matter.

As a hot-shot self-involved software engineer/coder, I’ve always had trouble understanding what it is that the sales guys actually *do* at a company. It always seemed the sales guys were the guys selling more than we could deliver, and generally mucking things up – this may still be a fair assessment for a giant can't-do-wrong company such as a PayPal or Dell where the money is pouring in the doors no matter what happens. … It would not be unfair to say I had contempt and disdain for the marketing guys.

Then I worked at a startup.

When you personally meet the 3 or 4 guys who have those sales calls week to week to keep the 3 or 4 big clients sending the checks in, the contempt fades. When you hear the war stories of those 3 guys working in a shitty little closet of an office, and see a team of 20+ engineers around you in a posh downtown office making great salaries 3 years later – the disdain goes too.

Seeing and hearing those sales guys in action was, I think, *critical* to my understanding of how the world works. And let me tell you, Mr. Hotshot Engineer or Designer, the part you do doesn’t mean shit. The world of money goes around based on relationships and a little bit of luck. The guy in your office who’s non-replaceable is the guy who looks and acts like a walking parody out of a Ralph Lauren ad – so long as he can sell.

When you see VP so and so from client X lose their shit for some random reason and threaten your company’s bottom line on a whim, then see your sales guy have that same VP offering more money next month after a 5 minute phone conversation, that’s when the light bulb clicks on, and you realize that really, the code matters a little, but not very much.

I learned that assholes matter.

The thing about jerks is, they run their mouths. The thing about the right kind of asshole is, they’ve got charisma, and backbone. There are certainly standard run-of-the-mill wannabe jerks who don’t have the right, and I’d argue we could all stand being nicer to one another day to day in general, but man, assholes make the world actually move.

Working in the big company, you’ll see an executive here or there who wasn’t the original big idea guy or gal, and wasn't the nepotism stick-it-out-long-enough ladder climber – frankly, this odd executive is the asshole. And, like it or not, you and I need these people.

When you work in the smaller company, and see the next engineering VP hired cold off the street, along with half a dozen other hires – you’ll know which one he is. He’s going to run his mouth. He’s going to be friendly to everyone, but trash talk everyone too. He’s cunning, he’s smart as a whip, and if he’s rude, he’ll get you if you cross him. Most everyone will agree that this new guy is going to ruin the company, except.. he doesn’t, he does the exact opposite. It’s the nice polite guys who are too scared to risk their necks who ruin the company.

The thing about the asshole is, he gets shit done, he has drive. He has a fault of running his mouth, and along with that comes a life full of lessons of how to get himself out of the jams his mouth gets him in over and over – that means, this guy has a spine, this guy can make tough decisions, and this guy will deliver when it counts.

I’m not advocating ladder climbers, I’m not advocating jerks being jerks for the sake of jerktitude, I’m just saying, they have a place, and when you find the right asshole, they’re going to deliver and kick ass while doing it. The delicious irony will be, 5 years from now when your midsize is larger than midsize, the asshole who everyone hates will be the only executive of the lot who arguably deserves his merit badge title. Think on that.

I learned the value of having lunch with others.

One of the perks at my job last year was paid lunches. This is a really great thing for developers, because developers are idiots in many ways. First, they’re anti-social, and secondly, they’re cheap. So, if they can pretend to be a robot and work 8 hours straight with a hotpocket “meal” in the middle, they will, like idiots.

The downside to this is that it takes your developers a year to make the friendships your marketing team will make with each other in a week. The solution to this problem is to get your developers to go have lunch together.

After having lunch with my new peers for only a few weeks, it became immediately obvious to me how stupid I’d been in my career until that point. Previously I’d always opted for a 15 minute or 30 minute lunch, microwaving something, and getting back to it. In only a few weeks with the new group, I knew more about several of those guys than I did people I’d worked beside for 3+ years at PayPal.

Lunch matters, take it, and have lunch with friends and colleagues, often. Happy hours, too.

I learned that open environments can work for coders, within reason.

Open environment offices are an in-thing. Facebook did it, so everyone else must now do this too.

I disagree with the open environment if it’s done the wrong way. Doing it the wrong way includes: forbidding telecommuting; having more than ten people in any one open space; having sales guys in the same open office as the coders; not providing quiet spaces (couches, little closet hotel cubes) for people to go to to talk or work; and test-piloting your open-space idea on an executive sales team, then mandating it for the world (PayPal..).

Doing it the right way means doing the opposite of everything above, and providing nice noise cancelling headphones for your employees.

Coders need quiet, and time to think, and noise-cancelling headphones are not enough. The place I worked this past year had a great open-environment layout and telecommuting policy, but even still there were more than a few days where I had unending headaches caused by the choice of music or office chatter.

I learned that telecommuting is really awesome, within reason.

Let’s see, this past year I accomplished many things while telecommuting, I watched the entire 5 seasons of The Wire (amazing, perception altering show, btw), many movies, did a lot of laundry, took my dog on many walks, and was more productive than I’ve ever been at any job in the meantime.

Telecommuting isn’t for everyone, you’ve got to be driven, on-task, and have a list to constantly feed on when you finish the task before. But, for coders, who need the quiet, the peace, and the space to play This Will Destroy You at deafening volumes from time to time, telecommuting is great.

The thing to remember about telecommuting is that it’s lost face time with your colleagues and boss, and you’ve got to make time while in the office to make up for that lost time.

When 2012 started, I thought telecommuting was something you do on a day without real work, in a startup-like mid-size, those days don’t exist – and that’s a good thing, because by the end of 2012 I can safely say telecommuting days are the days you take to really go heads down (even with This Will Destroy You or The Wire blaring in the background) and get shit done. Asking for a telecommuting day no longer carries a guilty connotation with me, and as a person who consistently delivers, I actually *need* the freedom to just go do the right thing from time to time.

I learned to track myself better.

I’ve always been a list maker, and from time to time I burn myself out with the lists. There’s a balance between the lists and actually letting life just happen, and I suck at that balance.

It turns out, I had a fatal flaw in the way I managed lists. That is, I deleted items after I finished them.

Don’t delete items after you finish them, grey them out, in place, and make a new list each week.

When you grey the items out, you start to get a feel for how much you actually accomplish.

Going into 2012 I was constantly feeling stressed out that I was never on top of my list, and that the list was constantly growing.

After a year of greying items out I feel worlds different, I now have great pride in how much I accomplish, and yet I also finally understand about how many items I can truly knock out per week. I also, for better or for worse, realize what a shitty friend and flake of a person I can be sometimes, because the lists show me these things where before the giant list that never ends with deleted items did not.

I learned that tracking myself does not matter.

One thing I’ve always done with my lists of tasks at work is track what I did each day, because I thought I could cover my ass with the paper trail. It turns out, that matters somewhat, but .. well, not really ..

I learned to be fired (and, when to quit).

I wasn’t fired, I was laid off, budgets got tight, last in first out, etc, but really – I was fired. I was on the top of the boss’ list to axe for a while, and I knew it. I wasn’t a great cultural fit, and had fundamental philosophical differences with prioritization and feel-good-about-ourselves-rewriting-endlessly-for-the-hell-of-it wankery.

When I started working at the place, it was an uncomfortable risk in several ways, and at first I gave myself 6 months to decide if I liked it – at some point that changed to a year mark, and a little after that it turned into a “2 or 3 years, I guess..” kind of thing from my side. I was into it, having fun, but cultural friction was perpetually upsetting. Nothing quite like being 1 of 5 “platform” engineers never invited to the endless feel-good-about-ourselves wankery standards meetings that never went anywhere – perhaps the fact that it was wankery in my mind had something to do with it.

To the point about covering your ass with your paper trail, the thing there is, that doesn’t matter. If someone has a target on your back for whatever reason, the paper trail won’t help you. What will help you is spending more time getting to know people and working things out by communicating more, if anything.

Communication helps, but also, life is short, if you’re unhappy with your lot in life, even a little bit, consider changing your lot. For me, I had to be laid off to have the wakeup call that I was settling in several ways to work at the place. I had convinced myself that the settling was part of the discomfort/risk experiment, and honestly I’d probably still be working there today had I not had the not-so-gentle push out that I needed.

Saying I “Learned to be fired” sounds funny, but truly, learning to not compromise 100%, have a little backbone, and be myself mostly was worthwhile. Had I not been myself, I would have hated myself for capitulating to philosophical differences I couldn’t get behind, and I probably still would have been first on the list to go. There’s a stigma to the thought of being fired, especially if you’re someone who’s fired for really bad reasons – such as not actually doing your job. In my case, I did my job to the best of my ability, and kicked ass while doing it, all without compromising my character or beliefs in the process. Being let go for philosophical differences is a lot like being that hard-won asshole VP – being let go b/c you give a damn, and stood up for something, but fell on the wrong side of the dice. (In this case, I just gave a different Damn than the rest of the team.. )

I’m not advocating getting fired, but truly, politely contributing to the cause without making a scene of your philosophical differences too often, is worth it, even if you’re fired over it. I suppose a secondary lesson to learning that assholes are needed is that you can’t be everyone’s friend. You win some, and you lose some. That’s really all there is to it.

No slight to the people I worked with or for, to each his own, truly – I wasn’t a cultural fit at the place, end of story, and that’s one of the beautiful things about software – there’s a dozen or more overarching cultural styles you’ll encounter depending on the shop – if you don’t fit at one place, you’ll fit at another.

I learned to communicate with my spouse, regularly.

The layoff could have been a lot worse, had I not been able to communicate openly and work through the topsy-turvy tailspin with my wife by my side.

Earlier in the year my wife and I had gone to some couples therapy together. As a non-religious person who values reason and actually doing something to change yourself for the better, I highly recommend therapy when the time is right.

We skipped marriage counseling, luckily having enough wits about ourselves to already talk about and understand each other’s thoughts on money, babies, family, etc before actually tying the knot. I’m glad we skipped marriage counseling b/c honestly we wouldn’t have enjoyed it or been ready for it – we were on a high that didn't really dip from the day we met until late 2011 – the counseling would have caused needless turmoil, or been wasted on deaf ears.

So we had communication breakdown, and went to speak with a counselor, a great one.

It’s funny how even the greatest relationships still have amazing amounts of built-in fear and reservation. There were so many downright silly and stupid things that my wife and I were so scared to talk about with each other, nitpicks on character or even habits and whatnot that didn't matter in the big scheme of things. The thing is, the nitpicks build into a mountain at some point and will kill you if you can’t talk about them. Having a third party intermediary person there to listen to us and encourage us to talk about the scary things really helped.

At first, there were many tears and deep breaths while we vocalized things that were bothering us, and sometimes voices were raised, but the counselor kept draining reason into our ears and showing us how to handle these communications on our own. A few weeks later, even the most intimate fears or new worries were voiced easily without any fear at all – we didn’t even need the counselor anymore. That’s how you can tell you’ve got a great counselor, when there’s an end game and you can see yourself clearly in a better place of understanding than before.

Another tidbit the counselor gave us was a really great, if corny sounding tool: relationship talks.

A relationship talk is a weekly meeting (no shit, like a business meeting) that lasts 20 minutes. Spouse A has ten minutes to talk, uninterrupted, if time is left at the end of their ten minutes, Q&A can happen. Then Spouse B takes a turn. That’s it, the end. Next week, you switch who goes first. After the meeting, you do something fun together, which in our case usually wound up being a walk around the neighborhood b/c we’d find there was so much stuff to talk about that the relationship talk would open up. You do the relationship talk *every* week, no exceptions. Sometimes the talk’s a tear jerker, most times it’s boring, but doing it every week is essential.

The relationship talk keeps the communication lines open. When they’re wide open and you’re humming along happily, the talks may be a bit bland, but even then they’ll surprise you with news you had no idea of, and when the comm lines are shut down or atrophying, the relationship talk will save you. I promise.

We also started doing ‘Wonderful Wednesdays’, which, more or less, is date night. No computers, no TV, something wholesome and fulfilling and rejuvenating, together, no exceptions. In practice, it’s sometimes a ‘Wonderful Tuesday’ because something really can’t budge, but making time in your life, together, to chill the fuck out is important.

I learned to read myself, and trust my gut.

I am what, and who I am. I’m driven, impatient at times, overly dramatic, and extremely emotional when everything’s just right.

Otherwise, like most of this last year, I’m dead inside.

Early on at the startup, my boss informed me that he didn’t like the way I communicate, I’m too wordy, imprecise, yatta yatta. – It was a fair assessment in part, but I took it all the wrong way and used it as a vice rather than a tool to grow with. Long story short, it fucked me up.

My natural style is to be transparent, open, and accommodating, so hearing that I was expected to be more precise and careful with my words sounded a lot like I’d have to be someone different while I worked at the place if I was going to cut it. In the end, I probably WOULD have had to be a different person to jive well there long-term, but the point my boss was making wasn't that I should shut down and be a different person, he was saying I should strive to be slightly less sloppy.

I told myself I was becoming a respectable little adult by shutting down and toeing the line, and when I was finally laid off a good while later – it all came pouring out. My writing became prolific again, as did my creative whims. I was staying up through the night one or two nights per month, I was even honest-to-god crying fairly often at even the littlest beautiful or horrible things – I was me, again.

Perhaps I’m both of these people, the emotional creative dreamer, and the tight-ass conservative who’s dead inside, but I prefer the dreamer.

Time and time again I’ve noticed these moments where everything synchronizes into this chaotic yet perfectly ordered moment where some huge chapter in life immediately makes sense – as if my subconscious has been working on it all the while, for months. The backside of those events is always the same – I’m writing, I’m loving my life, I’m creating all sorts of crazy little mementos, I’m taking it easy more often, and I’m, yes, occasionally crying at the immense overcoming beauty and tragedy of it all.

2012 gave me metrics, if I’m not blogging for the fuck of it at 5am on a work night once or twice a month, something’s off. If I’m not occasionally trading sleep for another one-night shot at something great, something’s wrong – and it’s time to really assess what my gut is telling me.

I learned to be brave, and patient.

Life is funny. Often the most important stories and lessons of your life are ticking right along, slowly, very slowly, in the background – completely hidden by silly shit you tell yourself matters more. Here’s what I mean, when I was pulling photos from the past 3 years out of my archives for the Freddie Book, I had to scroll past thousands, literally thousands of photos of my dog. I’m a dog guy, not a cat guy, and though I think my dog’s pretty fucking amazing, the story to tell for recent years is not hers, it’s the cat’s.

Here’s a picture of the awesome dog anyway:

Anyway, after being laid off, I took time off, again like after PayPal, because I could, and because I knew I needed to. TumbleOn seemed to be taking off, so I distracted myself with dreams about that for a while, and I visited my good friend in Seattle for a bit, and so on. After the predictable this-is-going-nowhere burnout phase that followed shortly, I was finally at ease for a moment or two. This was one of those moments where the chaos distilled into clarity.

One morning I woke up, and decided to write the story of Freddie. There are perhaps 3 days in the past year like the day I wrote that story, but Freddie’s story was the best of the three, easily.

Freddie came through me like they say the Bible did the prophets, it just flowed, as if from somewhere else. I didn't plan it 3 days previous, I didn't even know what the damn point of the thing was as I was writing it, and yet, as the story flowed out it turned into this really incredible, amazing thing – an allegory for my own personal story of 2012 – of learning to take larger and larger risks, be brave, and welcome change – all while giving credit to the one person who deserves all the credit for any betterment in my life ever, my wife.

I won’t bother you with the details of Freddie’s story here, you can go read the book yourself (online, for free).

I learned that success is not what capitalism says it is.

Working at the ‘startup’ this past year was really a great experience to be walking through while simultaneously seeing our personal side project, TumbleOn, grow.

Until you’ve worked at a startup, or a ‘startup’ that’s really a decently-funded mid-size pretending to be a ‘startup’, you don’t really ‘get’ what success is. Success, arguably, in American or capitalistic terms, is striking it rich. But, frankly, striking it rich is a stroke of luck, whether you win the lottery or win the right-time-right-place lottery like Bill Gates.

This past year redefined my personal measure of success. Success is not a certain amount of money, or a title, success is being one of those initial three guys in a closet-sized office, seeing 20 well-salaried employees loving life 3 years later. Success is making something great that brings joy to other people’s lives.

Would I like TumbleOn to make me and my friends independently wealthy? Sure, but to me, TumbleOn is already the largest personal success of my short little career, and the sweet and sad fact is, it may be the largest success of my long term career as well. In my own little corner of the world, I’ve been a part of a small team who made something really amazing that thousands of people use regularly, viewing more than 300 million photos in 2012, and more than a billion in 2013.. laughing, gasping, giggling, and being inspired along the way.

To me, as I said earlier this year, success is not the money, success is making something bigger than myself, that makes the world just a little bit better than it would be without me.

I learned to take detours.

A stronger relationship between my wife and I resulted in a bolder wife, who’s more uppity lately, in all the right ways. This year she really put the time in to repeat the message to me that I should live life a bit more rather than plan so much.

For example, I immediately thought to take a long-planned-anyway trip to Seattle after being laid off, but kept the thought secret and was ashamed – it seemed extravagant given the circumstances. A few days after the layoff, my wife independently suggested and insisted that I go to Seattle. I hemmed and hawwed about it, listing the cons of the idea (money), and she wouldn’t have it. She knew what was good for me, and sitting at home in the familiar, moping, driving myself crazy wasn’t it.

Months later, I was visiting some friends, absent mindedly returning from lunch with a good buddy, when I got lost.

I wasn’t lost lost, but I was way off the beaten path. My coder brain immediately detoured me to the most efficient route home – still a different route than that intended, but efficient – when all of a sudden halfway home I slammed on my brakes and nearly caused a wreck – swerving into the driveway of this great botanical garden in Fort Worth. I’d completely forgotten the place was there, and decided that for once I would take my wife’s advice, and take the detour, throwing efficiency out the window.

On my way into the Japanese Gardens, the ticket clerk asked how my day was going, I told him “it was going alright, but it’s about to get a whole lot better”. The clerk laughed. I took a few steps inside and something primal overtook me. I felt as if I were outside myself, floating through this long-familiar place, truly soaking in the beauty and moment at hand – it was a feeling I really don’t have often.

As I walked past the zen garden I was congratulating myself on taking a detour, excited to tell my wife, sure that she’d be so proud of me for actually experiencing something rather than running the numbers beforehand.

I walked and savored the moment, being outside myself as in a dream, and conscious of it all the while. As I continued on, I saw something amazing, or I thought it was.

There was this crane, sunning himself, wings spread wide to wholly accept the sun. He too, was savoring the moment, and he just stood there, zen-like in this serene scene for a full 10 minutes, motionless. For a while I was convinced he was some poetic statue painted convincingly real, and then he blinked.

Seeing that crane, on my floating detour, was one of those weird things I don’t think I’ll ever forget for the rest of my life. Now, two months later, I can barely recall many of the particulars of that visit to Fort Worth at all, but that moment in the sun, watching that crane fully accepting the simultaneous chaos, beauty, and fragility of life – that scene and moment will stick with me for life.

Thank you, wife, for telling me to take detours.

I learned that time is precious.

The flipside to living a happy and full life is that there isn’t enough time in life.

There’s something they call “FU Money”, or, “Fuck You Money”. Loosely, this is the amount of money it would take for you to quit working and start living your life the way you want, and deserve, to live it.

Something flipped in Steve Jobs’ brain one day, and he realized the best way he could live his life was to live every day as if it were his last.

This year, I learned to set my “Fuck You Money” value to $0. I’m no zen master like Jobs, but for the first time since college I can honestly say that I’m working at a place where even if I were independently wealthy, I’d still be showing up for work the very next day.

That is what I, and you, and everyone, deserves.

You deserve to work with kick ass people who both inspire you and celebrate with you. You deserve to take time on Wednesdays to be with your family and turn the computers and calendars off. You deserve to take time in the middle of the night to forgo sleep in favor of some crazy unending blog rant (like this) or new project idea. You deserve to take vacation, and you deserve a wake-up call from time to time.

There’s this great White Stripes documentary of their last tour. There’s a segment in there where Jack White is talking about living on the edge, manufacturing wake-up calls. He says that when he goes on stage, he measures the comfortable distance to set the mic stand so he can easily grab his next guitar pick. Then, he moves the mic stand a few feet further, so he has to really jump and make himself get that pick in time. Sometimes he misses, sometimes he knocks the damn stand over, but every time he guarantees he’s living – because he’s pushing his own personal envelope of comfort in even the silliest esoteric ways – and, he says, it works.

AnandTech - The ARM vs x86 Wars Have Begun: In-Depth Power Analysis of Atom, Krait & Cortex A15

$
0
0

Comments:"AnandTech - The ARM vs x86 Wars Have Begun: In-Depth Power Analysis of Atom, Krait & Cortex A15"

URL:http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown


by Anand Lal Shimpi on 1/4/2013 7:32:00 AM
Posted in Tablets , SOC , smartphones , Intel , ARM , Samsung , Cortex A15

Late last month, Intel dropped by my office with a power engineer for a rare demonstration of its competitive position versus NVIDIA's Tegra 3 when it came to power consumption. Like most companies in the mobile space, Intel doesn't just rely on device level power testing to determine battery life. In order to ensure that its CPU, GPU, memory controller and even NAND are all as power efficient as possible, most companies will measure power consumption directly on a tablet or smartphone motherboard.

The process would be a piece of cake if you had measurement points already prepared on the board, but in most cases Intel (and its competitors) are taking apart a retail device and hunting for a way to measure CPU or GPU power. I described how it's done in the original article:

Measuring power at the battery gives you an idea of total platform power consumption including display, SoC, memory, network stack and everything else on the motherboard. This approach is useful for understanding how long a device will last on a single charge, but if you're a component vendor you typically care a little more about the specific power consumption of your competitors' components.

What follows is a good mixture of art and science. Intel's power engineers will take apart a competing device and probe whatever looks to be a power delivery or filtering circuit while running various workloads on the device itself. By correlating the type of workload to spikes in voltage in these circuits, you can figure out what components on a smartphone or tablet motherboard are likely responsible for delivering power to individual blocks of an SoC. Despite the high level of integration in modern mobile SoCs, the major players on the chip (e.g. CPU and GPU) tend to operate on their own independent voltage planes.


A basic LC filter

What usually happens is you'll find a standard LC filter (inductor + capacitor) supplying power to a block on the SoC. Once the right LC filter has been identified, all you need to do is lift the inductor, insert a very small resistor (2 - 20 mΩ) and measure the voltage drop across the resistor. With voltage and resistance values known, you can determine current and power. Using good external instruments (NI USB-6289) you can plot power over time and now get a good idea of the power consumption of individual IP blocks within an SoC.


Basic LC filter modified with an inline resistor
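
The arithmetic behind that last step is just Ohm's law. With a hypothetical 10 mΩ shunt showing a 5 mV drop on a 1.0 V rail, for example:

$$I = \frac{V_\text{shunt}}{R_\text{shunt}} = \frac{5\ \text{mV}}{10\ \text{m}\Omega} = 0.5\ \text{A}, \qquad P = I \cdot V_\text{rail} = 0.5\ \text{A} \times 1.0\ \text{V} = 0.5\ \text{W}$$

(The numbers here are made up for illustration; only the relationship matters.)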

The previous article focused on an admittedly not too interesting comparison: Intel's Atom Z2760 (Clover Trail) versus NVIDIA's Tegra 3. After much pleading, Intel returned with two more tablets: a Dell XPS 10 using Qualcomm's APQ8060A SoC (dual-core 28nm Krait) and a Nexus 10 using Samsung's Exynos 5 Dual (dual-core 32nm Cortex A15). What was a walk in the park for Atom all of a sudden became much more challenging. Both of these SoCs are built on very modern, low power manufacturing processes and Intel no longer has a performance advantage compared to Exynos 5.

Just like last time, I ensured all displays were calibrated to our usual 200 nits setting and ensured the software and configurations were as close to equal as possible. Both tablets were purchased at retail by Intel, but I verified their performance against our own samples/data and noticed no meaningful deviation. Since I don't have a Dell XPS 10 of my own, I compared performance to the Samsung ATIV Tab and confirmed that things were at least performing as they should.

We'll start with the Qualcomm based Dell XPS 10...

kpshek/mm2pwd · GitHub

$
0
0

Comments:"kpshek/mm2pwd · GitHub"

URL:https://github.com/kpshek/mm2pwd


Mega Man 2 Password Generator

Overview

Mega Man 2, like many games of its era, utilized a password system in order to continue your progress between game sessions. This removed the need for a battery in the game cartridge and allowed gamers to share passwords (and thus progress) with others.

In Mega Man 2, the password is represented as a 5x5 grid in which the columns are labeled 1-5 and the rows A-E. Each password is composed of 9 cells which are 'set', indicated by a red dot.

Thus, a password can be communicated as A5, B2, B4, C1, C3, C5, D4, D5, E2.

Put another way, this 5x5 grid represents 25 bits in which a password always has exactly 9 bits set. Using this representation, the password algorithm can be expressed succinctly in terms of these bits and using basic bitwise operations.

In the 25 bits, there are 5 words of 5 bits each where each word represents a row in the grid. The entire 25-bit password is thus composed of the words A E D C B (using little endian). So, the first word (lowest 5 bits) is the 5 bits of row B and the last word (bits 21-25) is the 5 bits of row A.

Words 1-4 (bits 1-20)

A Mega Man 2 password has exactly 9 bits set. 8 of these bits represent the alive/defeated status of each of the 8 bosses in the game. The following table illustrates the bit values for the alive/defeated status for each of the 8 bosses.

Boss       | Alive | Defeated
-----------|-------|---------
Bubble Man | C3    | D1
Air Man    | D2    | E3
Quick Man  | C4    | B4
Wood Man   | B5    | D3
Crash Man  | E2    | C5
Flash Man  | E4    | C1
Metal Man  | E1    | E5
Heat Man   | D5    | B2

Thus, if both Bubble Man and Air Man were defeated but all other bosses were still alive, this is represented as the following bits (1-20): 01111 10001 01000 10000.

Row  | E     | D     | C     | B
Word | 01111 | 10001 | 01000 | 10000

E-Tank Word (bits 21-25)

The 9th set bit represents the number of E-Tanks Mega Man has. This is stored in the 5th word (row A) and occupies bits 21-25, the most significant bits of the password. Mega Man can have between 0 and 4 E-Tanks, and this is encoded simply as the bit position in this last word. Thus, if Mega Man has 0 E-Tanks the word is 00001, 1 E-Tank is 00010, 2 E-Tanks is 00100, and so forth. Unlike the other words, the 5th word (row A) will thus only ever have a single bit set.

The E-Tank word (row A) is important in that it encodes bits 1-20 by performing a rotate left operation on bits 1-20 by the number of E-Tanks that Mega Man has. Thus, if Mega Man has 2 E-Tanks, bits 1-20 are rotated left by 2 positions. If Mega Man has 0 E-Tanks, this is effectively a no-op. The table above illustrating the bits set for each of the 8 bosses represents the bits prior to the rotate left operation. The bits of the E-Tank word are not included in the left rotation.

Algorithm

The algorithm for calculating a password can be summarized as follows:

1. Set the bits of the first 4 words (rows B, C, D, E) based on the table above (bits 1-20)
2. Rotate left bits 1-20 based on the number of E-Tanks
3. Add the E-Tank word (bits 21-25) as the most significant word of the password
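
Since the whole password is just bit twiddling, the algorithm fits in a few lines. Purely as an illustration (the repo's real implementation is the Ruby script below; this Haskell sketch and its names are my own):

import Data.Bits ((.&.), (.|.), shiftL, shiftR)

-- Rotate the low 20 bits (rows B-E) left by n positions (n = E-Tank count).
rotateLeft20 :: Int -> Int -> Int
rotateLeft20 n bits =
  ((bits `shiftL` n) .|. (bits `shiftR` (20 - n))) .&. 0xFFFFF

-- OR the rotated boss bits together with the single E-Tank bit (bits 21-25).
password :: Int -> Int -> Int
password bossBits eTanks =
  (1 `shiftL` (20 + eTanks)) .|. rotateLeft20 eTanks bossBits

With 0 E-Tanks the rotation is a no-op, so the boss bits from the worked example above pass through unchanged.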

Setup

To run mm2pwd you will need Ruby installed. mm2pwd has been tested with Ruby 1.9.3p362 which is the latest version as of this writing.

Running

To run mm2pwd, simply open a terminal session in the root directory and execute the following command:

$ ./mm2pwd.rb

Without any modification, this will generate a password in which Mega Man has all 4 E-Tanks and has defeated all 8 bosses. If you want to modify this, simply change the values in the initialize method.

License

Copyright 2013 Kevin Shekleton

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.



An iPhone lover’s confession: I switched to the Nexus 4. Completely. — 24100.NET

$
0
0

Comments:"An iPhone lover’s confession: I switched to the Nexus 4. Completely. — 24100.NET"

URL:http://www.24100.net/2013/01/an-iphone-lovers-confession-i-switched-to-the-nexus-4-completely/


+++
Update: Shortly after publishing this article here, it became #1 on Y Combinator’s Hacker News. Subsequently, GIZMODO approached me and asked to re-publish it. Finally, ReadWriteWeb’s Editor-in-Chief (Dan Lyons) wrote a great follow-up. As it is difficult to keep up with responding to all the comments, I mainly focus on responding on Google+.
+++

First things first, I’d love to get in touch on Google+ and Twitter (@ralf).

Over the past few years I’ve invested a lot into Apple products and services.

If you’d come by my house, you’d find four of the latest Apple TVs, two iMacs, the latest MacBook Air, a MacBook Pro, more than five AirPort Express stations and Apple’s Time Capsule. You could touch every single iPhone, from the first up to the iPhone 5, iPads ranging from first generation to fourth and we’ve lately added two iPad minis.

My iTunes Library comprises well over 8,000 songs – all purchased via the iTunes Store. No matter whom you would ask, everybody will confirm that I’m what some folks call an Apple fanboy.

The reach of Apple’s products goes beyond my personal life.

As the co-founder of Germany’s largest mobile development shop, I’m dealing with apps – predominantly iOS powered – in my daily professional life.

Driven primarily by the business I run, I tried to give Android a chance more than once.

In various self-experiments, I tried to leave my iPhone at home for the Motorola Droid, the Nexus One, the Samsung Galaxy S II and S III – and always switched straight back to the iPhone. None of those Android devices have worked for me – yet.

And then I got the Nexus 4.

When the latest Google flagship Android device shipped, I almost expected it to turn out as yet another “take-a-look-and-sell-it-on-ebay” experience. Little did I know.

It’s now almost two weeks since I switched the Nexus 4 on for the first time – and meanwhile I have completely moved to it, leaving my iPhone 5 at home. Do I miss anything? Nope. Except iMessage. More on that later.

In this somewhat lengthy post, I’ll try to explain why.

My motivation is not to bash Platform A over Platform B. On the contrary: I will try to summarize my very personal findings and experience based on years of using iOS. I’ve seen the Apple platform evolve while Android was playing catch-up for so long. When iOS 6 came out, for the first time I complained about the lack of innovation in this major new release. I asked myself, whether we might see Apple beginning to lose its leading position in mobile platforms.

Before you read on, it’s important to emphasize that I’m a pro user.

I’m not the average smartphone owner, who makes just a couple of calls every now and then or runs an app once in a while. By the nature of my job and out of curiosity, I deal a lot with social media outlets, social networks and constantly try new services. With that said, my judgement might not be suitable for everyone. In case you consider yourself being a demanding power user, though, you might find this helpful.

At the time of this writing, I used Android Jelly Bean 4.2.1 on an LG Nexus 4.

Summary

Putting it into a single line: The latest version of Android outshines the latest version of iOS in almost every single aspect.

I find it to be better in terms of the performance, smoothness of the rendering engine, cross-app and OS level integration, innovation across the board, look & feel customizability and variety of the available apps.

In the following paragraphs, I try to explain why.

Performance and Smoothness of the Rendering Engine

I know there are benchmarks which measure all kinds of technical performance on a very detailed level. That’s not what I’ve done and, honestly, I’m not interested into that much. I’m talking about the performance I feel in my daily use.

Using the Nexus 4 with Android 4.2.1 is a pure pleasure when it comes to performance. I don’t exactly know what Google has done with “Project Butter” in Jelly Bean, but the result is astonishing. In the past, Android felt laggy, sometimes even slow and responses to gestures didn’t feel half as immediate as on iOS.

This has changed completely.

I’d say both platforms are at least even. In some cases, Android even feels a bit ahead of iOS 6. I especially got this impression when it comes to rapidly switching between apps – which I constantly do now – and scrolling through large amounts of more complex content. (I’m not talking just tables with text here.)

While Android still doesn’t give you bouncing lists and scroll views – primarily, because Apple has a patent for this specific behavior – every transition between views has been reworked, polished and modernized. In most cases, it feels more modern, clean and up-to-date than its iOS counterpart.

Cross-app and OS level integration

One of the biggest advantages I found during my daily use is the level of cross-app and OS level integration.

This also is the area where I got most disappointed when Apple introduced iOS 6.

In fact, I think iOS has reached a point where usability starts to significantly decrease due to the many workarounds that Apple has introduced. All of these just to prevent exposing a paradigm like a file system or allowing apps to securely talk to each other. There is a better way of doing this. Apple knows about it but simply keeps ignoring the issues.

On Android, it’s quite the opposite. One can see the most obvious example when it comes to handling all sorts of files and sharing.

Let’s assume, I receive an email with a PDF attachment which I’d like to use in some other apps and maybe post to a social network later.

On iOS, the user is forced to think around Apple’s constraints. There is no easy way to just detach the file from the email and subsequently use it in what ever way I want. Instead, all iOS apps that want to expose some sort of sharing feature, do have to completely take care for it themselves. The result is a fairly inconsistent, unsatisfying user experience.

On iOS, you might use the somewhat odd “Open in…” feature – in case the developer was so kind as to implement it – to first move the file over to Dropbox, which gives you a virtual, cloud-based file system. If you're lucky, the next app in which you want to use the file offers Dropbox integration too, so you can re-download it and start from there. All because Apple denies the necessity of basic cross-app local storage.

On Android, it’s really simple.

I can detach the file to a local folder and continue working with it from there, leveraging every single app that handles PDF files. If I receive a bunch of mp3 files, I can do the same – and every app that can handle audio playback can reuse those mp3 files.

Another great example: sharing stuff on social networks. On iOS, I have to rely on the developers again. Flipboard, as one of the better examples, gives me the ability to share directly with Google+, Twitter and Facebook. On my Nexus 4, I have 20+ options. That's because every app I install can register itself as a sharing provider – it's a core feature of the Android operating system.
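To make the mechanism concrete, here is roughly what the sending side looks like – a minimal Java sketch (the class name and the PDF URI are my own placeholders; the intent APIs themselves are standard Android). Any installed app whose manifest declares a matching filter shows up in the chooser automatically:

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;

    public final class ShareHelper {
        // Offer a PDF to the system. Every installed app that registered
        // an intent filter for ACTION_SEND with this MIME type shows up
        // in the chooser -- no per-app integration code needed.
        public static void sharePdf(Activity activity, Uri file) {
            Intent send = new Intent(Intent.ACTION_SEND);
            send.setType("application/pdf");
            send.putExtra(Intent.EXTRA_STREAM, file);
            activity.startActivity(Intent.createChooser(send, "Share PDF via..."));
        }
    }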

But it goes even further: On Android, I can change the default handlers for specific file types – much like I’m used to from desktop operating systems.

If, for example, you're not happy with the stock Photo Gallery application that shows up whenever an app wants you to pick an image, you can simply install one of over a hundred alternatives and tell Android to use it as the new default. The next time you post a photo with the Facebook app – or have to pick an image from within any other app – your favorite gallery picker shows up instead of Android's own.
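The gallery example works through the same resolution machinery: the requesting app asks the OS for "an image" with an implicit intent, and Android routes the request to whichever picker the user made the default. A minimal Java sketch of the requesting side (class name and request code are arbitrary placeholders):

    import android.app.Activity;
    import android.content.Intent;
    import android.provider.MediaStore;

    public final class ImagePicker {
        private static final int PICK_IMAGE = 42; // arbitrary request code

        // Ask the OS for an image. Android resolves this implicit intent
        // to the user's chosen default gallery app -- nothing in this
        // code names a specific picker.
        public static void pickImage(Activity activity) {
            Intent pick = new Intent(Intent.ACTION_PICK,
                    MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
            activity.startActivityForResult(pick, PICK_IMAGE);
        }
    }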

All of this is entirely impossible on iOS today. I've stopped counting how often I felt annoyed because I clicked a link to a location in Mobile Safari and would have loved for the Google Maps app to launch. Instead, Apple's own Maps app is hardcoded into the system, and there's no way for me to change that.

The customizability is simply stunning

Let me make this very clear: gone are the days when home screens on Android phones almost always looked awful.

If you don’t believe me, hop over to MyColorscreen and see for yourself.

Also note that all of those are real Android home screens, not just concepts provided by designers. They are not beautifully photoshopped wallpapers, but fully functional screens with app icons and active widgets.

And all of those can be configured pretty easily just by installing a couple of apps and tweaking settings. Here is an album showing my current configuration, which I was able to achieve after just a couple of days using Android as an absolute newbie.

Getting inspired? Here are some more of my favorites:

Now, iPhone lovers might argue that the average Joe doesn't want to deal with widgets, icons and custom animations.

I’ve used the same argument for years. Well, guess what, you don’t have to. The default Jelly Bean home screen looks beautiful already. But in case you want a somewhat more individual phone, the possibilities are endless.

For years, what you could do with Android simply yielded awful-looking home screens. This has changed. Significantly so.

Believe it or not, after having configured my Nexus 4 just the way I always wanted – giving me the fastest access to my most frequently used apps along with the most important information on a single screen – whenever I grab my iPhone for testing purposes, iOS feels old, outdated and less user-friendly. For me, there is currently no way of going back. Once you get used to all of these capabilities, it's hard to live without them.

App quality and variety

Yes, there are still lots of really ugly apps on Google Play.

In my opinion, there are two primary reasons for this.

First, the obvious one: the lack of centralized quality control and review. This is great for encouraging variety, but it obviously also allows some really cheap productions to be published to the store. Usually, you can spot those immediately from the screenshots on Google Play.

The second reason is more low-level: the way developers declare user interfaces (primarily in an XML configuration file) allows dirty UIs to be hammered together rapidly. That happens a lot, and users can see and feel it. iOS developers tend to be more inclined to involve designers, and iOS UIs cannot be crapped together as easily.

However, gone are the days when the apps I use most fell far behind their iOS counterparts.

The Facebook app is identical in terms of look and feel and features. As a plus, it has better cross-app integration. The Google+ app is better on Android, but that's to be expected. Flipboard is fantastic on Android, plus better integration. The same is true for Pulse News. The list goes on: Instagram, Path, LinkedIn, WhatsApp, Quora, Pocket, Amazon Kindle, Spotify, Shazam and Google Talk. They are all great on Android. Plus better integration. Plus home screen widgets. Sense a pattern here?

And if you want to experience some real UI magic – even if you just need an argument the next time you bump into an iPhone owner – install Zime, a highly addictive calendar for Android that features smooth 3D animations and a really innovative UI.

Talking about variety: this is where Android's openness pays off.

On iOS, many things I always wished to see developed simply cannot be done because of the strict sandbox Apple enforces around apps. On Android, I use an app to block unwanted calls. To auto-respond to incoming text messages. And to lock specific apps with an extra passcode, so my customers don't play with my Facebook profile when I hand over my Nexus 4 for demos.

I also have apps that give me great insight into mobile data use across the device and all apps. Or into battery consumption. Or into which apps phone home, and how frequently.

None of this is available on iOS, and possibly won't be at any time in the near future.

What I miss

I said this earlier: the only thing I miss is iMessage. I'm not kidding. Letting go of iMessage was difficult, as many of my friends are on iPhones and used to text me via iMessage. While there are perfectly good alternatives (Facebook Messenger, Google Talk and WhatsApp, to name only a few), from time to time I still find a couple of unread iMessages when I switch on my iPhone 5.

My most frequently used apps

I’m an Android newbie. During the last couple of days, I had to ask many questions and received hundreds of recommendations for apps. I installed, tried and uninstalled. And kept the great ones. My sincere thanks go out to the great Nexus and Android communities over at Google+.

In case you decided to give Android a try before you read this article, or got inspired here, I'd like to save you some of my journey. Here is a list of the apps I found most useful (and beautiful, given the high standards set by my years as an iPhone addict):

Note: I always use the paid/pro version of apps if one is available. Coming from iOS, I simply cannot adjust my eyes to in-app ads and probably never will. Google Play now accepts credit cards, PayPal and some other payment alternatives – plenty of choice. I encourage everybody to give back to the developer economy and not just go for the free versions.

In case you're wondering why I took the trouble to include all of the links to the apps above, well, here is another advantage over iOS: Google Play allows complete remote installs via the web. If you're logged into your account, you click the install button after visiting one of the links in any browser, and wherever your phone is, the respective app will be installed silently.

My Android Wish List

Let me finish this post with a couple of wishes I've got for the next major version of Android, hopefully to be unveiled at this year's Google I/O:

  • More and centralized settings for notifications – or, in short, a notification center.

    The rich notifications introduced in Jelly Bean and the overall usability of the notification bar and drawer are already far better than those on iOS. (On a side note, I never understood why usability masters like the Apple engineers decided to make the “clear” button so tiny that you can hardly hit it without a magnifying glass.) However, the level of customization you get for Android notifications is currently 100% up to the developers. This means that even though Android offers a great variety of possibilities, they are not consistently available in all apps. In fact, some apps barely let you switch notifications on and off, while others let you customize every aspect, from the notification sound to the color of the notification LED to do-not-disturb times. These settings should be made available globally and enforced through the APIs.

    For example, I'd love to receive notifications for Facebook messages without the full message preview showing up in the notification bar. Some apps let you choose between a complete preview and a generic “you've got mail” message that doesn't reveal the content – but it's up to the developer whether you get that choice.

    Or: Android supports a notification LED that can flash in different colors. I configured the LED on my Nexus 4 to blink green on new WhatsApp messages; incoming stuff from Facebook notifies in blue, and new business mail flashes the LED in white. What sounds like a tiny feature is really valuable: while sitting in a meeting, you can tell immediately whether you might want to check your phone right away or not. Unfortunately, not all apps let you customize the LED color. Again, it's up to the developer to provide these settings as part of their application – the code sketch right after this list shows what that per-app work looks like. This belongs in a centralized notification center. Options I'd like to see centralized: LED color, notification sound, content preview. They could also be exposed at the app level, but the Android notification center should allow for overrides.

  • Support for multiple accounts in Google Now.

    I'd love to see Google Now take advantage of multiple configured Google accounts. On my device, I'd like my Google Apps for Business account to drive the calendar-based cards, but my private one to drive everything else (location and browsing history, etc.). Currently, Google Now can only leverage a single account, so I had to switch browsing and location history on for the Google Apps for Business account I use professionally. This should be a no-brainer for Google, and I keep wondering why the folks there tend to forget these multiple-account scenarios.

  • Solving the inconsistencies around the back button.

    I've actually found this on many lists, and from what I've read it has already gotten better in Jelly Bean. However, at times I still get confused by the multiple navigation hierarchies caused by the native back button, which is part of the OS, and a second back button available within apps. Oddly enough, the mostly fantastic Google+ Android app suffers from this issue, too. Sometimes I end up on my home screen just because I went back “too far”. It's not a big issue, but one that needs to be addressed. As a starter, how about giving the damn back button a different color whenever the next tap will take you out of the app?

  • Indicate whether an app uses Google Cloud Messaging or some other technology to stay connected.

    I believe this one to be huge: on iOS, there are essentially no long-running background processes, except for VoIP or navigation apps. This means all apps that notify users of incoming data while they are inactive make use of a centralized service operated by Apple, called Push Notifications. It has a great advantage with respect to battery life, as a single OS-level process monitors all incoming messages and distributes them to the targeted apps, instead of potentially many apps doing whatever they want in order to stay connected. Android has a similar service, named Google Cloud Messaging. Unfortunately, there is no obvious way to tell apart the apps that leverage this service from those that constantly poll, or even keep a socket connection open to their home servers. I'd love to see the ones making use of Google Cloud Messaging identified on Google Play and at the OS level, maybe in the already available App Info screen. That way, I could dramatically increase battery life by stopping those that constantly talk back home, and encourage developers to make use of Google's Cloud Messaging service.
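To illustrate the first wish above: blinking the LED in a custom color is per-app developer code today. Here is roughly what it looks like – a minimal Java sketch using the support library's notification builder of that era (icon, texts and class name are my own placeholders). Because this lives inside each app, every developer has to ship their own settings UI for it, which is exactly why a centralized notification center would help:

    import android.app.Notification;
    import android.app.NotificationManager;
    import android.content.Context;
    import android.graphics.Color;
    import android.support.v4.app.NotificationCompat;

    public final class LedNotifier {
        // Post a notification that blinks the LED green (500 ms on, 500 ms off).
        // The color is hardcoded here unless the developer builds a settings
        // screen for it -- the OS offers no central place to override it.
        public static void notifyWithGreenLed(Context context) {
            Notification n = new NotificationCompat.Builder(context)
                    .setSmallIcon(android.R.drawable.stat_notify_chat)
                    .setContentTitle("New message")
                    .setContentText("You have one unread message")
                    .setLights(Color.GREEN, 500, 500)
                    .build();
            NotificationManager nm = (NotificationManager)
                    context.getSystemService(Context.NOTIFICATION_SERVICE);
            nm.notify(1, n);
        }
    }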

One last word

At the beginning I stated that I had tried Android many times before and it never worked for me. I figure there are two main reasons for this. First, Android made a major step forward with Jelly Bean; it just wasn't on par with iOS before. Second, and more importantly, I found the stock Android experience provided by Google to be the best you can get. After switching to the Nexus 4, I tried my Samsung Galaxy S III again, and it did not work for me.

What Samsung does with its TouchWiz modifications and many other tiny changes – and the same goes for other non-Nexus vendors – totally ruins the experience for me. If you're coming from iOS, I highly recommend choosing one of the Nexus devices, with guaranteed updates and a clean Android environment the way Google envisioned it.

Closing it off

This was rather lengthy, but I figured switching mobile OS platforms should be worth an in-depth look. Hence this post. I hope you've enjoyed it.

Will I sell my iPhone 5? No. No. No. I never sold one. I’ll keep it. Maybe it’ll manage to win me back with iOS 7.

Looking forward to your feedback in the comments. Or on Google+, Facebook and Twitter.

IRC is dead, long live IRC


Comments:"IRC is dead, long live IRC"

URL:http://royal.pingdom.com/2012/04/24/irc-is-dead-long-live-irc/


IRC (Internet Relay Chat) has been around since 1988, which makes it ancient in Internet terms.

And although it’s still used by hundreds of thousands of users around the world, IRC has seen a dramatic downturn in usage.

We have talked to the creator of IRC, and others, about why the once so widely used technology has seemingly fallen out of favor with so many users.

The origins of IRC

We connected with Jarkko Oikarinen, the creator of IRC, who works at Google in Sweden, and he told us the story of how IRC was born.

The first IRC Server. Photo courtesy of Wikipedia.

Oikarinen says that he created IRC over the course of three to four months in 1988, while he was a summer intern at the University of Oulu in Finland.

At the time, Oikarinen was maintaining a local BBS (Bulletin Board System) called OuluBox, and its chat system needed refreshing. While working on the updated chat system, he also wanted to let participants from the Internet, who didn't need to be logged in to OuluBox, join the chat.

Thus, IRC was born.

Since then, IRC has served as an invaluable way of communicating for scores of users around the world. For almost whatever you’d like to discuss or get help with, there’s been an IRC network and channel that would serve your interests.
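Part of what made IRC so easy to build on is that the protocol is plain text over a TCP socket – simple enough to speak almost by hand. A minimal Java sketch of the client handshake (the server, nickname and channel are placeholders, and a real client handles far more of the protocol than this):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;

    public final class TinyIrcClient {
        public static void main(String[] args) throws Exception {
            // Connect, register a nickname, join a channel, say hello.
            try (Socket socket = new Socket("irc.example.net", 6667)) {
                OutputStream out = socket.getOutputStream();
                send(out, "NICK demo_user");
                send(out, "USER demo_user 0 * :Demo User");
                send(out, "JOIN #test");
                send(out, "PRIVMSG #test :hello from a tiny client");
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    // Servers drop clients that don't answer keep-alive pings.
                    if (line.startsWith("PING ")) {
                        send(out, "PONG " + line.substring(5));
                    }
                    System.out.println(line);
                }
            }
        }

        private static void send(OutputStream out, String line) throws Exception {
            out.write((line + "\r\n").getBytes("UTF-8")); // IRC lines end in CRLF
            out.flush();
        }
    }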

But since the turn of the century, IRC has dropped in popularity, with users moving to other forms of communication like the web and social media. We took a look at the numbers to see just how bad it is for IRC.

IRC has lost 60% of its users since 2003

It's clear that overall IRC usage, in terms of both users and channels, has been in steady decline for many years. In fact, IRC has lost 60% of its users since 2003 – a dramatic fall in numbers for any service.

Oikarinen attributes the decline in IRC to a trend of commercialization on the Internet.

“Companies want to bring users to their walled gardens,” he says, to “keep the users' profiles locked there and not make it easy for users to leave the garden and take their data with them.”

IRC’s distributed nature does not fit with the walled garden approach, says Oikarinen. So instead of supporting open communication tools like IRC, companies invest money in making their own solutions, he claims.

Christian Lederer, also known as “phrozen77,” is the webmaster of IRC-Junkie.org, and he's had his finger on the pulse of the IRC community for many years. According to Lederer, there are many possible reasons behind the decline in IRC usage:

  • Lederer lists large and prolonged DDoS attacks in the early 2000s as one main reason behind the decline. The attacks disrupted many IRC networks, including the most popular ones, and crippled the chat experience for users. When the networks were back up again, many users had migrated elsewhere or abandoned IRC completely.
  • Software piracy and the spreading of warez is another reason Lederer points to. In the early days of IRC, finding such content was a major reason why some users connected to IRC networks. Over the years, users have found newer and easier ways of obtaining warez, like P2P, resulting in less of a need for IRC.
  • Social networking also played an important part in users fleeing IRC. With services like Facebook, Twitter, LinkedIn, and others, users found it more convenient to communicate through the social network than by logging in to an IRC channel.
  • Finally, Lederer points to the declining costs and increasing availability of cheap and reliable hosting. If someone disagreed with the way a network was run, they could simply start their own – potentially taking the channels they operated, and their users, with them, thereby decimating the numbers of the bigger networks even further.

So the decline in IRC usage is a complex issue with no straightforward answers. But it’s not all bad news, as we’ll see next.

Among top IRC networks, Freenode bucks the trend

If we look more closely at the top six IRC networks and chart their development since 2003, it’s clear there are winners and losers.

As you can see, QuakeNet, EFnet, and IRCNet have lost a lot of users, while DALnet and Rizon are holding steady without moving much up or down.

It's not all doom and gloom in the world of IRC, however. Freenode.net is not following the typical trend; rather, it seems to be growing by leaps and bounds, up 486% from 2003 to 2012 in terms of users. According to Freenode's blog, the network passed 80,000 concurrent connections on April 2, 2012.

In fact, according to these numbers, Freenode has just become the number one IRC network in the world, edging past QuakeNet.

Christel Dahlskjaer, president of the Peer-Directed Projects Center (PDPC), the organization that operates Freenode, attributes the network's growth to its focus on free and open source software.

“Freenode has indeed grown and continues to grow,” Dahlskjaer explains. “Freenode has never been a ‘traditional’ IRC network though. Our users tend to come to Freenode because they contribute to or use a free and open source (or other peer-directed) project that has a channel or more on the network. Then in turn other projects come to Freenode because there is a lot of overlap when it comes to users and contributors across the various projects.”

“In short, Freenode isn't growing because it is Freenode or because it is IRC. Freenode is growing because the projects that chose to make Freenode ‘home’ are growing,” Dahlskjaer summarizes.

On the question of whether Freenode’s current good fortune is sustainable, Dahlskjaer is direct. “I see no reason to think that growth is likely to stall anytime soon. For the last six years at least, Freenode has been very steady,” she says.

Where is IRC headed?

It’s clear that IRC is declining in overall usage but growing in certain niche areas. Perhaps that’s where the future of IRC lies.

Lederer says that IRC has to innovate to compete with easy-to-use solutions such as Facebook, and that this innovation has to be driven by a change in mindset among the developers of IRC-related software – on the client side as well as on the protocol and server side.

To Oikarinen, “more interoperability” with other systems, such as 3D virtual worlds and multimedia, is one “interesting path forward.” Oikarinen is no longer actively involved in the development of IRC, but he says that it's up to individuals now.

“It does not necessarily require a large team to make significant progress. Just one person can make a huge difference,” Oikarinen says.

Lederer makes a similar point, saying that some stand-alone clients are already pushing the boundaries of what is possible on IRC. He points to projects like KVIrc, which brought video chat to traditional IRC, as well as Konversation, with which several IRC users can share a virtual whiteboard.

Long live IRC

Although there's no reason to think that IRC will disappear anytime soon, there's also cause for concern about the future of this once so popular technology. While Freenode can serve as an example of a growing IRC community, that, in and of itself, does not mean the future is secure for IRC.

We at Pingdom recognize the tremendous value that IRC has brought to users around the world for many years, and hope that IRC will keep being widely used. In fact, we’ve just set up our own IRC server, which we have some exciting plans for.

Note about the data: We used the Internet Archive's Wayback Machine to go back in time and look at how IRC has developed over recent years, using data from SearchIRC.com. We'd like to point out that not all IRC networks are indexed by SearchIRC.com, the service we used to create the charts for this article. For example, Undernet is not part of the SearchIRC index. We believe, however, that the general trend displayed by the SearchIRC data is correct.

Top image via Shutterstock.

Font Awesome, the iconic font designed for use with Twitter Bootstrap


Comments:"Font Awesome, the iconic font designed for use with Twitter Bootstrap"

URL:http://fortawesome.github.com/Font-Awesome/?src=hn


One font, 249 icons

In a single collection, Font Awesome is a pictographic language of web-related actions.

CSS control

Easily style icon color, size, shadow, and anything that's possible with CSS.

Infinite scalability

Scalable vector graphics means every icon looks awesome at any size.

Free, as in Beer

Font Awesome is completely free for commercial use. Check out the license.

IE7 Support

Font Awesome supports IE7. If you need it, you have my condolences.

Perfect on Retina Displays

Font Awesome icons are vectors, which means they're gorgeous on high-resolution displays.

Made for Twitter Bootstrap

Designed from scratch to be fully compatible with Twitter Bootstrap 2.2.2.

Screen reader compatible

Font Awesome won't trip up screen readers, unlike other icon fonts.

Text Editor Icons

  • icon-file
  • icon-file-alt
  • icon-cut
  • icon-copy
  • icon-paste
  • icon-save
  • icon-undo
  • icon-repeat
  • icon-text-height
  • icon-text-width
  • icon-align-left
  • icon-align-center
  • icon-align-right
  • icon-align-justify
  • icon-indent-left
  • icon-indent-right
  • icon-font
  • icon-bold
  • icon-italic
  • icon-strikethrough
  • icon-underline
  • icon-link
  • icon-paper-clip
  • icon-columns
  • icon-table
  • icon-th-large
  • icon-th
  • icon-th-list
  • icon-list
  • icon-list-ol
  • icon-list-ul
  • icon-list-alt
