
The First Look at the New Oculus VR Prototype | Gadget Lab | Wired.com

URL:http://www.wired.com/gadgetlab/2014/01/oculus-rift


Every time virtual-reality company Oculus brings a prototype of its Rift headset to a show, it takes another big step forward. And the prototype at this year’s CES may be the biggest leap yet.

Last January, Oculus arrived at its first CES with a degree of uncertainty. It hadn’t yet released the developer-only Rift headset it had Kickstarted the previous fall. In fact, few outside the company had even seen it. Programmer John Carmack had brought an early prototype to videogame show E3 that summer, but since then there’d been radio silence as Oculus’ bare-bones staff worked heads-down on the developer unit. Last year’s CES was in many ways Oculus’ coming-out party. As it turned out, plenty of people attended: the Rift snagged “Best of CES” awards from everyone and their mother (including WIRED, though our mothers weren’t involved in the voting).

Since then, Oculus has continually improved and refined the Rift en route to a consumer release later this year. The display has been kicked up to 1080p; the form factor has become sleeker. Perhaps most importantly for adoption, potential latency has been greatly reduced, alleviating much of the “simulator sickness” that can accompany wearing VR headsets. And now, with another CES upon us, others are getting in on the act; Sony announced a new head-mounted display for movie viewing and games. It should be noted, though, that this is unlikely to be a direct competitor to the Rift — Sony’s unit gives wearers a 45-degree field of vision, compared to the Rift’s staggering 110 degrees.

Oculus unveiled even more this morning. There’s a new demo, courtesy of Epic Games. There’s a new AMOLED screen. There’s low persistence, a display technology that mitigates motion blur and “smearing,” both of which can contribute to user discomfort. For the first time, Rift is capable of positional tracking, which allows users to lean and move within the game environment by simply moving their head. And there’s a new prototype — known as “Crystal Cove” — that incorporates it all, getting latency down to around 30 milliseconds (on its way to the sub-20 threshold that Oculus considers the holy grail).

The new demo is visually similar to a previous demo that Oculus used throughout 2013 to show off the Rift’s immersive 360-degree playspace. Both were designed by Epic Games, and both occur within the universe of “Elemental,” Epic’s Unreal 4 game engine demo. The new demo places the user inside the same stone cave, facing the same horned lava-god/monster being as in the previous demo (bear with us here). This time, users play a top-down tower-defense scenario while the horned lava-god/monster guy watches. As in the previous demos, the visual effects are plentiful.

Unlike the two previous demos, however, it monitors the user’s head movements in real space, and it’s able to translate those movements not just into orientation changes — looking up, down, or behind you — but also into actual motion, which previously was possible only by using a game controller in conjunction with the Rift. It utilizes an “outside-in” system: an externally mounted camera tracks small LED lights on the prototype’s faceplate, adding three “degrees of freedom” (forward/backward, left/right, and up/down) to the Rift’s tracking ability. Up until now, developers and early Oculus adopters have only been able to accomplish this by taping a Razer Hydra motion controller to the side of their Rift headsets. Now, though, leaning down while playing the demo brings you closer to the tower-defense game, and lets you watch the armies you control firing turrets and launching minions. It’s the first look at an untethered VR experience.

“We’ll need some seat belts for people,” says Oculus CEO Brendan Iribe. “You want to stand up, you want to walk around.”

The demo also highlights the display’s low persistence. In previous prototypes, turning your head quickly caused your surroundings to blur, an effect caused by the device registering new movement before the frame had a chance to update. Iribe describes it as “the wrong image being stuck to your face.” That’s effectively gone now.

“In the past,” Iribe says, “people would have to stop moving to stare at something. With low persistence, you can continue to stare at an object or read text while you’re moving your head.”

A second demo allows users to play EVE: Valkyrie, a space dogfighting game set in the EVE Online universe. Oculus brought the demo to E3 last year on its non-HD prototypes, but the company has updated it with the new feature set and the 1080p screen.

Of course, Oculus being Oculus, how the Crystal Cove prototype accomplishes low persistence and 6-DOF tracking is subject to change.

“This is just a feature prototype,” Iribe says. “It’s not at all representative of the final consumer look and feel. Once we feel like something is good enough and we’re confident we’ll be able to ship it with the consumer product, we feel good about announcing it. We still may change how it’s done, but we feel great about the positional tracking system. It’s been a year in the works, we’ve tried multiple different approaches, and this delivered the experience we were looking for.”

Even the display is subject to change. That’s why Oculus won’t even cop to the vendor it’s using for the screen. “We first showed HD without committing to what exactly it would be,” Iribe says. “It’s at least going to be 1080p, but we don’t know what screen we’re going to use, what size. We didn’t even know the resolution.”

Someday, all of those questions will be answered. Until then, there’s CES.


jordan-wright/email · GitHub

URL:https://github.com/jordan-wright/email#email


email

Robust and flexible email library for Go

Email for humans

The email package is designed to be simple to use, but flexible enough so as not to be restrictive. The goal is to provide an email interface for humans.

The email package currently supports the following:

  • From, To, Bcc, and Cc fields
  • Email addresses in both "test@example.com" and "First Last <test@example.com>" format
  • Text and HTML Message Body
  • Attachments
  • Read Receipts
  • Custom headers
  • More to come!

Installation

go get github.com/jordan-wright/email

Note: Requires Go version 1.1 or above

Examples

Sending email using Gmail

// Assumes the imports "net/smtp", "log", and "github.com/jordan-wright/email".
e := email.NewEmail()
e.From = "Jordan Wright <test@gmail.com>"
e.To = []string{"test@example.com"}
e.Bcc = []string{"test_bcc@example.com"}
e.Cc = []string{"test_cc@example.com"}
e.Subject = "Awesome Subject"
e.Text = "Text Body is, of course, supported!"
e.HTML = "<h1>Fancy HTML is supported, too!</h1>"
// Send returns an error; don't ignore it in real code.
if err := e.Send("smtp.gmail.com:587", smtp.PlainAuth("", "test@gmail.com", "password123", "smtp.gmail.com")); err != nil {
    log.Fatal(err)
}

Another Method for Creating an Email

You can also create an email directly by creating a struct as follows:

e := &email.Email{
    To:      []string{"test@example.com"},
    From:    "Jordan Wright <test@gmail.com>",
    Subject: "Awesome Subject",
    Text:    "Text Body is, of course, supported!",
    HTML:    "<h1>Fancy HTML is supported, too!</h1>",
    Headers: textproto.MIMEHeader{},
}

Attaching a File

e := email.NewEmail()
e.AttachFile("test.txt")
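
Setting Custom Headers

The feature list above also mentions read receipts and custom headers, which the examples don't cover. Here's a minimal sketch, assuming Headers is the textproto.MIMEHeader shown in the struct example above; the header names themselves are common mail conventions, not part of this library's API:

// Assumes the imports "net/textproto" and "github.com/jordan-wright/email".
e := email.NewEmail()
e.Headers = textproto.MIMEHeader{}
// A conventional read-receipt request; the recipient's client decides whether to honor it.
e.Headers.Set("Disposition-Notification-To", "test@gmail.com")
// Arbitrary custom headers work the same way.
e.Headers.Set("X-Mailer", "jordan-wright/email")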

Documentation

http://godoc.org/github.com/jordan-wright/email

Other Sources

Sections inspired by the handy gophermail project.

The Seven Deadly Sins of Startup Storytelling

URL:http://firstround.com/article/The-Seven-Deadly-Sins-of-Startup-Storytelling


Make stories part of your culture — and more than that, the integrity of your culture. All-hands meetings can be pivotal here. Stories are often the best way to relate how a company is doing, what people are doing well, and what they could be doing better. And when leaders do this with transparency, honesty and humility, they make their employees feel good about their work — even if things aren’t all peachy.

In practice: Capturing moments, good or bad, in story form can authentically connect your employees to your company, and increase their commitment to their work. Consider kicking off staff meetings with stories instead of progress reports. There are a few ways to do this. As you go around the room, ask everyone to briefly talk about the strangest thing that’s happened to them since the last meeting, or a customer story that involved the greatest amount of surprise. Did someone use your product in a new way? Did a hater become a believer?

7. Proprietary. Companies with a stranglehold on what their corporate story is and who can tell it are missing a world of opportunities. And they're doing so at a time when social media makes it easier than ever to connect and share. Stories told by employees and by customers are incredible, sometimes invaluable assets (see Jared for Subway). Recognize the value in stories from internal and external sources, design ways to collect them, and enable your customers, advocates and employees to be storytellers too.

The best tactic here is to create an internal “story bank,” or database of stories, where employees and even customers can write and submit stories complete with titles. These stories can then be tagged by keyword, so that people looking for particular anecdotes or examples can easily find them. This also makes it easy for employees browsing through customer stories to reach out to the authors. 

Nike, Apple and eBay all harness stories as tools to crowdsource ideas — especially what their consumers are really passionate about. In doing so, they give employees the language and initiative to tell personal stories of meaning, and to amplify and distribute brand initiatives in story form.

In Practice: Comcast pioneered one of the very first effective campaigns on Twitter when it launched @ComcastCares. Once a hotbed of Comcast hate, Twitter became a huge brand building environment and customer service win for them. Comcast wrote the book on how people telling negative customer stories on social media (all-caps rants about poor cable service, photos of the cable guy asleep on their couch, etc.) could be co-opted and turned into authentic and powerful testimonials. 

To start, Comcast simply trolled Twitter for mentions of the company, identified complaints, and addressed individuals publicly on the platform. Employees introduced themselves by name (not as the faceless Comcast Customer service organization) and combined apology with sincere effort to help. Comcast quickly found that even the angriest individuals stop raging when a reasonable person is trying to help them in a public forum. From there, it built its strategy by acknowledging its negative image and visibly working to flip it on its head.

On the other end of the spectrum, JPMorgan skipped these critical steps and discovered the hard way that the rosy story they were telling themselves (and wanted their audience to retell) was not what caught fire when they launched #AskJPM on Twitter. Misjudging both the medium and the moment, the hashtag that they thought would showcase sage financial advice solicited public outrage not seen since the original Occupy Wall Street protests. The company quickly shuttered the campaign, but #AskJPM lives on as a social media joke and cautionary tale.

As content marketing increasingly becomes the norm, tactical storytelling is sure to be reduced to a science. But there's danger in being overly reductionist. What makes good stories work is the same unpredictable, creative, unintuitive quality that makes humans human. Breakout success won't follow from the rote application of step-by-step guides or how-tos. Design your strategy to avoid the seven sins above, however, and you'll be in good shape to forge a voice of your own.

How We Got Our First 2,000 Users Doing Things That Don't Scale ⚙ Co.Labs ⚙ code + community

URL:http://www.fastcolabs.com/3024472/how-we-got-our-first-2000-users-doing-things-that-dont-scale?utm_source=Product+Hunt&utm_campaign=af88ca3153-daily-email-01-06-2014&utm_medium=email&utm_term=0_2cd7d34185-af88ca3153-104053773


Great products die every day. It takes more than product to build a successful business, yet founders proceed without addressing the important question: how do we get users? No matter how useful your product might be, it isn't a business without users.

With Product Hunt, we focused on user acquisition before we had a product. Twenty days after its public launch, we had a community of 2,000 users that we acquired by doing things that don't scale. Here's how we did it.

Product Hunt, a daily leaderboard of new products, began as an email list using Linkydink, a tool for creating collaborative daily email digests. Contributors submitted links to products and each day subscribers received an email of new and interesting products. I seeded the community by inviting a few dozen founders, investors, and startup folks I knew. To my surprise, people really enjoyed the daily email and the subscriber base grew organically.

What began as an experiment quickly grew into something much bigger. Encouraged by the positive feedback from the community, I sought to build the "real" Product Hunt and reached out to my buddy Nathan Bashaw for help.

Over Thanksgiving break, we designed and built Product Hunt. Meanwhile, we reached out to contributors to the MVP and other respected product people, sharing early mocks and gathering feedback. We weren't just doing customer development; we were getting them excited and making them feel like part of the product (and they were, helping guide our design decisions).

The conversation that followed helped us better understand our initial user base and build a desirable product.

Five days later, we had a very minimal but fully functional product. We emailed our supporters a link to Product Hunt, asking them not to share it publicly.

The supporters were thrilled to join and play with a working version of something they had thought about and, indirectly, helped build. That day we acquired our first 30 users.

We still weren't ready to share Product Hunt publicly yet. It was buggy and we wanted to ensure people enjoyed the product before expanding to a larger audience. Over the next week we squashed bugs, gathered additional feedback, and invited a few more people to join.

Your first users matter. We knew how important it was to seed Product Hunt with the right people from the start. Initial users form the community's culture and once established, it is very difficult to change. By the end of the week, we had 100 users and felt ready to share Product Hunt with the world.

I reached out to Carmel DeAmicis, a reporter for PandoDaily. We had met once before, and the respect I had earned guest writing for the popular tech publication helped me land a last-minute meeting later that night. We met at Homestead, a bar in the Dogpatch district of San Francisco, and I told her our story and vision for Product Hunt.

The next day Carmel confirmed an article would go live the following day. Immediately, we hopped back into our inbox to spread the news to our users.

Early contributors appreciated the note: hearing the backstory, and being part of making Product Hunt a success. Beyond just sharing the news, our email included two specific asks:

  • Post a Product: It was important for us to have quality products and a healthy level of activity at launch. We were about to make a first impression on many.
  • Share the Article: To maximize exposure, we asked early adopters--many of whom have a large following and influence--to share the article. To make it even easier, we provided a "click to tweet" link that opened Twitter with a pre-created message (sketched below).
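
The "click to tweet" mechanic is just a link to Twitter's standard web-intent endpoint with the message pre-filled. A minimal sketch in Go; the message text here is made up, since the article doesn't quote the real one:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    // Hypothetical pre-created message.
    msg := "Check out Product Hunt, a daily leaderboard of new products!"
    // twitter.com/intent/tweet opens a compose window with the text filled in.
    fmt.Println("https://twitter.com/intent/tweet?text=" + url.QueryEscape(msg))
}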

The launch was a success, and by the end of that day we had acquired our 400th user.

Growth was fantastic, but in reality, user acquisition wasn't our primary goal. Engagement and retention are most important at this early stage. If people don't stick around, press goes to waste. Or worse, founders are fooled into thinking they're making progress.

So why bother with press in the first place? The PandoDaily article was strategic--we weren't just trying to acquire more users. The primary goal was to get early adopters excited and prove to the tech community that Product Hunt isn't just another one of my ephemeral experiments. We kept beating the drum.

We reached out to Chris Dannen, an editor at Fast Company. As with PandoDaily, I had contributed several articles over the previous six months, which made connecting easier. I sent Chris a draft of my article, describing the story behind Product Hunt and the "20-minute MVP" used to validate demand for the product. I believed the Fast Company audience would enjoy the piece, and so did Chris.

Three days later The Wisdom Of The 20-Minute Startup was published, generating another boost of growth. Soon after, we acquired our 800th user.

Since public launch, we carefully monitored who was signing up, identifying influencers and those we knew would make good contributions to the community. Tools like Intercom and Rapportive were very helpful, translating nondescript email addresses into identifiable people by surfacing their Twitter and LinkedIn profiles. Once we identified an influencer, Nathan or I sent a personal email, inviting them to contribute and linking to the PandoDaily or Fast Company articles to tell our story. A manual process indeed, but an effective way to recruit good contributors and open lines of communication for future feedback.

We also asked for referrals, emailing people using the product to ask if they knew of other product people that would make good contributions. We could have automated this but at the cost of delivering a less personal and effective message.

Most people had a few friends that immediately came to mind and gladly made introductions. As with getting press, asking for referrals was designed to build a stronger, more engaged community, not just acquire additional users. Product Hunt is more fun with friends, with people our community knows, respects, and trusts. The more one-degree connections, the more people are encouraged to use the product.

Our manual efforts growing the community paid off. 20 days after Product Hunt's private launch and several hundred emails later, we acquired our 2,000th user.

Although we've found early success growing Product Hunt, the future is always murky. Smart and skeptical entrepreneurs ask us:

  • Will people stick around?
  • How will you maintain quality contributions and discussion as the community grows?
  • Is Product Hunt limited to the early adopter, tech community or does it have mainstream appeal?

We think about these questions and will answer them as the product and community mature. We embrace this uncertainty; the best products are often born from polarization. If everyone knew the answer, Product Hunt would have already existed.

Ryan Hoover is the co-founder of Product Hunt. This essay is part of a series of posts where he shares the strategies, tactics, and surprises his team encounters building their product. Subscribe to his blog to follow along.

Hover.css - A collection of CSS3 powered hover effects

URL:http://ianlunn.github.io/Hover/


About Hover.css

All Hover.css effects make use of a single element (with the help of some pseudo-elements where necessary), are self contained so you can easily copy and paste them, and come in CSS and SASS flavours.

For best results, hover effects use a couple of "hacks" (undesirable but usually necessary lines of code). For more information on these hacks and whether you need them, please read the FAQ.

Many effects use CSS3 features such as transitions, transforms and animations. Old browsers that don't support these features may need some extra attention to be certain a fallback hover effect is still in place.

License

hover.css is open source, and made available under the MIT License. Distribute, use as-is, or modify to your liking in personal and commercial projects. Please retain the original readme and license files.

Placing author information in your stylesheet, credits page or humans.txt is much appreciated.

The homogenization of scientific computing, or why Python is steadily eating other languages’ lunch | (R news & tutorials)

URL:http://www.r-bloggers.com/the-homogenization-of-scientific-computing-or-why-python-is-steadily-eating-other-languages-lunch/


Over the past two years, my scientific computing toolbox has been steadily homogenizing. Around 2010 or 2011, my toolbox looked something like this:

  • Ruby for text processing and miscellaneous scripting;
  • Ruby on Rails/JavaScript for web development;
  • Python/Numpy (mostly) and MATLAB (occasionally) for numerical computing;
  • MATLAB for neuroimaging data analysis;
  • R for statistical analysis;
  • R for plotting and visualization;
  • Occasional excursions into other languages/environments for other stuff.

In 2013, my toolbox looks like this:

  • Python for text processing and miscellaneous scripting;
  • Ruby on Rails/JavaScript for web development, except for an occasional date with Django or Flask (Python frameworks);
  • Python (NumPy/SciPy) for numerical computing;
  • Python (Neurosynth, NiPy etc.) for neuroimaging data analysis;
  • Python (NumPy/SciPy/pandas/statsmodels) for statistical analysis;
  • Python (MatPlotLib) for plotting and visualization, except for web-based visualizations (JavaScript/d3.js);
  • Python (scikit-learn) for machine learning;
  • Excursions into other languages have dropped markedly.

You may notice a theme here.

The increasing homogenization (Pythonification?) of the tools I use on a regular basis primarily reflects the spectacular recent growth of the Python ecosystem. A few years ago, you couldn’t really do statistics in Python unless you wanted to spend most of your time pulling your hair out and wishing Python were more like R (which is a pretty remarkable confession, considering what R is like). Neuroimaging data could be analyzed in SPM (MATLAB-based), FSL, or a variety of other packages, but there was no viable full-featured, free, open-source Python alternative. Packages for machine learning, natural language processing, and web application development were only just starting to emerge.

These days, tools for almost every aspect of scientific computing are readily available in Python. And in a growing number of cases, they’re eating the competition’s lunch.

Take R, for example. R’s out-of-the-box performance with out-of-memory datasets has long been recognized as its Achilles heel (yes, I’m aware you can get around that if you’re willing to invest the time–but not many scientists have the time). But even people who hated the way R chokes on large datasets, and its general clunkiness as a language, often couldn’t help running back to R as soon as any kind of serious data manipulation was required. You could always laboriously write code in Python or some other high-level language to pivot, aggregate, reshape, and otherwise pulverize your data, but why would you want to? The beauty of packages like plyr in R was that you could, in a matter of 2 – 3 lines of code, perform enormously powerful operations that could take hours to duplicate in other languages. The downside was the steep learning curve associated with learning each package’s often quite complicated API (e.g., ggplot2 is incredibly expressive, but every time I stop using ggplot2 for 3 months, I have to completely re-learn it), and having to contend with R’s general awkwardness. But still, on the whole, it was clearly worth it.

Flash forward to The Now. Last week, someone asked me for some simulation code I’d written in R a couple of years ago. As I was firing up RStudio to dig around for it, I realized that I hadn’t actually fired up RStudio for a very long time prior to that moment–probably not in about 6 months. The combination of NumPy/SciPy, MatPlotLib, pandas and statsmodels had effectively replaced R for me, and I hadn’t even noticed. At some point I just stopped dropping out of Python and into R whenever I had to do the “real” data analysis. Instead, I just started importing pandas and statsmodels into my code. The same goes for machine learning (scikit-learn), natural language processing (nltk), document parsing (BeautifulSoup), and many other things I used to do outside Python.

It turns out that the benefits of doing all of your development and analysis in one language are quite substantial. For one thing, when you can do everything in the same language, you don’t have to suffer the constant cognitive switch costs of reminding yourself, say, that Ruby uses blocks instead of comprehensions, or that you need to call len(array) instead of array.length to get the size of an array in Python; you can just keep solving the problem you’re trying to solve with as little cognitive overhead as possible. Also, you no longer need to worry about interfacing between different languages used for different parts of a project. Nothing is more annoying than parsing some text data in Python, finally getting it into the format you want internally, and then realizing you have to write it out to disk in a different format so that you can hand it off to R or MATLAB for some other set of analyses*. In isolation, this kind of thing is not a big deal. It doesn’t take very long to write out a CSV or JSON file from Python and then read it into R. But it does add up. It makes integrated development more complicated, because you end up with more code scattered around your drive in more locations (well, at least if you have my organizational skills). It means you spend a non-negligible portion of your “analysis” time writing trivial little wrappers for all that interface stuff, instead of thinking deeply about how to actually transform and manipulate your data. And it means that your beautiful analytics code is marred by all sorts of ugly open() and read() I/O calls. All of this overhead vanishes as soon as you move to a single language.

Convenience aside, another thing that’s impressive about the Python scientific computing ecosystem is that a surprising number of Python-based tools are now best-in-class (or close to it) in terms of scope and ease of use–and, by virtue of C bindings, often even in terms of performance. It’s hard to imagine an easier-to-use machine learning package than scikit-learn, even before you factor in the breadth of implemented algorithms, excellent documentation, and outstanding performance. Similarly, I haven’t missed any of the data manipulation functionality in R since I switched to pandas. Actually, I’ve discovered many new tricks in pandas I didn’t know in R (some of which I’ll describe in an upcoming post). Considering that pandas considerably outperforms R for many common operations, the reasons for me to switch back to R or other tools–even occasionally–have dwindled.

Mind you, I don’t mean to imply that Python can now do everything anyone could ever do in other languages. That’s obviously not true. For instance, there are currently no viable replacements for many of the thousands of statistical packages users have contributed to R (if there’s a good analog for lme4 in Python, I’d love to know about it). In signal processing, I gather that many people are wedded to various MATLAB toolboxes and packages that don’t have good analogs within the Python ecosystem. And for people who need serious performance and work with very, very large datasets, there’s often still no substitute for writing highly optimized code in a low-level compiled language. So, clearly, what I’m saying here won’t apply to everyone. But I suspect it applies to the majority of scientists.

Speaking only for myself, I’ve now arrived at the point where around 90 – 95% of what I do can be done comfortably in Python. So the major consideration for me, when determining what language to use for a new project, has shifted from “what’s the best tool for the job that I’m willing to learn and/or tolerate using?” to “is there really no way to do this in Python?” By and large, this mentality is a good thing, though I won’t deny that it occasionally has its downsides. For example, back when I did most of my data analysis in R, I would frequently play around with random statistics packages just to see what they did. I don’t do that much any more, because the pain of having to refresh my R knowledge and deal with that thing again usually outweighs the perceived benefits of aimless statistical exploration. Conversely, sometimes I end up using Python packages that I don’t like quite as much as comparable packages in other languages, simply for the sake of preserving language purity. For example, I prefer Rails’ ActiveRecord ORM to the much more explicit SQLAlchemy ORM for Python–but I don’t prefer it enough to justify mixing Ruby and Python objects in the same application. So, clearly, there are costs. But they’re pretty small costs, and for me personally, the scales have now clearly tipped in favor of using Python for almost everything. I know many other researchers who’ve had the same experience, and I don’t think it’s entirely unfair to suggest that, at this point, Python has become the de facto language of scientific computing in many domains. If you’re reading this and haven’t had much prior exposure to Python, now’s a great time to come on board!

Postscript: In the period of time between starting this post and finishing it (two sessions spread about two weeks apart), I discovered not one but two new Python-based packages for data visualization: Michael Waskom’s seaborn package–which provides very high-level wrappers for complex plots, with a beautiful ggplot2-like aesthetic–and Continuum Analytics’ bokeh, which looks like a potential game-changer for web-based visualization**. At the rate the Python ecosystem is moving, there’s a non-zero chance that by the time you read this, I’ll be using some new Python package that directly transliterates my thoughts into analytics code.

 

* I’m aware that there are various interfaces between Python, R, etc. that allow you to internally pass objects between these languages. My experience with these has not been overwhelmingly positive, and in any case they still introduce all the overhead of writing extra lines of code and having to deal with multiple languages.

** Yes, you heard right: web-based visualization in Python. Bokeh generates static JavaScript and JSON for you from Python code, so  your users are magically able to interact with your plots on a webpage without you having to write a single line of native JS code.



ghash.io is becoming SHOCKINGLY AGGRESSIVE NOW, closing in 45%

URL:https://bitcointalk.org/index.php?topic=406152.0


descarte
https://blockchain.info/pools

Did btcguild users jump over to ghash?

Or did ghash keep buying machines?

Can't imagine what is going to happen if they keep increasing their hashing power at this rate...

toast

So? It will always keep surviving potential 51% attacks, until one time it doesn't. Also that thread is from 2011, and has no useful information about what the resolution was.

The resolution was more mining on other pools.

The community needs to take SOME preemptive action. Right now everyone is plugging their ears and saying "don't worry this problem will go away on its own".

If you're worried about ghash.io having too much power then buy an ASIC and either solo mine or join a different pool.

There is no such thing as "preemptive action" in a system designed to have no central authorities where 1 hash=1 vote.


If I had a botnet I could DDoS Ghash
If I had 10,000 btc I could subsidize small pools
If I was skilled as fuck I could make a cloud p2pool mining service with better marketing than ghash

Let me guess, "By 'no such thing' I really meant 'no reasonable thing', here's why all those are not feasible"

descarte
"Going away from ghash.io" ? What if ghash (cex.io) owns all the machines? They still mine even if you go away. The big jump in hashing power might be a massive purchase from somewhere. unless someone can confirm this is false.

"A profit-seeking person will always gain more by just following the rules". What if its not for profit reason? How can we guarantee that they are sane?

Despite all the arguments about not worrying about a 51% attack, think about all the time and money you have invested in this: can you seriously sleep in peace knowing that ghash has 51% of the power?

Do you seriously think the public doesn't care what ghash can do with 51% in their hands?

What if the loss of confidence in bitcoin is not temporary?

"if this attack is successfully executed, it will be difficult or impossible to "untangle" the mess created - any changes the attacker makes might become permanent." - Are you really taking these words lightly?


Teen Reported to Police After Finding Security Hole in Website | Threat Level | Wired.com

URL:http://www.wired.com/threatlevel/2014/01/teen-reported-security-hole/


A teenager in Australia who thought he was doing a good deed by reporting a security vulnerability in a government website was reported to the police.

Joshua Rogers, a 16-year-old in the state of Victoria, found a basic security hole that allowed him to access a database containing sensitive information for about 600,000 public transport users who made purchases through the Metlink web site run by the Transport Department. It was the primary site for information about train, tram and bus timetables. The database contained the full names, addresses, home and mobile phone numbers, email addresses, dates of birth, and a nine-digit extract of credit card numbers used at the site, according to The Age newspaper in Melbourne.

Rogers says he contacted the site after Christmas to report the vulnerability but never got a response. After waiting two weeks, he contacted the newspaper to report the problem. When The Age called the Transportation Department for comment, it reported Rogers to the police.

“It’s truly disappointing that a government agency has developed a website which has these sorts of flaws,” Phil Kernick, of cyber security consultancy CQR, told the paper. “So if this kid found it, he was probably not the first one. Someone else was probably able to find it too, which means that this information may already be out there.”

The paper doesn’t say how Rogers accessed the database, but says he used a common vulnerability that exists in many web sites. It’s likely he used a SQL injection vulnerability, one of the most common ways to breach web sites and gain access to backend databases.
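
For readers unfamiliar with the bug class, here is a schematic sketch in Go of the difference between an injectable query and a parameterized one. This is purely illustrative; the article doesn't show Metlink's actual code, and the table and column names are invented:

package sketch

import "database/sql"

// findUser contrasts an injectable query with a parameterized one.
func findUser(db *sql.DB, email string) (*sql.Rows, error) {
    // UNSAFE: splicing user input into the SQL string means an input like
    //   ' OR '1'='1
    // rewrites the query and can dump the entire table:
    //   q := "SELECT name FROM users WHERE email = '" + email + "'"

    // SAFE: a parameterized query keeps the input as data, never as SQL.
    return db.Query("SELECT name FROM users WHERE email = ?", email)
}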

The practice of punishing security researchers instead of thanking them for uncovering vulnerabilities is a tradition that has persisted for decades, despite extensive education about the important role such researchers play in securing systems.

The Age doesn’t say whether the police took any action against Rogers. But in 2011, Patrick Webster suffered a similar consequence after reporting a website vulnerability to First State Super, an Australian investment firm that managed his pension fund. The flaw allowed any account holder to access the online statements of other customers, thus exposing some 770,000 pension accounts — including those of police officers and politicians. Webster didn’t stop at simply uncovering the vulnerability, however. He wrote a script to download about 500 account statements to prove to First State that its account holders were at risk. First State responded by reporting him to police and demanding access to his computer to make sure he’d deleted all of the statements he had downloaded.

In the U.S., hacker Andrew Auernheimer, aka “weev”, is serving a three-and-a-half-year sentence for identity theft and hacking after he and a friend discovered a hole in AT&T’s website that allowed anyone to obtain the email addresses and ICC-IDs of iPad users. The ICC-ID is a unique identifier that’s used to authenticate the SIM card in a customer’s iPad to AT&T’s network.

Auernheimer and his friend discovered that the site would leak email addresses to anyone who provided it with an ICC-ID. So the two wrote a script to mimic the behavior of numerous iPads contacting the web site in order to harvest the email addresses of about 120,000 iPad users. They were charged with hacking and identity theft after reporting the information to a journalist at Gawker. Auernheimer is currently appealing his conviction.

Google+ Invite Lands Man In Jail | Fast Company | Business + Innovation

URL:http://www.fastcompany.com/3024452/google-invite-lands-man-in-jail


For some, Google+ notifications are nothing more than an annoyance--a pointless disturbance from what many see as a social network "ghost town." But for Thomas Gagnon, an alert apparently coming from his Google+ account was enough to land him in police custody.

Last month, according to a report, prosecutors said Gagnon's former girlfriend received an invitation to join one of his Google+ Circles. She'd recently broken up with Gagnon and had obtained a restraining order against him soon afterward. Upon discovering the unwelcome Google+ invite from her ex-beau online, she went down to the local police station with a print-out of the invitation. Roughly 90 minutes later, police arrested Gagnon for his Google+ activity; he was later charged with violating the restraining order barring contact with her.

The only wrinkle? Gagnon's attorney claims his client never sent the request, arguing that he "has no idea how the woman ... got such an invitation" and "suggesting that it might have been sent by a robot," The Salem News reports. It sounds like an almost comical mishap fit for a soap opera, but the interaction is a common one on Google+, where it's often unclear how or when users are actually on the service--and whether they actually count as "users" to begin with.

To boost engagement on the network, Google began leveraging its more popular properties to force (if not surreptitiously slip) Google+ into our daily routines. In November, for example, YouTube, which is owned by Google, implemented a new commenting system that required a Google+ account in order to contribute to the site's discussions. Google+ Circles, which enable users to classify and manage friends in specific groups (coworkers, roommates), has become more and more embedded in Gmail's contacts feature. And even Google Glass auto-uploads all photos taken on the wearable computer to a private Google+ folder.

But it's not just product integration that is at issue--the company is using its other services to arbitrarily increase the user base of Google+. As The Information recently reported, "The Google+ stream is broadly defined. In the past, statistics about active users in the stream included anytime a person clicked on the red Google+ notifications in the top right corner of their screen while they were using Web search, Gmail, or other Google Web services. The person didn’t actually have to visit [the Google+ homepage] plus.google.com to be counted as 'active.'"

That policy has led to confusion over who is actually a member of Google+. Some users have even complained that Google is mining Gmail contacts to send out Google+ notifications. For example, when users register for Gmail, they're automatically welcomed to Google+, too. And by default, when someone joins Google+ and that person is in your Gmail contacts, Google will automatically send you a notification, along with an invitation suggesting that you "add him [or her] to your Circles to stay connected." The same occurs if someone adds you to a Google+ Circle. (Users have the option of adjusting these settings.)

Google has had a history of privacy hiccups. In 2011, it settled with the FTC over charges that the company used deceptive tactics for its rollout of its failed Buzz social network, which automatically enrolled some users into the service through Gmail, even if they weren't interested in joining.

Perhaps something similar happened to Gagnon--an automated Google+ invite accidentally triggered through his Gmail contacts. Perhaps he simply added or moved his ex-girlfriend to, say, an "Old Acquaintances" Circle, which, unbeknownst to him, caused Google to automatically send a notification to her suggesting that she add him to a Circle too.

Or, of course, perhaps he actually did violate the terms of his restraining order. (Multiple requests to Gagnon for comment were not immediately returned; his attorney was also unreachable. A representative for Google declined to comment on the record.)

Either way, Gagnon's experience, while an extreme example, demonstrates the potential consequences of the lack of transparency surrounding Google+. Neil Hourihan, Gagnon’s lawyer, told a Massachusetts judge that the charges were “absolutely unfounded.”

"[He] suggested that unlike Facebook, which requires users to select potential friends, he believes Google+ generates invitations for 'anyone you’ve ever contacted,'" The Salem Newsreported. "A Salem District Court judge admitted he wasn’t sure exactly how such invitations work on Google’s social media site."

Still, the judge set bail at $500. The case is set to begin in early February.


ongoing by Tim Bray · Software in 2014

URL:https://www.tbray.org/ongoing/When/201x/2014/01/01/Software-in-2014


We’re at an inflection point in the practice of constructing software. Our tools are good, our server developers are happy, but when it comes to building client-side software, we really don’t know where we’re going or how to get there.

Happy times upstream· The art and science of building server-side code is just fine, thank you; the technology’s breadth and polish have been ramping for years and still are.

More or less everything is expected to talk HTTP, and it’s really easy to make things talk HTTP.
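
He’s not exaggerating. In Go, the language he says below he’ll use for his next server-side project, a complete HTTP service is a handful of lines. A minimal sketch (the port and greeting are arbitrary):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Register a handler and serve; that's the whole ceremony.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello, HTTP")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}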

More or less everything is built with an MVC-or-equivalent level of abstraction, and there are good frameworks to help us work sanely and cleanly. It’s a pity some people still build important apps in PHP and Spring, but those aren’t choices anyone is forcing them to make.

We still have angst over dynamic and static typing, but I think the trade-offs are reasonably well understood, and we have really good programming languages in each flavor. I use both, and in some cases the choice is obvious; check out the Bánffy-Bray criteria.

Concurrency· Functional Programming is getting a foothold on the mainstream, because if you care about performance you care about concurrency, and ordinary humans can’t do concurrency at scale (or really at all) if they’re sharing mutable objects.

A lot of people love Erlang but not that many are using it in production even though it gets concurrency and failover profoundly right, because types and classes.

Clojure’s concurrency primitives are functional, efficient, and beautiful; but being a Lisp is a handicap (empirically I mean, even if unlike me you grant the ineffable wonderfulness of Lisp). Scala discards loads of Java ceremony and has a plausible actor model; but also way too much syntax.

NodeJS isn’t really functional, but if everything’s a callback and you’re single threaded, who cares? Anyhow, my biggest gripe with Node is the JS part at the end; more on that later.

Go has made a deep impression on me, even though it doesn’t make me smile, the way C and Java and Ruby and Clojure did successively over the years. My intuition is that its types offer enough object-flavored utility to get by. And my strong intuition is that Goroutines and typed pipes hit a huge 80/20 point, ushering developers smoothly into writing functional code just because it’s easy and straightforward and readable. The next substantial server-side piece of software I build will be in Go.
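
To make the goroutines-and-typed-pipes point concrete, here is a minimal sketch: workers communicate over a typed channel instead of sharing mutable state, which is the smooth on-ramp to functional-style concurrency he’s describing (the worker itself is a stand-in):

package main

import "fmt"

// fetch stands in for any unit of work; results flow back through a typed
// channel, so no mutable state is shared between goroutines.
func fetch(url string, results chan<- string) {
    results <- "fetched " + url
}

func main() {
    urls := []string{"a.example", "b.example", "c.example"}
    results := make(chan string)

    for _, u := range urls {
        go fetch(u, results) // one goroutine per URL
    }

    for range urls {
        fmt.Println(<-results) // collect in completion order
    }
}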

And hey, if none of the above quite gets us where we want to go, we’ve got Rust and Elixir and Dart looming over the horizon; none built by dummies.

Storage, oh yeah· The persistence options are so great now. I’ve been sort of cool to relational data stores for decades, particularly at runtime in performance-critical systems. But they have their place, and there are multiple good open-source ones.

And on the postrelational side things are just fine too. Options range from lightweight memory caches to things that can operate at behemoth scale. Like for example Cassandra; if you’ve heard any of the recent presentations by Adrian Cockcroft about what Netflix does with it, your mind will have boggled appropriately.

On top of which everyone’s internalized that disk is the new tape and is using it sensibly in that way.

On the other hand...

The client-side mess· Things are bad. You have to build everything three times: Web, iOS, Android. We’re talent-starved, this is egregious waste, and it’s really hurting us.

Mobile sucks· I’m going to skip the differences between Android and iOS here because they’re just not that significant in engineering terms. Anyhow, here’s the suckage.

  • First of all, you have to do your mobile development twice.

  • The update cycles are slow. Days in the case of iOS, hours for Android (compared to seconds for browser-based apps). What’s worse is that you can’t even count on people accepting the mobile-app updates you send them. Got a critical data-losing account-compromising privacy-infringing bug? Sucks to be you.

  • The devices are memory-starved, CPU-starved, and battery-starved.

  • There are loads of form factors and it’s getting worse.

  • You don’t get a choice of languages; if you hate both Java and ObjC, get another job.

  • Unit testing is a bitch.

  • Fortunately for your users but unfortunately for you, the UX-quality bar on mobile is high and there is no fast or cheap way through; it requires inspiration and iteration.

  • The right way to use the Internet is to click in the search box at the top of your browser and type in what you want, and find it, and click on it, and use it. But if the information or service or whatever that you’re looking for has been sucked into mobile, you have to install the app, which means another level of search in the mobile-app store, where search isn’t nearly as good as what Google and Bing provide.

  • You can’t make money. Seriously, Apple is always talking about the billions and billions they pay out of the app store, so why is it that I don’t know anyone who’s making serious money on mobile apps?

Of course, the HTML5-rocks crowd is at this point rolling their eyes and pointing out that if everyone just built mobile web apps, then all these downsides (especially the first) would melt away.

Except for...

Browsers suck too· This is an unfashionable opinion, but I can’t see why it’s controversial.

  • JavaScript is horrible.
    > [5, 10, 1].sort();
    [ 1, 10, 5 ]

    Et cetera. (By default, sort() compares elements as strings, so 10 sorts before 5.) Thus CoffeeScript and Dart and other efforts to route around TheElephantInTheRoom.js.

  • The browser APIs suck too. Sufficiently so that jQuery (or equivalent) is regarded as the lowest level that any sane person would program to; in effect, the new Web assembler.

    Thus, for actually building applications, you’re going to have to pick a higher-level framework. There are lots of them and they compete vigorously; it’s easy to poke around the Web and find knockouts and cage matches; one good high-level comparo is Rich JavaScript Applications – the Seven Frameworks (Throne of JS, 2012), but wait, it’s eighteen months old, thus probably now wrong, which is a symptom of the problem. “What problem,” you ask, “choice is good, right?” It is, but this isn’t an orderly choice, it’s a Cambrian Explosion. I’m sure the software archeologists of 2113 will enjoy studying it, but it’s a problem.

    (Oh, and also read Frameworkless JavaScript by Tero Piirainen.)

  • CSS sucks too. I’d explain why, except Dan Cederholm wrote Why Sass? so I don’t need to. Also, check out Luke Page’s Less vs Sass vs Stylus; did I mention “Cambrian explosion”?

  • There’s no app store for your browser-based client with anything like the scale and size and polish of those for mobile apps.

OK, I know that at every big Web-centric conference, bright-eyed enthusiastic young true browser believers show you how HTML5 rocks and they can write apps that use the accelerometer and microphone and (wait for it) are indistinguishable from a mobile app!

Well, then why isn’t everyone just doing that? Hint: See the bullet points above.

When I said “Mobile sucks”, I wasn’t talking about engineering suckage; in fact, Cocoa Touch and the Android app framework are both very decent GUI-building platforms, embodying a lot of history and hard-won lessons. Crucially, for most of the things you’d want to put in a UI, there’s usually a single canonical solid well-debugged way to do it, which is the top result for the appropriate question on both Google and StackOverflow.

But look at all the energy going into browser tech; surely it’s going to catch up with mobile tech any day now? Maybe, except the mobile frameworks are being polished and expanded by elite teams at Apple and Google, including some of the world’s best GUI engineers. So I’m sort of expecting the picture to remain fairly stable, going forward.

Diminishing returns· I’m an old guy, and I remember the first wave of Web apps going through, and a whole generation of Visual Basic and Motif and Java and Win32 clients being swept out with the trash, because people liked dealing with everything through a browser.

Of course, fifteen minutes later, software VIPs started saying how the browser’s interface was too dumb and insufficiently responsive and we’d have to find plan B, and I couldn’t help noticing that every one of those VIPs was trying to sell a proprietary plan B. Now we have Plan B and at least it’s right there in the browser, and standards-based.

But I’m still dubious. Yeah, I like it when the app is responsive to gestures, and objects slide in and fade out; but all that feels like icing on the cake, and I confess to wondering how far past the 80/20 point — a well-designed Web app with most of the logic on the server — the ROI stays positive. And I totally fucking hate having four independently-scrollable areas on the screen controlled by weird-looking JavaScript-genius-handcrafted scrollbars. And then I’ll be working with some fancy single-page app, accidentally hit the tab key and everything goes a little sideways. I hate it worse when a nontechnical friend or relative gets caught in this sort of strange loop and I have to try to explain what’s going on.

What’s next?· On the server side, no drama I think; everything gets smoother and better. These are the good old days.

On the client, I just totally don’t know. Historical periods featuring rococo engineering outbursts of colorful duplicative complexity usually end up converging on something simpler that hits the right 80/20 points. But if that’s what’s coming, it’s not coming from any direction I’m looking, so color me baffled. Maybe we’re stuck with clients-in-triplicate for the long haul.

Seven Habits Study Guide/Quick overview of the seven habits - Wikibooks, open books for an open world

URL:http://en.wikibooks.org/wiki/Seven_Habits_Study_Guide/Quick_overview_of_the_seven_habits


The Seven Habits Quick Sheet

1. Be proactive (take action and be responsible)
2. Begin with an end in mind (consciously plan out and visualize your actions)
3. First things first (set priorities and carry them out)
4. Think win-win (in negotiation, seek solutions that help both yourself and the other person)
5. Seek first to understand, then be understood (in communication, listen actively before you talk)
6. Synergize (in work, open yourself to others to work effectively in teams)
7. Sharpen the saw (relax, rejuvenate, and revitalize yourself)

Private victory, the path to independence

Habit 1: Be proactive

Take action and take responsibility. This is the basis of all further habits and a cornerstone of success. You will influence your life more than anyone else will. You have the opportunity to use your free will and hard work to change yourself and your circumstances. You are only a victim if you allow yourself to be; if you are reactive rather than proactive. The emphasis of this habit is to do whatever is in your power to improve your situation. You are the creator, the actor, and the doer in your life; get started and "just do it". In any situation, the thing you can influence and change the most is your response to it; choose your response and you will find yourself in control. No one can "make" you angry if you decide you don't want to get angry. Don't let life set you up in a bad situation. Have confidence in yourself and believe that you can succeed in anything in life.

In your internal dialogue, replace language such as "I must do X" with "I choose to do X", "I have to" with "I want to", "If only..." with "Let's see about...", and so on. For example: "I choose not to be angry in my work environment"; "I choose to spend only planned expenses in my personal budget"; "I want to be more present and involved in my family's happiness".

Habit 2: Begin with the end in mind

Visualize where you want to go. Before you start something sit down and plan it out. A little planning will usually save you a lot of actual work later. Use your creative forces to create images and plans in your head first, then carry out your plan. The plan is called the first creation; the second creation is formed when you carry out the plan, and its success depends on a well thought out first creation.

It's extremely easy to get caught up in an activity trap, in the busy-ness of life, in the thick of thin things.

Examples of end-in-mind goals:

  • I want to be a family physician.
  • I want to have full registration with the HPCSA.
  • I want to become a useful consultant.
  • I want to be a good husband and a good father.
  • I want to be respected.

Habit 3: Put first things first

Set priorities. Decide which of your roles and goals are most important, then determine what steps will best achieve those goals. Basically it means doing life with your values in hand. It means defining your idea of success in life by the image you would like to leave in the roles that you assume (spouse, grandparent, voter, activist, student, employee, manager). The idea is to have these clearly defined and on a piece of paper.

We need to schedule our priorities. We can use the time management matrix to determine where to spend our time.

There are four quadrants where we spend our time:

1. Important and Urgent
2. Important but not Urgent
3. Urgent but not Important
4. Neither Urgent nor Important

To be effective we need to take care of everything in quadrant 1 and then spend as much of our remaining time as possible in quadrant 2. We need to live in quadrants 1 and 2.

Quadrant 1 activities are the things that are important and urgent: emergencies, deadline-driven projects, crises, some meetings, some phone calls. These are the things we cannot and should not ignore. They demand our immediate attention.

Quadrant 2 activities include: all work in each of the 7 habits, maintenance, recreation, self-care, learning, reading, and relationship building. These are the things we don't do because they're never urgent. They're important, but once we finish dealing with the urgent and important crises, we often don't want to work in quadrant 2. We get distracted by urgent things that are not important (quadrant 3 activities), or we retreat to the gratifying but wasteful activities of quadrant 4 because we feel like we deserve a break, not realizing that by ignoring the important activities of quadrant 2 we are setting ourselves up for more crises in quadrant 1.
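To make the matrix concrete, here is a minimal sketch in Python (the tasks and their urgent/important flags are invented for illustration; this is not from the book) that sorts a to-do list into the four quadrants:

    # A minimal sketch of the time-management matrix.
    # The tasks and their urgent/important flags are invented for illustration.
    from collections import defaultdict

    def quadrant(urgent, important):
        """Map an (urgent, important) pair to a quadrant number."""
        if important and urgent:
            return 1  # crises and deadlines: handle immediately
        if important:
            return 2  # planning, relationships, renewal: schedule deliberately
        if urgent:
            return 3  # interruptions: minimize or delegate
        return 4      # time-wasters: avoid

    tasks = [
        ("Finish report due at 5 pm", True, True),
        ("Weekly exercise", False, True),
        ("Answer a ringing phone", True, False),
        ("Scroll social media", False, False),
    ]

    by_quadrant = defaultdict(list)
    for name, urgent, important in tasks:
        by_quadrant[quadrant(urgent, important)].append(name)

    for q in sorted(by_quadrant):
        print("Quadrant %d: %s" % (q, ", ".join(by_quadrant[q])))

Effectiveness, in these terms, is a matter of scheduling quadrant 2 before it decays into quadrant 1.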

Public victory, the path to interdependence

Habit 4: Think win-win

Many people grow up with a competitive mindset ("I win, you lose"), a beaten-down mindset ("I give up, do what you want to me"), or a mix of these and other mindsets. Each of these has its place. However, for your most valuable family and business interactions, the most mature and effective goal is usually to seek situations which benefit everyone involved.

When you negotiate you should seek to make deals that help everyone. In cases where this is not possible, it is best to have the mindset from the outset that you will walk away from the deal ("win-win or no-deal").

Habit 5: Seek first to understand, then to be understood

To influence and help others, you must first actively listen to them and understand their situation and concerns. For example, imagine a doctor who gives a prescription over the telephone without knowing all the necessary information about the patient and their condition. This could be a serious or even fatal error if the patient takes the wrong medicine. In the same way, when giving someone advice we should be quite careful to understand their circumstances well. Even excellent advice can be useless and wasted if it does not apply to the situation of the person receiving it.

It is most effective to listen actively with empathy, consciously trying to understand and to see the world from the other person's perspective. It is also beneficial to listen without judging. Avoid "hearing" through a filter formed by your own worldview, and do not impose your preconceived ideas on what you hear, because doing so will inhibit your efforts to put yourself in the other person's shoes.

Once you have clearly understood the other person's point of view, it is equally important to be understood by them. You need the courage to respond to what you've heard and to present your own view in a way that takes the other person's words into account.

Habit 6: Synergize

This habit deals with teamwork and opening yourself emotionally to work with other people. Optimistic, emotionally charged individuals who are living out the previous habits can work together in amazing ways and see new paths none of them would have found alone. One plus one is three, or more, when these "third alternatives" appear. This synergy is a bit chaotic but is also fun and stimulating. When you use synergy you are also improving the spiritual, emotional, and social sides of your life.

The Seventh Habit - Renewal

Habit 7: Sharpen the saw

Take time to rejuvenate yourself and prepare yourself to work better in the future. This often means relaxing, enjoying nature, meditating and praying (Stephen Covey is a devout Mormon, as he explains in the book's introduction), reading Scripture or great literature, listening to classical music, and spending time on high-quality relationships.

The purpose of this habit is to regularly exercise the four components which many believe make up the human being: body, mind, heart and spirit.

  • Body: Exercise for a sense of well-being.
  • Mind: Exercise to sharpen the intellectual abilities.
  • Spirit: Exercise with meditations and inner reflections.
  • Heart: Exercise care for important relationships.

The fourth category is not an exercise like the others, but rather a commitment to use habits 4, 5 and 6 in everyday life.

Keep positive enthusiasm.


JPMorgan Pays for Shorting Madoff Without Telling Anyone - Bloomberg

$
0
0

Comments:"JPMorgan Pays for Shorting Madoff Without Telling Anyone - Bloomberg "

URL:http://www.bloomberg.com/news/2014-01-07/jpmorgan-pays-for-shorting-madoff-without-telling-anyone.html


Photographer: Jin Lee/Bloomberg News. That's okay, Bernie Madoff didn't want JPMorgan's investments anyway.

When I started working at an investment bank there was a series of compliance videos that one played in a corner of one's computer while doing other work. Nonetheless, the money-laundering one seems to have stuck with me: I still remember the basic plot, in which someone asks a banker questions about how he ignored red flags in a client's account-opening documents, and then at the end the camera moves back and you see that the banker is in prison. Prison for letting someone else launder money! Really makes you think, I guess, primarily about the realism of the compliance videos.

But maybe not, since JPMorgan is going to prison for missing some red flags while working as Bernie Madoff's primary bank. Hahaha no that's impossible, but it is forfeiting $1.7 billion [update: plus a separate $350 million fine and $543 million in private lawsuit settlements], which is really quite a lot of money, to settle money-laundering and so forth charges relating to the fact that it sort of noticed that Madoff was a Ponzi scheme and then didn't do anything about it.

The documents, including particularly the Statement of Facts that JPMorgan has admitted to, are pretty interesting. One obvious reaction is that it's a bit hard on JPMorgan that they're paying $1.7 billion for not catching Bernie Madoff even though they were his bank. The government regulators, led by the Securities and Exchange Commission, also didn't catch Bernie Madoff, even though they were his regulators and that was literally their job. And while yes JPMorgan ignored some red flags, so did the SEC. Like, the many, many credible letters they got to the effect of "Bernie Madoff is a big ol' Ponzi scheme." Should the SEC be paying an even bigger penalty than $1.7 billion?

Of course, the regulators were missing some evidence, such as the Suspicious Activity Report that JPMorgan should have filed with the Financial Crimes Enforcement Network (FinCEN) but didn't. But hold on wait what:

In or about 1996, personnel from Madoff Bank 2 investigated the round-trip transactions between Madoff and the Private Bank Client. As a result of that investigation, which included meeting with representatives of Madoff Securities, Madoff Bank 2 concluded that there was no legitimate business purpose for these transactions, which appeared to be a "check kiting" scheme, and terminated its banking relationship with Madoff Securities. ... In addition, although unknown to JPMC at the time, Madoff Bank 2 filed a SAR in or about 1996 identifying both Madoff Securities and the Private Bank Client as being involved in suspicious transactions at Madoff Bank 2 and JPMC "for which there was no apparent business purpose."

That's paragraph 25 of the Statement of Facts (emphasis added). The details are a little boring, but the point is that in the mid-'90s FinCEN got a Suspicious Activity Report detailing some of the naughty business that Madoff was doing at JPMorgan -- and that JPMorgan is now accused of covering up -- and, y'know, nothing happened. "If only JPMorgan had filed a Suspicious Activity Report, we woulda caught Madoff!" say the regulators, extremely counterfactually.

Not, though, to excuse JPMorgan too much. This is a story full of terrible ineptitude. It starts with the relationship banker who oversaw what the Feds call the "703 Account," which was "the bank account that received and remitted, through a linked disbursement account, the overwhelming majority of funds that Madoff's victims 'invested' with Madoff Securities," and which regularly contained multiple billions of dollars. And here is what the JPMorgan banker responsible for it thought it was (paragraph 20 of the Statement of Facts):

Madoff Banker 1 believed that the 703 Account was primarily a Madoff Securities broker-dealer operating account, used to pay for rent and other routine expenses. Madoff Banker 1 also believed that the average balance in Madoff Securities' demand deposit account was "probably [in the] tens of millions." He did not understand that the 703 Account was, in fact, the account used by Madoff's investment advisory business, and achieved balances of well more than $1 billion beginning in approximately 2005, and up to approximately $5.6 billion by 2008.

Oops! I like this guy. I feel like there are a lot of bankers who want to overstate the value of their client relationships. Madoff Banker 1 is like the one banker on earth who underestimated his client's business by a factor of 100 or so. "Boss, I've made the firm thousands of dollars this year," he probably said, "and I deserve a bonus of at least $200."

But some bits of JPMorgan come out of this settlement looking pretty good. Like the part that shorted Madoff! Did you know that you could short Madoff? Well, you couldn't, but JPMorgan could. The people at JPMorgan who ferreted out -- or, at least, assumed out -- Madoff's fraud were mainly working on or with the equity exotics desk, which was selling structured notes linked to the returns on Madoff feeder funds.

Oversimplifying slightly, equity exotics worked as a Madoff feeder fund itself: It sold clients notes linked to Madoff's performance, and hedged those notes by investing its own money into Madoff feeder funds. It did this in $105 million-ish size in June 2007, and then sought permission to increase its exposure because lots more clients wanted to invest in Madoff-linked notes. That triggered JPMorgan's internal investigation, in which smart people concluded things like "the main risk this trade poses is systemic fraud risk at the BLM [i.e., Bernard L. Madoff] level," and in which the head of due diligence joked that they should visit Madoff's accountant's office "to make sure it was not a 'car wash.'" Still, while JPMorgan was investigating, its Madoff exposure got as high as $379 million, but ultimately dropped to $81 million after it concluded the investigation and decided to walk away slowly with its hands in its pockets while whistling a jaunty tune.

Meanwhile, JPMorgan was unwinding its client-facing structured note trades, but it put somewhat less urgency on that project than it did on getting its own money out of Madoff. Paragraph 68:

Although JPMC sharply reduced its hedge position in Madoff feeder funds, it was exposed to substantial risk in the event that Madoff Securities continued to perform successfully because it had not been able to unwind or otherwise cancel an equivalent value of JPMC-issued notes linked to the performance of the Madoff feeder funds.

Hahaha that's "they were short millions of dollars of Madoff." In hindsight the right trade was just to write lots more structured notes on Madoff, sell them to clients, not hedge them, and wait for the whole thing to collapse and your obligations to go to zero. I guess that would be too cute though, and JPMorgan does seem to have made semi-heroic efforts to unwind the trades and save its equity exotics clients.
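To see why the books netted out to a short, here's a toy calculation (the $81 million hedge figure comes from the article above; the notes-outstanding number is invented for illustration, since no clean figure is given):

    # Toy illustration of the net Madoff position; the notes figure is invented.
    feeder_fund_hedge = 81e6        # asset: the bank's own money still in Madoff feeder funds
    notes_sold_to_clients = 300e6   # liability: notes owing clients Madoff-linked returns (hypothetical)

    # Mark-to-market exposure to Madoff's continued "success":
    net_exposure = feeder_fund_hedge - notes_sold_to_clients
    print("Net exposure: $%.0fM (negative = short Madoff)" % (net_exposure / 1e6))

    # If Madoff collapses, the small asset goes to zero (a loss), but so does
    # the larger Madoff-linked liability (a bigger gain):
    pnl_on_collapse = notes_sold_to_clients - feeder_fund_hedge
    print("P&L if Madoff turns out to be a fraud: +$%.0fM" % (pnl_on_collapse / 1e6))

With a bigger book of un-unwound client notes than hedge, the bank loses if Madoff keeps "performing" and profits if he blows up; hence the (accidental) short.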

But it left Madoff's other clients to be hosed: While the equity exotics guys in England ultimately reported their suspicions to the U.K.'s Serious Organized Crime Agency, no one at JPMorgan ever filed a Suspicious Activity Report with the U.S. FinCEN. Nor does anyone from the exotics side seem to have gotten in touch with hapless Madoff Banker 1, who thought he was overseeing a smallish miscellaneous-expenses account rather than the main instrument of Madoff's Ponzi scheme.

Part of that is understandable? I mean, the part where they filed a suspicious activity report in the U.K. and not the U.S. is just plain dumb, I have no excuse for that, it's just a lazy failure of communication. But the lack of coordination between the equity exotics traders and the custodian bankers makes a bit more sense, even if the lack of coordination between their respective compliance supervisors is harder to understand. In October 2008, an equity exotics analyst wrote a memo summarizing his due diligence, and his suspicions, about Madoff, concluding that "we seem to be relying on Madoff's integrity ... and the quality of the due diligence work (initial and ongoing) done by the custodians." The custodians were at JPMorgan!

But he never called them. Because that's not how it's done. Equity exotics sat in London and was a public-side trading desk. Calling up Madoff's custodial banker at JPMorgan would be unsporting, and probably somehow illegal. You can't just use your position as Madoff's custodial banker to help your business trading derivatives on Madoff. The nice thing here would have been to use the insights developed by the Madoff-derivative traders to help out the custodial bankers, but I guess the arrow doesn't run that way either.

What should you conclude about JPMorgan here? "JPMorgan is too big to manage" is sort of a lame thing to think and I don't think that the failures of coordination really get to that. Equity exotics actually did a good job staying away from Madoff (and shorting him). Custody banking did a garbage job of shutting him down, but that's probably because any custody bank would have done a garbage job of shutting him down. The failure was not that JPMorgan was dumb, but that its smarts in one business didn't end up helping where they needed to.

JPMorgan is not so much a single firm moving through the world with an integrated business strategy as it is a giant bundle of loosely sorted independent bits that together provide every conceivable financial service. The benefits are mainly in cross-selling opportunities: You never know when your checking account clients might need some equity exotics, so you can make some extra money (and keep the client happy with a one-stop shopping experience) by just offering everything.

If you think of JPMorgan's businesses as operating more or less independently, but occasionally making each other money by cross-selling, then this mess makes more sense. A London investment bank that considered and rejected a derivative-linked investment in Madoff would have no obligations to report its suspicions to U.S. regulators. A boring custody bank that ran Madoff's checking accounts but had no derivatives traders to get suspicious about him also probably wouldn't be in trouble for missing the Madoff red flags. Combine the two businesses and the same behavior gets you in trouble. In that sense, JPMorgan's $1.7 billion forfeiture here looks a bit like a tax on bigness and integration: You can grow huge, offer a loosely integrated set of every conceivable financial product, and bask in the cross-selling opportunities, but every now and then it'll cost you a couple of billion dollars. So far that trade-off still seems to be worth it for JPMorgan.

Though also an arbitrary one. Here, from the forfeiture complaint:

The $1.7 billion that JPMC has agreed to forfeit to the United States pursuant to the Deferred Prosecution Agreement represents a portion of the funds leaving the Madoff Securities accounts at JPMC from October 29, 2008 (i.e., the date of JPMC's report to SOCA) until Madoff's arrest on December 11, 2008 ...

So at the time JPMorgan filed a report with the U.K.'s Serious Organized Crime Agency saying that it thought Madoff was a fraud, there were several billion dollars in Madoff's JPMorgan bank account. (Not totally clear how much, but there was $5.6 billion in August 2008 and $234 million in December 2008.) That money vanished on JPMorgan's watch. And now JPMorgan is paying back "a portion" of it.

I guess Harry Markopolos is the obvious citation here. And of course there were those at JPMorgan -- like its global head of equities -- who didn't believe that Madoff could be a Ponzi because he was "regulated by the SEC, NYSE, NASD etc." I guess everyone figured catching Madoff was someone else's job.

Madoff and a big private banking client at JPMorgan (actually Chemical Bank, a predecessor) got themselves into a little check-kiting scheme, where Madoff would write checks (on his account at "Madoff Bank 2," i.e. not JPMorgan) to the private bank client, then transfer cash from his JPMorgan account to that account, and then the private client would transfer cash from his account to Madoff's JPMorgan account, with the purpose being basically to give Madoff the interest on the float for some reason.

Also a little fun is that the reason JPMorgan couldn't do due diligence directly on Madoff was that Madoff "did not approve of the Madoff-linked derivative products and would not allow JPMC to conduct due diligence on his fund directly." Even Madoff didn't like Madoff-linked derivatives. In hindsight Madoff-linked derivatives sound terrible.

Also, in other, dumber, news, I liked that one equity exotics employee who was tasked with doing due diligence on Madoff couldn't find any way to replicate Madoff's investing results, but "e-mailed a colleague that he did 'take comfort from the fact' that two separate Madoff feeder funds were reporting close to the same returns for the period." Consistently reporting your own fraudulent results seems like a pretty minimal standard for due diligence.

By the way there's great stuff here on JPMorgan's efforts to unwind those notes. Check out paragraph 65 of the Statement of Facts, in which "JPMC sought, with the assistance of legal counsel, to cancel or otherwise unwind certain of the structured products" by "invoking a provision of the derivatives contract that enabled it to de-link the notes from the performance of the Madoff feeder funds if JPMC could not obtain satisfactory information about its investment." That's some amazing one-way optionality for JPMorgan there! "We'll sell you a thing that gives you upside in Madoff's funds, unless we're too confused, in which case, never mind." Clients were understandably (though wrongly!) annoyed, including one distributor of the structured notes who (in paragraph 61) "expressed displeasure about JPMC's proposed action and referenced having 'Colombian friends who cause havoc . . . when they get angry. . . .'" That threat caused JPMorgan to file another, more salacious, suspicious activity report with the U.K. Serious Organized Crime Agency.


Untitled

$
0
0

Comments:"Untitled"

URL:http://www.scribd.com/vacuum?url=http://www.publicaffairs.ubc.ca/wp-content/uploads/2011/05/Happy-Guys...in-pres-Emotion.pdf


 

relationships (e.g., Buss, 2008). Thus, women may find male pride displays more attractive than male happiness, given that a high-status man is likely to be a better provider than a friendly and approachable man. In contrast, men may show the reverse preferences in female expressers, given that a friendly and approachable woman may seem more sexually interested or receptive than a high-status woman. This prediction is also consistent with sociocultural gender norms which, in many cultures, require that women appear submissive and vulnerable, and men dominant and confident (Cicone & Ruble, 1978; Rainville & Gallagher, 1990). Individuals whose behavior and appearance is consistent with these gender norms tend to be considered most attractive (Brown, Cash, & Noles, 1986; O’Doherty et al., 2003), so a proud man and happy woman may be valued for reasons of gender-norm consistency, as well as for their potentially high mate value. Indeed, perhaps because women are known to smile (the key behavioral component of the happy display) more frequently than men (LaFrance, Hecht, & Paluck, 2003), happy displays have been associated with femininity (Becker et al., 2007).

In contrast to these generally positive emotions, shame’s low-status message may reduce attractiveness, at least in male expressers. Women who display shame might benefit from the gender-norm consistent message of low status or submissiveness, but, when sent by men, this message would be both gender-norm-inconsistent and indicative of low mate value. On the other hand, given that the shame display functions as both a low-status message and an appeasement mechanism generating forgiveness for one’s transgressions, the expression’s impact on male attractiveness may not be entirely negative. The shame expression is thought to be recognized and displayed, despite its potentially harmful (to the expresser) message of low status, because it protects a transgressing expresser from overly negative social appraisals via its appeasement effect (Keltner, Young, & Buswell, 1997). Indeed, Gilbert (2007) and Fessler (2007) have argued that the shame expression has been co-opted from its ancient role as a submission display to now function as a signal of trustworthiness and willingness to cooperate. Although, on the one hand, it may seem odd that both pride and shame could increase attractiveness, on the other hand, if shame functioned only to signal failure, it would be maladaptive for the sender and thus unlikely to have evolved. Rather, shame displays may communicate an individual’s commitment to his or her social group and its norms and beliefs, a message that could promote attractiveness in both genders. Furthermore, if shame communicates high group-commitment while also informing of a social trespass, it could elicit sympathy or nurturance—traits previously found to increase attractiveness (Cunningham, Barbee, & Pike, 1990). In sum, it is somewhat unclear precisely how shame might affect attractiveness, and whether its effect varies by gender. Competing hypotheses exist and no previous studies have examined this issue.

Previous studies have, however, provided evidence relevant to the impact of happy and proud displays on attractiveness. The appearance of dominance, which is communicated by pride, has been shown to increase male attractiveness in several American samples (e.g., Cunningham et al., 1990; Sadalla, Kenrick, & Vershure, 1987; Reis et al., 1982), and in nonhuman primates (e.g., Struhsaker, 1967). In one of the only studies to directly examine the attractiveness of several distinct female expressions, happiness was found to increase women’s attractiveness (Mueser, Grau, Sussman, & Rosen, 1984). In several other studies examining the social impact of smiling, these expressions increased the attractiveness of female targets but had no effect on males (Penton-Voak & Chang, 2008; Schulman & Hoskins, 1986); another study found that the presence versus absence of a smile had no effect on male attractiveness, but the broadness of a man’s smile was a positive predictor (Cunningham et al., 1990).[2] In one study that examined the attractiveness of male and female happy faces, there was no overall cross-gender effect (O’Doherty et al., 2003).

Thus, given limited previous research and somewhat equivocal findings on the impact of happy displays, we examined the relative attractiveness of happy, pride, and shame expressions, as well as neutral, in several large samples of younger and older adults. In Study 1, we compared attractiveness judgments made for one male and one female target individual, each showing all four expressions. In Study 2, three samples of participants, varying in age, viewed 240 different male and female targets, with different targets showing each expression.

Study 1

Method

Participants and procedure. In this study, 184 Canadian undergraduates (50% female; age 17–49 years, median = 21; 52% Asian, 48% Caucasian)[3] were approached by an experimenter of the same gender and asked to view one 8 × 10 laminated photo of an opposite-sex target posing an expression of happiness, pride, shame, or neutral. By asking participants to view and judge only one image, we ensured that effects would not be influenced by any tendency to make comparisons among different targets or emotions. Given our interest in studying the effects of emotion expressions on sexual attraction, all participants viewed and provided judgments for an opposite-sex target only, and nonheterosexual participants (i.e., those who rated themselves 3 or above on a 1–7 scale where 1 = exclusively heterosexual, 4 = bisexual, and 7 = exclusively homosexual) were removed from analyses. While viewing the image, participants responded to the question: "How sexually attractive do you find this person?" using a 9-point scale ranging from 1 (not very) to 9 (extremely).

Materials. Photos were taken from the University of California Davis Set of Emotion Expressions (Tracy, Robins, & Schriber, 2009), a Facial Action Coding Scheme (FACS; Ekman & Friesen, 1978)-verified set of expressions. The photos featured one Caucasian male and one Caucasian female target from the waist up (see Figure 1). All emotion expressions featured in these photos have been shown to be reliably recognized significantly better than chance (Tracy et al., 2009), and to convey the behaviors found to be associated with each of these expressions, and no other behaviors. More specifically, as is shown in Figure 1, the pride photos

[2] Although smiling (i.e., raised lip corners, or activation of the zygomaticus major) is only one component of the prototypical, cross-culturally recognized happy display, it is the most essential component. The only other component, raised cheeks (activation of the orbicularis oculi), is present only sometimes; smiles not accompanied by raised cheeks are still reliably identified as happiness, despite being less reliably associated with the experience of happiness (Ekman, 1992).

[3] Ethnicity did not moderate any results in Study 1.




The Limits of Neuroplasticity « Science-Based Medicine

$
0
0

Comments:"The Limits of Neuroplasticity « Science-Based Medicine"

URL:http://www.sciencebasedmedicine.org/the-limits-of-neuroplasticity/


Posted by Steven Novella on January 8, 2014

I am daily annoyed by overhyped headlines reporting medical and other science news. I think news outlets and the public would be better served if they fired all their headline writers and let the authors and editors craft headlines that actually reflect the story. Of course, often the story is overhyped as well, so this would not be a panacea to annoying science reporting.

Take this headline from The Week (please): “This pill could give your brain the learning powers of a 7-year-old“. The article discusses a recent study (full article here) looking at the effects of a drug, valproic acid, on the ability of young adult male subjects to learn pitch. It might be a good exercise for regular SBM readers to take a look at the full article now and analyze the strengths and weaknesses of the study.

The study found that those subjects taking valproic acid, which is a drug used to treat seizures, migraines, and mood disorders, did slightly better overall in learning to identify the pitch of various tones. The main limitation of the study is that it is very small – 24 participants enrolled, 18 completed. Further, they did not establish a good baseline performance, as the subjects were practicing as they went along.

There are other limitations, but these are enough to classify this study as preliminary. It's an exploratory study that should only be used to guide the design of later studies. In my opinion, it should not be reported to the public at all – or if it is, it should be made abundantly clear that these results are so preliminary we cannot conclude anything from them.
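To put numbers on "very small", here is a quick power sketch (my own back-of-the-envelope, not from the paper; it assumes a simple two-group comparison with roughly nine completers per arm and a conventional medium effect size):

    # Rough power check for an n-of-18, two-group study; assumptions are mine, not the paper's.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power to detect a medium effect (Cohen's d = 0.5) with ~9 subjects per group:
    power = analysis.power(effect_size=0.5, nobs1=9, alpha=0.05)
    print("Power with 9 per group: %.2f" % power)  # roughly 0.17

    # Sample size needed per group for the conventional 80% power:
    n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print("Needed per group for 80%% power: %.0f" % n_needed)  # roughly 64

A study this size can only reliably detect enormous effects, which is exactly why a small positive result should be treated as a hypothesis to test, not a finding.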

If the effect is real, however, what could it mean? It does not mean valproic acid gives you the learning ability of a 7-year-old. Perfect pitch is being used as a research paradigm to look at learning windows: periods during which people have a much greater ability to learn some skill (language, or perfect pitch, for example), after which the window closes and learning that skill becomes much more difficult. Researchers are trying to figure out the neurological mechanism of such windows.

If valproic acid has some effect on such learning, it might provide a clue as to the underlying mechanism. The authors of this study conclude:

In sum, our study is the first to show a change in AP with any kind of drug treatment. The finding that VPA can restore plasticity in a fundamental perceptual system in adulthood provides compelling evidence that one of the modes of action for VPA in psychiatric treatment may be to facilitate reorganization and rewiring of otherwise firmly established pathways in the brain and its epigenome

I think they are overstating their conclusions. I would have thrown in at least a “possibly” in there, given the extremely preliminary nature of their results. But again, assuming the results hold up, it is reasonable to speculate about mechanisms.

I would also note that valproic acid is a serious drug with serious side effects, including sedation, weight gain, and foggy mentation. As an anti-seizure drug it also carries the possibility of withdrawal seizures if stopped suddenly. I doubt it would have a net cognitive benefit if taken regularly, which is another reason why I am suspicious of the findings of this study.

This is exactly why I am concerned about the hyped reporting of such preliminary studies to the general public. If this were a supplement rather than a prescribed drug, you bet you would be hearing about it on Dr. Oz, and products would be popping up on health food store shelves.

A Dolphin’s Tale: The Story of GameCube — Dromble

$
0
0

Comments:" A Dolphin’s Tale: The Story of GameCube — Dromble"

URL:http://www.dromble.com/2014/01/07/dolphin-tale-story-of-gamecube/


Inside the Peacock Room in San Francisco’s Mark Hopkins hotel, Silicon Graphics chairman Jim Clark announces an agreement to create the technology for Nintendo’s next gen console, the Ultra 64. An animated 3-D image of Mario is projected onto a movie screen behind Clark. “Jeeeemy, I may be a big star, but I don’t let it go to my head,” says the CGI animated Mario. Jim Clark turns his head to the animated Mario and responds, “Mario, I’d like to be the first to welcome you to your new home at Silicon Graphics. I think you’re really going to have a nice, happy time here.” At one point, Silicon Graphics was considered a candidate to become the next Apple, and received praise for the state-of-the-art technology used on films such as “Jurassic Park” and “Terminator 2”.

Edward McCracken, the president and CEO of Silicon Graphics, releases a statement: “By pooling the best and brightest talent from both our companies, [Nintendo Ultra 64] will propel Silicon Graphics’ leading digital media technologies into homes everywhere. Nintendo’s financial and technical investment combined with Silicon Graphics’ engineering resources will enable our two companies to continue leading the visual computing and home entertainment industries in the ’90s.”

Unfortunately, those happy times at Silicon Graphics were about to come to an end. One year after Nintendo signed the Project Reality deal with Silicon Graphics, drama was gradually escalating behind the scenes, causing analysts to question whether Jim Clark was slowly losing power over the company. Multiple reports detail a power struggle within the company, with constant fighting back and forth between Jim Clark (chairman) and Ed McCracken (chief executive).

“Fucking Ed McCracken,” shouted Jim Clark. “That fucking Ed McCracken.” Clark would call him “Fucking Ed McCracken” so many times that employees at Silicon Graphics started thinking that was his real name. That is according to the book “The New, New Thing” by author Michael Lewis. Clark would say, “he [McCracken] may have helped to stabilize the company, but now he’s destroying it. He can’t see what’s happening.”

Jim Clark knew personal computers would one day perform everything that a Silicon Graphics workstation could do. Clark could see trouble brewing from a mile away with Microsoft quickly dominating the market for personal computers. “You could see a time when the PC would be able to do the sort of graphics that SGI machines did,” says Clark. “And SGI would be toast. Eventually, Microsoft would take over its business.”

He believed that the only way SGI’s business would survive is if they created more mass market products with their technology. These beliefs are what led Silicon Graphics to create a chipset for the Nintendo 64. Basing their business around mass market products was a vision that McCracken and venture capitalists like Glenn Mueller didn’t share. With Ed McCracken and Glenn Mueller running the show, Clark was quickly losing power over the company. Clark had sold a 40 percent stake in Silicon Graphics to Glenn Mueller and kept a 15 percent stake for himself. As time passed, Clark regretted the decision, and he started to believe Mueller had cheated him and his engineers out of a significant amount of money.

“I was saying, ‘Goddamn, we’re out of our minds,’” said Jim Clark during an interview. “I was so worried about the PC. I was adamant that we had to build a low-end product, and that it had to be something that sold for under five grand.”

The book “The New, New Thing” talks about how Clark would use meetings to insult Ed McCracken’s character and lack of foresight. McCracken would break out into tears as Clark described everything that was wrong with him. Author Michael Lewis says, “One time, Clark replaced the nameplate on McCracken’s door with one that said ED MCMUFFIN (the joke grew out of the inability of Clark’s teenage daughter to recall McCracken’s name). For four days, no one noticed. Clark would later haunt McCracken at board meetings with references to this incident, heaping insult upon humiliation.”

In 1994, one year after Nintendo signed its deal with Silicon Graphics, Jim Clark resigned from the company. His resignation raised many eyebrows at Nintendo of America, and it would be the first of many blows to the relationship between Nintendo and Silicon Graphics. Glenn Mueller, the man Clark felt had cheated him and his engineers, caught wind that Clark was starting up a new company called “Netscape”. Mueller had made some bad investments, and he begged Clark to let him be part of this new company. Clark rejected him multiple times, and on the day that Netscape was officially founded, Mueller shot himself in the head.

The same year Clark left, Silicon Graphics released a video to show off the capabilities of their Nintendo Ultra 64 technology. Thomas A. Jermoluk, the president of Silicon Graphics, is the man you first see in this video. You may also remember Jermoluk as the person who revealed the Nintendo 64 at the E3 press conference with Howard Lincoln and Peter Main. Unfortunately, Silicon Graphics, the company behind the cutting-edge technology of the Nintendo 64, had transformed into a frat house by 1997. According to a BusinessWeek article, insiders describe a July ’96 European sales conference where managers and executives were getting drunk and carousing after hours. President/COO Thomas A. Jermoluk (and several colleagues) were seen mooning other SGI employees at one of the company’s annual lip-synch contests. Another incident, in Hawaii, involved Jermoluk drinking and throwing up at a poolside sales meeting.

Jermoluk says, “Am I guilty of getting drunk a few times? Sure. Probably inappropriately at times? Yeah. Did I party hard? Was I leading a wild life? No. I was working too hard, man.” A former executive told BusinessWeek that employees weren’t used to seeing the president get drunk.

On August 1st, 1996, Thomas Jermoluk resigned as president from Silicon Graphics, two months before the Nintendo 64 would launch in North America.  With Clark and Jermoluk both gone, most of the people that Nintendo of America had trusted at Silicon Graphics were no longer there. Edward McCracken would assume Jermoluk’s duties when the company was still going through major transitions.

As Silicon Graphics accumulated losses, the company’s employees and board members remained optimistic that CEO Edward R. McCracken would eventually turn things around. But after McCracken’s marriage fell apart, he established a public relationship with a younger SGI employee. Sources told BusinessWeek that McCracken was constantly walking around with stars in his eyes while SGI employees were in a funk. BusinessWeek says there were times when nobody bothered to coordinate product introduction schedules across SGI’s divisions. Other incidents included SGI computers shutting down without warning and engineers discovering flaws in the microprocessors that would cost the company millions of dollars. Employees began losing confidence in McCracken’s ability to lead the company, and many of their most talented engineers left to pursue better opportunities.

In October 1997, Ed McCracken resigned from Silicon Graphics, and between 700 and 1,000 employees (roughly 9 percent of an 11,000-person workforce) were laid off.

The news of Ed McCracken stepping down caused Nintendo to rethink their relationship with Silicon Graphics and begin searching for a new partner. Silicon Graphics was struggling to retain a contract with Nintendo for their next gen console. “Nintendo has been looking to replace MIPS as its chief source of microprocessors for game units since last year, when former SGI CEO Ed McCracken stepped down. Nintendo was nervous about SGI’s commitment to low-end 64-bit processors,” according to CNET.

Next Gen Magazine describes Silicon Graphics having arguments with Nintendo over component profits after the Nintendo 64’s American launch MSRP was lowered at the last minute. Although both companies eventually worked out a deal, the arguments rubbed Nintendo the wrong way, and SGI would no longer be considered a guaranteed partner for Nintendo’s post-N64 game console.

Unfortunately, Nintendo’s quest to find a new chip-maker wouldn’t be easy. The industry’s top 3D chip experts were being lured away by NEC and Nvidia, and Nintendo was running out of options for potential partners with their new console design team. One group that showed serious interest in developing Nintendo’s post-N64 console was The 3DO Company Hardware Group.

According to the LinkedIn account of a software engineer named Rene Eiffert, 3DO’s negotiations with Nintendo lasted from 1995 through 1997 (the year after Nintendo 64 launched in North America).  The 3DO Company Hardware Group later changed their name to Cagent, and negotiations with Nintendo started back up again in 1997 through April 1998.  Both Linkedin listings below describe a sales effort to persuade Nintendo into serious negotiations to use an engine in their next game console.

3DO’s hardware division had operated under the name 3DO Systems, which developed and sold their M2 technology to Panasonic for over $100 million. Eventually, 3DO decided to exit the hardware business and sell off their hardware division to Samsung, where it was renamed CagEnt. CagEnt owned multiple technologies like a realtime MPEG encoding system, a DVD playback system, and a completed game console with brand new chip designs. Next Gen Magazine says, “The MX chipset was a dramatically enhanced version of the M2 chipset sold to Panasonic and Matsushita, now capable of a 100 million pixel per second fillrate and utilizing two PowerPC 602 chips at its core. (CagEnt’s executives also boasted of a four million triangle per second peak draw rate, though the quality of those tiny triangles would of course have been limited).”

Rene Eiffert’s Linkedin Page:

Next Gen Magazine reported that Nintendo executives Howard Lincoln and Genyo Takeda had visited CagEnt’s facilities. By late 1997, Nintendo made a formal offer to acquire CagEnt.

Next Gen Magazine says, “At this point, Nintendo had terminated its development contract with SGI. As purchase negotiations continued, Nintendo worked with CagEnt engineers on preliminary plans to redesign the MX architecture around a MIPS CPU, as Nintendo’s manufacturing partner NEC has a MIPS development license but none to produce the PowerPC 602. Nintendo and CagEnt flip-flopped on whether the finished machine would include a built-in CD-ROM or DVD-ROM as its primary storage medium, with Nintendo apparently continuing to insist that ROM cartridges would remain at the core of its new game system. Yet as DVD and MPEG technologies would have been part of the CagEnt acquisition, Nintendo would probably have found some reasonable use for those patents eventually. The MX-based machine was to be ready for sale in Japan in fall 1999— in other words, development of games for the new console would begin within literally months, starting with the shipment of dev kits to key teams at Rare and Nintendo’s Japanese headquarters.”

The clock was winding down. Nintendo wanted to release their next gen console as soon as Fall 1999 and as late as holiday 2000, but they would get hit by a string of bad luck. Samsung and Nintendo couldn’t reach a final agreement on the ownership of CagEnt, and Samsung would eventually sell CagEnt to Microsoft where it would become the Web TV division.

After the CagEnt deal fell through, Nintendo shifted their focus to a small company named ArtX, made up of former Silicon Graphics engineers who had designed the Nintendo 64’s hardware. Nintendo’s visit sparked debate inside ArtX about how a relationship with Nintendo could impact them in the long run.

“We said we really didn’t want to divert ourselves, but Nintendo can make a pretty compelling argument and it was a pretty huge opportunity, so we decided to go ahead in mid-’98,” said Tim Van Hook, chief designer for the Nintendo 64 and a founder of ArtX.  

For a new company like ArtX, the opportunity to work with Nintendo seemed too good to pass up. ArtX eventually forges a deal to develop the Flipper Chip for Nintendo’s next gen console, and in return, ArtX would receive royalties from Nintendo.

Not everyone was thrilled over this new agreement. SGI filed a trade secret claim against ArtX for using SGI’s secret information to directly compete against SGI and steal away their partners. SGI and MIPS threatened multiple lawsuits, with claims including unfair competition, misappropriation of trade secrets, interference with prospective economic advantage, and breach of contract. Some of the ArtX employees named in the suit included ArtX chief executive Wei Yen and ArtX founder Tim Van Hook.

“I not only want to get the company performing to where it needs to, but we will also be protecting our intellectual property and we intend to reinforce this,” said Richard Belluzzo, the new CEO of Silicon Graphics.

ArtX’s Greg Buchner told EETimes, “They [Nintendo] had given up on SGI. The last of the people they trusted were gone, and they went looking for the people. It’s not a company-to-company thing for [Nintendo]; it’s a person-to-person thing.”

By May 28, 1998, Silicon Graphics had dropped its lawsuit against ArtX. Nintendo, which previously contributed $45 million annually to the profits of MIPS Group, would officially break off the relationship. Ex-SGI employees told The Register that the lawsuit was dropped because SGI didn’t want to intimidate Nintendo.

On May 4, 1999, the internet swirled with rumors from an IGN article describing Nintendo’s next gen console, code-named Dolphin. The source told IGN that four companies (Rare, Retro Studios, EAD, and NST) already had development kits and were currently in the process of creating software for the Dolphin. “Management is claiming better graphics than the PSX2 (PlayStation 2),” a Nintendo insider told IGN64. “And supposedly it will run on DVD, but that’s still a big maybe at this point.” The article continues, “The buzz is that the system is a lot easier to program for than the Nintendo 64. And it appears Art-X [the system's graphics chip provider] managed to slap out a nasty chipset for pretty cheap.”

During Nintendo’s press conference at E3 1999, Nintendo of America Chairman Howard Lincoln took the stage to officially announce their next gen console. Lincoln told the press that Nintendo’s next gen console would be code-named “Dolphin,” and it would be “extremely powerful and not expensive”. He announced that the graphics chip would be developed by ArtX, led by Dr. Wei Yen, who was responsible for the N64’s graphics chip.

Lincoln told the audience, “We are absolutely confident that Dolphin’s graphics will equal or exceed anything our friends at Sony can come up with for Playstation 2.”

The Dolphin would feature a 400 MHz CPU called the Gekko processor, which would be created by IBM. In addition, Lincoln announced that Dolphin would not feature ROM cartridges, which drew applause from the press. He remained quiet on what type of medium Dolphin would use instead.

Lincoln wrapped up his speech, “We’ve lifted the curtain a little on Dolphin. But we aren’t going to lift it all the way.  We’re going to continue to be very circumspect in revealing all of Dolphin’s specs… for a very simple reason — there are more technological surprises to come, and we’d like to keep them just that — surprises — for you and especially for our competitors. But as I stand here this afternoon, I think Nintendo is very well positioned to take on Sony and Sega.”

————————————————————————————————————————————————————————————-

On October 19, 1999, Nintendo ordered more than 300 billion yen ($2.8 billion) worth of semiconductors for its next-generation machine, codenamed “Dolphin”. The first round of semiconductors would be ready by August 2000. According to IGN, NEC announced that they would spend an estimated 80 billion yen ($761 million) to construct a factory in southern Japan. This factory would concentrate on the production of memory and graphics semiconductors for the Dolphin. Nintendo’s Hiroshi Imanishi spoke on the challenges of keeping Dolphin’s costs down as the price of semiconductors kept increasing.

“We will need to make it cheaper than PlayStation 2,” said Imanishi.

Although Hiroshi Imanishi was never a household name, he was second in command during his tenure at Nintendo, and he spoke on behalf of former Nintendo president, Hiroshi Yamauchi.

Yamauchi echoed Imanishi’s comments about pricing.  He believed that pricing should be as low as possible so more consumers can play the software.

“A game machine that sells for close to 40,000 yen [$380] can be bought by young people old enough to work part-time jobs, but at any rate, is too expensive to be aimed at children. The Dolphin will not be that expensive,” said Yamauchi. In a separate interview, he further elaborated, “People do not play with the game machine itself. They play with the software, and they are forced to purchase a game machine in order to use the software. Therefore the price of the machine should be as cheap as possible.”

In 2000, many consumers were patiently waiting for the launch of Sony’s highly anticipated PlayStation 2. That didn’t stop Nintendo executives from hyping up a possible Dolphin launch around the same time-frame. They would also take the opportunity to downplay PlayStation 2’s graphical performance.

“We plan to stick to this date,” promised Nintendo’s managing director of sales & marketing, Axel Herr. He then spoke on Dolphin’s specs: “In terms of graphics, we came up with extremely fast chip architecture that, according to our technicians, will be 33% above the projected performance data of [Sony's] PlayStation 2. That’s easily twice as fast as [Sega's] Dreamcast.”

Some analysts believed Sony’s PlayStation would never have gained a significant lead in market share if the Nintendo 64’s launch hadn’t experienced constant delays. Nintendo promised that they wouldn’t make this mistake again with their next gen hardware. Yamauchi reconfirmed Nintendo’s plans to launch Dolphin shortly after PlayStation 2 in late 2000. “We would like to release the Dolphin closer to the PS2, but since we’re aiming for the Christmas 2000 shopping season, I don’t think the time difference is that big of a handicap,” said Yamauchi.

A Nintendo 64 developer told IGN that if “Perfect Dark” and “Donkey Kong 64” sold well, Nintendo wouldn’t release their next generation hardware until 2001. If Nintendo felt threatened by the competition, the next Nintendo console would release in October 2000. Another source told IGN that Nintendo was planning to ship the console by October 2000, but some people in the company weren’t entirely convinced it was possible.

Unfortunately, ‘Dolphin’ missed the targeted Christmas 2000 shopping season.  The delay would give Sony the advantage of releasing their console one year before its competitors.  For investors, the delay of Dolphin’s launch felt like Nintendo 64 all over again, where Sony accumulated market share before N64 had a chance to launch.  According to Nintendo’s Hiroshi Imanishi, the reason for Dolphin’s delay was obvious — the software wasn’t ready.

“It’s always the case with Nintendo, the hardware is already completed, but the software is not,” said Imanishi.

Nintendo downplayed the disappointment, saying the delay was actually a good thing because it allowed fans to spend more money on games for the Nintendo 64.

“A 2001 launch for Project Dolphin is not only in keeping with the normal product lifespan for our home consoles, but provides two important benefits,” says Peter Main, Nintendo’s executive vice president of sales and marketing. “First, it allows the millions of current Nintendo 64 owners to devote their video game dollars to the best lineup of new games in our history – without having to buy a new system. Secondly, the new launch date for Dolphin means that our system will come to market next year with a portfolio of game names across all genres that simply can’t be matched by any other company.”

Howard Lincoln was later quizzed by GameSpot about whether Dolphin would be harmed by the delay. GameSpot reminded Lincoln that Nintendo 64 was the last to launch against Sega Saturn and Sony PlayStation.

Lincoln replies, “I think the key in this is that if you look at the N64 experience, I don’t think we were hurt at all by being late or after Sony. Very likely we were hurt by the number of games – but the number of games had more to do with problems with development tools, and we won’t make the same mistake again on the Dolphin.” He later adds, “We’re not so concerned about being late, as much as we’re concerned about focusing on taking care of any disadvantages we may have had. Or in other words, maximizing our competitive advantages – franchises, characters, and things of that sort. In terms of technology, I’m absolutely convinced the Dolphin will be as good as the PlayStation2. I’m sure they’re both going to look just super.”

——————————————————————————————————————————————————–

The Developer Friendly System

Throughout the lifespan of the Nintendo 64, the company faced criticism for not having third party developers in mind when creating the Nintendo 64’s hardware and development tools. Nintendo of America Chairman Howard Lincoln would publicly apologize for the lack of third party support. “With the complexity of N64 technology it is incumbent upon us, and good business sense, to fully support third parties through the development process. To date, I don’t think we have provided as much support as we did with the Super NES and NES platforms,” said Lincoln.

Years later, Lincoln would reflect back on the Nintendo 64 days, and promised that we wouldn’t see a repeat of that situation. “I would say that we are deliberately making the Dolphin easy to program for – very strong development tools – because we learned our lesson with the Nintendo 64,” says Lincoln.

Miyamoto spoke on the challenges of Nintendo 64 development and the transition from 2D to 3D game development. He believed that the struggles some third parties faced during the N64 era were necessary because they forced weaker developers out of the industry and made the remaining developers much stronger.

“It was hard to develop for the Nintendo 64, especially because the software libraries were delayed. However, the Nintendo 64 truly brought developers into the era of 3D, and there were bound to be problems with that,” says Miyamoto. “I suppose developers who have been working with pseudo-3D on the PlayStation, are now finding themselves playing catch-up working in real 3D on the PlayStation 2. In that sense, I think the PlayStation 2 is even harder to develop for than the Nintendo 64. Nintendo 64 weeded out weaker developers at an early stage. In the long term, I think that was necessary. Almost a rite of passage.”

Shigeru Miyamoto believed that developers who had grown comfortable creating Nintendo 64 software wouldn’t find the Dolphin difficult to develop for. Due to the GameCube’s power, he didn’t think developers would have to put so much work into creating special effects as they did on the Nintendo 64. Instead of dealing with difficult hardware, developers could put more energy into the actual projects.

Shigeru Miyamoto says, “When the transition from one platform to another is occurring, the technology is different and everything is difficult as far as new technology is concerned. Having said that, there should be some advantage to making games for Nintendo’s new platform because when Nintendo 64 launched to the market it was already the next next-generation system. In other words, Nintendo 64 already realized a complete 3D technology when it was shifted from Super Nintendo. So, those developers who have already created good software for Nintendo 64 are already in a stage where they will be able to produce good software for Dolphin.  In the case of Nintendo 64, we have to be very experienced. We have to have full knowledge in what will be able to run on the console. When it comes to Dolphin, it’s so powerful that we don’t have to put so much energy in making some special effects and sophisticated movement. In other words, we can put priority on the realization of our own game ideas, rather than trying to make some special effect work.”

Genyo Takeda, director and general manager of Nintendo, understood that there was a cultural divide between Nintendo and the engineers in Silicon Valley. But since many of the ArtX designers were former members of Silicon Graphics, the culture clash between the two companies would be smoothed over, and they worked toward the goal of succeeding with Dolphin where they had failed with Nintendo 64. Takeda believed that no one standout technology should dominate the Dolphin’s hardware. Instead, he asked ArtX, IBM, and Nintendo to create well-balanced technology that worked together in harmony.

“The most difficult issue in the Dolphin project was the gap between the technology-driven culture in Silicon Valley and our intention to pursue entertainment. It was most difficult to ask Valley engineers to swallow their pride,” said Takeda in an interview with EETimes. “Through discussion, we could establish a good relationship with them because the partners in Silicon Valley are mostly the same members who worked for Nintendo 64. We told them, ‘Let’s achieve what we could not do with Nintendo 64.’”

Howard Cheng, technical director at Nintendo Technology Development, was an SGI engineer who collaborated on Nintendo 64 development and then joined Nintendo a few years later. Cheng worked as the liaison between Nintendo and Silicon Valley engineers.

“Takeda gave me a very simple requirement, to make a high-performance machine that game developers can effectively work with,” Cheng said. “That itself is very difficult to ask as the speed of the processor and everything is so fast today. Balancing everything is very tricky, and it has to have a good price for consumers. That continues to be most challenging.”

Greg Buchner says ArtX and Nintendo worked on Dolphin [GameCube] for over three years to figure out what should be the main priorities in helping developers create software.  The most important priority was to create a high performance machine that would fight the rising costs of game development.

“We thought about the developers as our main customers. In particular for GameCube, we spent three years working with Nintendo of America and with all sorts of developers, trying to understand the challenges, needs, and problems they face,” says Buchner.  He further elaborates, “First among these is the rising cost of development. The GameCube can see high performance without too much trouble; it isn’t a quirky design, but a very clean one. It was important we didn’t require jumping through hoops for high performance to be achieved. On top of that, it is rich in features, and we worked to include a dream group of technical features that developers requested.”

ArtX kept notes on what developers liked and disliked about programming for the PlayStation (PS One). Since the PlayStation was originally planned as a collaboration between Sony and Nintendo, Nintendo already had an understanding of how Sony’s engineers approached console design. ArtX had also received information on the PlayStation 2’s architecture before Sony made the console’s specs public. Sony then compounded matters by going public with its specs and architecture a year before the PS2’s launch, giving ArtX even more time to analyze the PlayStation 2’s flaws.

“The GameCube [Dolphin] has been made from scratch, and is a very new architecture and design, but we certainly looked at what had gone well and what hadn’t for the PlayStation. There is a long history of working with Nintendo and their legacy, plus we got an inkling of how PlayStation 2 was going to be.  We drew from all those different consoles in terms of what was right and the direction we should go, what directions we should avoid. With that said, it wasn’t like we copied anything,” says Buchner.

Buchner continues, “They [Sony] went public with what the PS2 was going to be spec-wise and architecture-wise about a year before the product launched, which is a long time. When we saw their design, it really validated to us that we had made the right choice and done something different and efficient. They [Sony] made some mistakes in the architecture, and it made us feel good about what we had created.” He later adds, “From a very high level, going from the PlayStation to the PS2, they made it harder to develop for. With the GameCube [Dolphin], we made it much easier to work with than the Nintendo 64. From a development point of view, it looks like they went in the opposite direction we went, and that isn’t good.”

Shigeru Miyamoto had heard from developers that Sony created a console that was more difficult to make games for than the Nintendo 64.

“Well, of course I’ve never worked on the PS2 hardware, so I really don’t know, but what I’ve heard from many different people is that they have somehow created a machine that is even more difficult to make games for than the N64. In terms of the Gamecube, we have created the hardware so that it’s much easier to program for than N64, and yet we can guarantee several dozens of times better performance than N64. In other words, Gamecube is probably far superior to the PS2 in terms of the friendliness for game developers,” said Miyamoto.

To ensure that GameCube was more developer friendly, Nintendo brought developers on board to help influence the hardware’s design.  One of those developers was Martin Hollis, the director of Goldeneye 007 and Perfect Dark for the Nintendo 64.

“My responsibilities were chiefly to advise on the development of GameCube at Nintendo Technology Development Inc. (NTD). This little known group was newly created to architect and direct the development of GameCube hardware and associated software. My role was to bring the point of view of a game developer to the table, and to ensure the hardware was game developer friendly,” says Hollis. “The experience was fascinating, as I have always loved hardware and enjoyed having a full and deep understanding of what is going on under the hood. The chance to influence and to learn more about hardware design was very exciting. I learned an enormous amount from Howard Cheng in that era, especially on the subject of high level architecture and console design strategy.”

ArtX spent most of 1998 figuring out what the Flipper chip needed to be, and by 1999 they had produced the first silicon. In 2000, the final silicon was ready for production, and it was shown at Spaceworld 2000 along with various tech demos.

————————————————————————————————————————————————————————————

At Nintendo’s Kyoto headquarters, Kenichiro Ashida and his team were busy designing the overall look of the Dolphin. At a roundtable, Shigeru Miyamoto said the goal was to create a system that looked like the ultimate gaming system, not another piece of audio/visual hardware that would sit on a shelf and never move. Studies revealed that most Japanese families preferred smaller electronics in their households, and these discoveries led to a small, compact, and efficient design.

Ashida explained in an old interview, “At first, I proposed several ideas that were less tall and more wide than what you see now. One of the early ideas looked something like a UFO. These designs weren’t working very well, so I took a close look at how real game players and their game systems interact. I wanted to make the Nintendo GameCube fit well into a user-friendly environment. The cube shape gives the impression of something very compact, especially when you see it sitting on the floor. I like the simple, polished look of the final design.”

The company discovered that many gamers became personally attached to their consoles.  They would take their consoles over to a friend’s house to play, or they would move their console from one room to another. Nintendo decided to include a handle on the GameCube to give it portability and a more personal, friendly look.

“Before I started thinking about the design of the Nintendo GameCube, I did some research to determine how video game systems were used in game players’ homes. I discovered that a lot of players actually moved the console away from the television and closer to themselves while playing games. Adding the handle to the system makes it easier for players to do this, and it also gives the system a friendly look,” said Ashida.

Miyamoto also added, “GameCube’s design is pretty peculiar, different from what we are in the habit of seeing in a gaming console. The first thing we had in mind was the evolution of videogaming itself: the reduced size of the GameCube allows you to carry it from room to room quite easily. Every family has at least two TV sets at home, and it is very easy to bring the GameCube from the living room to the bedroom if you want, not to mention a friend’s place. We looked for simplicity and practicality: GameCube wants to be a console that fits all the family, from the youngsters to the elders.”

Another issue brought up at the Kyoto headquarters was deciding which colors to launch their console with. Nintendo conducted studies on a wide variety of colors ranging from lime green to bright pink. They found that black was much more popular in the United States while countries like Japan preferred indigo and orange.

“One of the colors we’re using for GameCube is Indigo, which was adopted from a color found in the Nintendo GameCube logotype. We are aware, however, that people have a wide range of likes and dislikes when it comes to color,” says Ashida. “We did a lot of research, and found that many people wanted to see a Jet (black) version of Nintendo GameCube. We are releasing that both in Japan and the U.S. In order to appeal to a wider audience in Japan, we released a Spice (orange) version as well.”

As the hardware’s development was being finalized, there was a debate over what the console should be named. Shigeru Miyamoto pushed for “Dolphin” to become the official name, but not everyone supported the idea. “I am of the opinion that the dolphin can be the actual official name of the product, but some people disagree,” says Miyamoto.

A website named TendoBox did some digging in the U.S. trademark database and discovered that Nintendo had registered the name “StarCube” three times. The similarities between the Nintendo 64 logo and the StarCube logo were striking. Nintendo of Sweden’s official website posted information about this new “StarCube” name, but Nintendo of America and Nintendo of Japan wouldn’t confirm anything.

According to the official Nintendo website of Sweden, “Sources from Nintendo in Japan have now confirmed that Nintendo’s new videogame console, earlier called Project Dolphin, will be named Starcube. The network, over which you will be able to play against people all over the world, will be named Star Road. Much more about Starcube and Star Road will be presented at Spaceworld August 24-27.”

When Nintendo of America executives, Perrin Kaplan and Jim Merrick, were asked about the Starcube name, they both denied ever hearing of the name.

“Yeah, there was a lot of debate. We really wanted something that reinforced the design and defined the system really well. There was a lot of debate. I don’t remember ‘Starcube’ coming up, but there were a lot of other possibilities,” says Merrick.

Perrin Kaplan told IGN that the name “GameCube” was a collaborative effort between Nintendo of Japan and Nintendo of America.

“There was really a lot of effort on both sides”, said Kaplan. “And Nintendo Japan really took a lot of time with the name. NOA and Japan were involved with the logo design and everything.”

————————————————————————————————————————————————————————————

In an interview with MCV Magazine, Sega of America’s VP of development Greg Thomas said he believed Dolphin would be a bigger threat to the Dreamcast than the PlayStation 2 or Xbox. Why? Because he believed Nintendo was working on a secret “sensory controller” for the Dolphin.

“I don’t care how many polygons X-Box can put out,” Thomas said. “It’s all about who can deliver the next great gameplay experience. I’m not nervous about X-Box or PlayStation2, because we think we can make better games. No one will have head-to-head Internet play but us. What does worry me is Dolphin’s sensory controllers [which are rumored to include microphones and headphone jacks] because there’s an example of someone thinking about something different.”

Another developer who spoke about the GameCube’s [Dolphin’s] motion controls was Julian Eggebrecht, the CEO of Factor 5. Factor 5 played a prominent role in the development of the GameCube’s hardware and tools. They helped create middleware such as DivX and MusyX for the console, and they were one of the first developers to receive Dolphin development kits.

Julian Eggebrecht says Factor 5 received early prototypes of a GameCube controller with motion controls as they developed Rogue Leader. This would line up with the quote from Sega of America’s VP of Development who talked about Dolphin having “sensory controllers”.

“When we were doing Star Wars: Rogue Leader for the GameCube actually, we had an early prototype of that controller, and that had motion control. So we thought our style of gameplay – especially when it comes to flight – about motion control for a long time, so we were kind of anticipating it, and I was always keeping it in the back of my head,” says Eggebrecht.

The biggest misconception is that Nintendo moved to motion controls because of the GameCube’s failures, but Nintendo was interested in motion controls as soon as the GameCube launched. On September 24th, 2001, the same month the GameCube launched in Japan, Nintendo licensed two U.S. patent applications from a company called Gyration that deal with tracking human motion and translating it into linear movement of computer graphic images. During that month, Gyration pitched GyroPad prototypes to Nintendo’s executives; Gizmodo later published photos of them. When CVG’s Rob Crossley asked a Gyration employee about Julian’s comments, he said his company never made any gyroscope accessory for the GameCube.

Although Gyration says the company never worked on any motion controllers or add-ons for the GameCube, it’s interesting to note that many patents have been filed regarding motion controls for the GameCube.

In a patent filed in 2003, figures show how either a Game Boy Advance (Fig. 1) or a GameCube controller (Fig. 17) could be used to create motion (tilting a world or tilting characters).

The patent says, “In this conventional technique, when a handheld game device or a game controller (hereinafter, referred to as “a game device, etc.” in place of “a handheld game device or a game controller”) is tilted, a game image is generated in which an object such as a player character, etc., moves (rolls over) in the direction of tilt, thereby allowing a player to feel as if a game space is actually tilted in accordance with a tilt of the game device, etc.”
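Stripped of the claim language, the mechanism is straightforward: sample the device’s tilt each frame and move the on-screen object in the direction of the tilt. Below is a minimal sketch of that idea in Python; read_tilt(), the sensitivity value, and the 2D position update are illustrative assumptions, not details taken from Nintendo’s filing.

    # Hypothetical sketch of the tilt-to-movement idea described in the patent.
    # read_tilt() is a stand-in for whatever tilt sensor the device exposes.

    def read_tilt():
        """Return (tilt_x, tilt_y) in degrees from a tilt sensor (stubbed here)."""
        return (5.0, -2.0)

    def update_object(pos, sensitivity=0.1):
        """Roll the object in the direction the device is tilted."""
        tilt_x, tilt_y = read_tilt()
        # The farther the device is tilted, the faster the object rolls.
        return (pos[0] + tilt_x * sensitivity, pos[1] + tilt_y * sensitivity)

    position = (0.0, 0.0)
    for _ in range(60):  # one second of updates at 60 frames per second
        position = update_object(position)

The point of the claim is exactly this mapping: the player tilts the device, and the game world responds as if it had been tilted.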

A patent with a priority date of 2004 and a filing date of 2005 shows a GameCube controller (not a Wii remote) being used as a motion controller for Wii Sports.

The patent explains, “A game system for executing a sport game in which a player character and an opponent character rally a hit object in a virtual game space, comprising: a movement controller configured to move the player character in the virtual game space in accordance with an operation by a player; a moving amount detector configured to detect the player character’s moving amount per predetermined unit time at each predetermined time interval; an adder configured to add the player character’s moving amount per predetermined unit time for accumulation; and a motion controller configured to control the motion of the player character so as to provide the player with the option of instructing the player character to perform a hitting motion of a first type and to prohibit the player from instructing the player character to perform a hitting motion of a second type, different from the first type, when a result of addition by the adder does not exceed a predetermined value, and so as to provide the player with the option of instructing the player character to perform the hitting motion of the first type or the hitting motion of the second type when a result of addition by the adder exceeds the predetermined value.”
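The claim language is dense, but it boils down to an accumulator with a threshold: the player character’s movement is summed over time, and a second, stronger type of hit is unlocked only once the accumulated movement exceeds a set value. Here is a minimal sketch of that gating logic in Python; the class, the method names, and the threshold value are illustrative assumptions, since the patent specifies no concrete numbers.

    # Hypothetical sketch of the accumulate-and-gate mechanism in the patent claim.
    THRESHOLD = 100.0  # the claim's "predetermined value" (number is made up)

    class PlayerCharacter:
        def __init__(self):
            self.accumulated_movement = 0.0  # the "adder" in the claim

        def move(self, distance_this_tick):
            # The "moving amount detector" samples movement each unit time;
            # the adder accumulates the result.
            self.accumulated_movement += abs(distance_this_tick)

        def allowed_hits(self):
            # Below the threshold, only the first hit type is available;
            # above it, the player may also perform the second hit type.
            if self.accumulated_movement > THRESHOLD:
                return ("normal_hit", "power_hit")
            return ("normal_hit",)

    player = PlayerCharacter()
    for step in (10.0, 25.0, 40.0, 30.0):
        player.move(step)
    print(player.allowed_hits())  # ('normal_hit', 'power_hit') once movement exceeds 100

In tennis terms: keep the character running and, once it has covered enough ground, a power shot becomes available alongside the normal swing.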

But then there’s another patent, filed in 2006, that shows the Wii remote being used with the GameCube’s WaveBird adapter. The patent shows a user playing Wii Sports with the GameCube.

This information is not meant to discredit Gyration’s claim that there was never a motion controller in development for the GameCube. Obviously, the company worked very closely with Nintendo on the Wii remote’s technology. Rather, it is meant to explain why some people believe a motion controller was in development for the GameCube. Remember, we didn’t even know that the GameCube was designed with 3D stereoscopic capabilities until 2011, so you never know what experiments were going on at NCL during the GameCube days, especially since Nintendo was experimenting with motion controls as far back as Spaceworld 2000, using the Game Boy Advance with the GameCube for Kirby Tilt ’n’ Tumble.

——————————————————————————————————————————————————-

(Note: I published this section on September 29th, 2013, but it was originally written for this article.)

Reactions to the Nintendo 64’s controller were mixed: it received praise for revolutionizing controls for 3D movement, but it was also criticized for being too large and uncomfortable. Speaking to the press in 1999, Shigeru Miyamoto said, “The major problem is that the Japanese user says that the N64 controller is too big and the American user says it’s the appropriate size.”

The common belief was that Shigeru Miyamoto designed the Nintendo 64’s controller around “Super Mario 64”, and many wondered if the GameCube’s controller followed a similar development path. Nintendo’s Jim Merrick shot down any speculation that Miyamoto designed the GameCube’s controller around a single game.

“No, and I think that’s a misnomer to say that the N64 controller was designed around Super Mario 64. Yes, Mr. Miyamoto wanted analog control because he had a vision of how he wanted that game to work, but the controller wasn’t designed specifically for one game,” said Merrick.

Satoru Iwata, Nintendo’s director of planning at the time, spoke to the Japanese magazine nDream about the GameCube. Iwata told the magazine that Nintendo’s obsession with creating and redesigning controllers is why its controllers receive so much praise.

Miyamoto devoted more time and energy to the GameCube’s controller than to any of Nintendo’s previous controllers. The NES controller brought the D-Pad to home consoles, the SNES controller was the first to add shoulder buttons, and the Nintendo 64’s controller introduced the analog stick to 3D consoles. How would Miyamoto top his previous three achievements? For starters, his new mission was to completely reinvent the shape and feel of the controller so that anyone, young or old, with big or small hands, could comfortably hold it.

“This Gamecube controller is the one on which I spent the longest time designing. As far as controller designs are concerned, I think that this is the fourth or fifth version since the original design,” says Miyamoto. “Our target user for this controller is not very specific, it’s very general: even a beginner who has never touched a controller can use it, your grandmother can use it, or even children with small hands can use it. I think it’s already been three years or so since I first started working on this controller design. I don’t know if any of you are a developer for the Gamecube, but if you are, you are going to receive the development tools pretty soon, and the controller included with the tools is already different from the one that shall be the final version that will go to market.”

Over the span of three years, the GameCube controller dramatically evolved, with ideas being added and removed on a monthly basis. Miyamoto, like any artist, wanted to top himself by creating something better than his previous work. For example, the ‘B’ button was originally kidney-shaped like the ‘X’ and ‘Y’ buttons, but it later changed to a small, round shape.

Satoru Iwata says, “I’m frankly pretty surprised at the number of changes we’ve made to this controller [laughs]. I think that it really contains a lot of the DNA of Nintendo, if you will. And I think that Nintendo puts a lot more emphasis on the controller, and uses it more, than any of the other companies.”

Miyamoto’s first idea was to reinvent the traditional placement of the A, B, X, and Y buttons that had become standard in the industry.

“I don’t want to appear self-important, but I was the first to put four buttons on the right hand of the pad, when I designed the Super NES controller and Sega, Sony and now even Microsoft have followed that idea. I don’t want to state they copied from us, but it is obvious that the four buttons became a standard. Now I have decided to renounce this shape.  I invented it and I can afford to renounce it. (smiles),” says Miyamoto.

The GameCube’s controller would place a far greater emphasis on having a “main” button, a large green button surrounded by smaller buttons.  The sizes, shapes, and positions of each button would help players identify each button’s level of importance on the controller’s layout.

Miyamoto explains, “I wanted to focus on the immediate recognition of the main button on the joypad. On the SNES it was the ‘A’ button; on the GameCube, it is the green one. It is pleasant to the touch, and the player is immediately aware which button is the most important one, the main control between him and what permits him to interact, for example, with Mario.”

The controller was designed as a culmination of Miyamoto’s ideas from previous controllers, but instead of every single button being round, many of the buttons were kidney-shaped to make the controls more intuitive.

“Well, the whole idea was to be able to feel them [kidney-shaped buttons] with your eyes closed and have them more intuitive versus everything always being round,” said Perrin Kaplan, when asked about the controller’s unusual button layout.

As the GameCube controller went through various changes, Miyamoto had hinted at the possibility of dropping the D-Pad altogether.

“We had to fine tune many aspects of the controller, but the basic layout and design hasn’t changed. The real differences have been the sizes and placement of the buttons.” said Miyamoto. “We wanted the perfect design and I think we nearly achieved that.  In the future, perhaps we won’t need the cross button [D-Pad].”

IGN wrote an article claiming their sources had seen early prototypes of the controller without a D-Pad. After writing the article, they spoke to more developers working on Dolphin hardware, and one developer told the website, “I don’t know where [the D-Pad] is going to be, but they’re going to stick it on there somewhere now.”

“To this we say, ‘Buh?’ Was the D-Pad a go all along or is this some new development?” wrote IGN. “We’re not sure. One thing is for certain, though – as of last month, developers were still telling us that there was absolutely not going to be a D-Pad on the Dolphin controller, which leads us to believe that Nintendo decided to go ahead with one in the eleventh hour.”

It’s hard to imagine a Nintendo controller without a D-Pad, but this almost happened with the GameCube. Did Shigeru Miyamoto believe that the shift from 2D to 3D gaming would make D-Pads irrelevant? Unfortunately, when the D-Pad was added to the controller, it felt like an afterthought instead of a priority. Similar in size to the Game Boy Advance’s tiny D-Pad, it was awkwardly placed on the GameCube’s controller, and it seemed obvious that Nintendo had run out of ideas on where to put it.

Kenichiro Ashida, one of Nintendo’s hardware designers on the GameCube, says the controller was designed to make you forget that you are holding a controller. But it was challenging to incorporate so many features into the controller while keeping it comfortable.

“In my opinion, the ideal controller is one which the player forgets he is even holding. It was very difficult to accomplish this task with the Gamecube controller, because we wanted to incorporate so many features into it. From the beginning to the end of the project I kept asking myself, ‘How can I arrange the features comfortably?’” says Ashida.

Miyamoto felt that he had succeeded in creating a controller that was superior to the Nintendo 64′s controller, but he also believed the GameCube controller would set a new standard for future game controllers from Nintendo, Sony, and Microsoft.

“I had some confidence with the N64 controller, too. However, when I compare the two, I can tell that the GCN controller is better designed for game play. What I really want to say is, ‘Get accustomed to the GCN controller because, 10 years from now, this controller will be the standard,’” said Miyamoto.

At E3 2001, months before launch, Iwata mentioned that Miyamoto still planned to tweak the controller’s C-stick before release. Nintendo’s engineers worked around the clock to sneak in Miyamoto’s final requests before Japan’s launch in September.

Satoru Iwata explains, “But [Miyamoto] mentioned earlier that he’s still not totally happy with the feel of the c-stick and he’s still working on it and it will probably change again. But the hardware team is intent that the one we’re showing at E3 (2001) is the final one.  But because Mr. Miyamoto has asked for this final change and we do want to get this in the final product, the hardware team is back at home working very hard to make sure that they get it done in time and from my opinion, it looks as though everything is going well. Obviously, when we make prototypes at Nintendo, we then pass those along for everybody to comment on and give their input on. And as Mr. Miyamoto is known to do, he likes to continue to make his changes and maximize all the functions of the controller, even up to the very last minute at which point we’ve got our hardware people scrambling to get his changes in [laughs].  I think it’s because of this perfectionist attitude that Mr. Miyamoto has and the kind of last minute changes he makes to improve the controller is part of the reason why people really feel that Nintendo controllers have the best feel and are very easy to use.”


Years after the GameCube’s release, Shigeru Miyamoto viewed the GameCube’s controller from a more critical perspective. Although Miyamoto was happy with the controller’s final design, he didn’t believe it was absolutely perfect. The clickable L and R buttons seemed innovative at the time, but Miyamoto seemed disappointed that Nintendo didn’t use the functionality for more unique ideas.

“Using the analog and the L and R shoulder buttons was maybe a little hard for the younger players. We were not able to use that functionality very well in games either. On this next one [Revolution], we’re really looking at solving some of those problems,” said Miyamoto. “With the GameCube, we originally thought we’d be able to use the functionality of the L and R buttons to create some really unique things. In the end we just made basic games and didn’t really utilize the full potential, but with the Revolution we’re hoping to do is utilize the interface to create more interesting and unique games.”

Miyamoto was asked if any specific game stuck with him after all of these years.  Miyamoto replied, “It’s not a game, but maybe the GameCube’s controller. We made it as a culmination of everything leading up to it, but it really underwhelmed. ‘This line of thinking doesn’t give us anything else to shoot for, does it?’ That’s how I felt.”

When asked for more specifics, Miyamoto admitted that the GameCube controller led to the creation of the Wii remote. “The GameCube controller is a product of us feeling that, without this or that, people wouldn’t be able to play the games we make. But then we realized that was a problem, that we were thinking based on that controller as the premise.”

It wasn’t just Miyamoto who would look back at the GameCube’s controller with a different perspective. Kenichiro Ashida, one of the lead designers of the controller, said that over the years he came to feel incompatible with it.

“I personally felt that the GameCube controller was the culmination of all controllers that had come before it, and that it couldn’t be improved via the traditional concept of simply adding to it. More than anything else, I felt as though the controller and I were incompatible. Having a family, the time I had to play hard games decreased, and a gap between my “creator self” and my “player self” was born,” said Ashida.

———————————————————————————————————————————————————————————–

According to the book “Opening the Xbox: Inside Microsoft’s Plan to Unleash an Entertainment Revolution”, Microsoft was ready to spend $25 billion for the Japanese game giant. Nintendo’s U.S. president Minoru Arakawa wasn’t sure what to think when Microsoft made the offer. “I was surprised,” Arakawa admitted. “We didn’t need the money. I thought it was a joke.”

Arakawa and Nintendo of America still took the offer seriously and brought discussions to the corporate office in Japan. “Some Nintendo executives seemed interested and the meetings went on through the winter,” the book describes. “The parties met six or seven times. Microsoft wanted Nintendo to drop its GameCube console and get behind the Xbox. But Hiroshi Yamauchi, the aging CEO of Nintendo, didn’t like the idea. By January 2000, the talks were over.”

The two companies parted ways and wished each other luck. “Our ability to remain independent was unquestioned due to our financial status,” explained Arakawa. “And it became clear that our objectives and their objectives were not the same.” Microsoft said the door would remain open if Nintendo ever changed its mind. In the German magazine Wirtschaftswoche, founder and chairman Bill Gates expressed interest in buying the game giant should Nintendo ever put itself on the market. “If Hiroshi Yamauchi calls, he will be directly transferred to me,” said Gates.

“It would be a great investment,” said Perrin Kaplan, vice president of marketing and corporate affairs for Nintendo. “We’re a very successful company.  But the bad news for Mr. Gates is we’re not for sale.”

Yamauchi had a great deal of respect for Bill Gates, but he did not believe Microsoft understood the game business or Japanese gamers.

“Everyone agrees that Bill Gates is a great businessman,” Hiroshi Yamauchi said, “but he’s only human. There is one thing he knows nothing about, and that’s games. If you know nothing about sumo, you can’t expect to take on a Yokozuna… I expect even in a year’s time they’ll be able to see the consequences of this.”

In a separate interview, Yamauchi slammed Microsoft’s intentions in entering the game industry. “There are a lot of people in the industry who really do not know about games. One big company in the USA especially thinks that it can use a lot of money to surround itself with software companies and accomplish what Nintendo has done,” said Nintendo president Hiroshi Yamauchi. “We don’t think it will go that easily. It seems that they will be bringing their system out next year, and the results of how well they actually do will be known to us in the beginning of the following year.”

Hiroshi Yamauchi also shot down comparisons between what Nintendo and Microsoft are offering.

“Microsoft is going after performance only, and does not understand that the game is played with software. A Nintendo is ultimately a toy. It is the most advanced machine for playing games, and it is totally different from the Microsoft product. It is just like trying to compare a sumo wrestler and pro-wrestler; they play by totally different rules. We do not consider Microsoft to be our competitor,” said Yamauchi.

Yamauchi explained to an interviewer, “I don’t think the Xbox will do very well in Japan. The same old genres and development methods still work in America, but they won’t interest the more discriminating gamers of Japan. The PS2 was just discounted too, but I can’t expect that will have much of an effect.”

Satoru Iwata, Nintendo’s Director of Planning in 2001, criticized Microsoft for focusing more attention on their marketing budget than actual software.  In his opinion, a true entertainment company understands that marketing doesn’t work if the product is undesirable.

Iwata says, “What really showed me that they [Microsoft] don’t have the experience in the entertainment industry is that they started off by announcing the $500 million marketing amount. So that’s what they’re going to go with and that’s a big number and of course it’s great because it got them in the news. But our philosophy is that you don’t start off with a number of what you’re going to market something with. Rather, you look at the product, you look at the entertainment and what you’re trying to package and what you want it to be. Then you think about the best way to convey that to the consumer. That’s what entertainment is about and that’s why we don’t go in for those types of tactics. If you come up with a product that people don’t want, it doesn’t matter how much money you spend, they’re still not going to want it.  We don’t necessarily agree with how they’ve gone about doing things so far and don’t think it’s for us.”

—————————————————————————————————————————————————————


After selling 100 million consoles, “PlayStation” had quickly become a well-recognized brand name around the world. In 1999, Sony announced the PlayStation 2 and promised that the system would change everyone’s lives forever. Sony positioned the PS2 as a “multi-media computer entertainment system” and told the media that Sega and Nintendo were only interested in making toys.

“The new system is not just a technology upgrade,” said Kaz Hirai, president and chief operating officer of Sony Computer Entertainment of America Inc. “These are not just next-generation consoles but the vision of the next (big thing) in home entertainment. We have developed the most advanced entertainment platform in the world.”

Sony portrayed Sega’s Dreamcast as a minor upgrade that wasn’t built for the future, and told customers that the next generation wouldn’t arrive until the launch of the PlayStation 2. Sony’s executives did everything they could to hype up the capabilities of the PS2. They even bragged about how their own workstations were being surpassed by the PlayStation 2, and the media reported that the PS2 was so powerful that Iraq was buying units for military operations.

Phil Harrison, former vice president of SCEA, told the press, “Being half our price is indicative of the fact that they [Sega] are less than half our technology. This delivers a future proof element to our technology. It is many years ahead of its time.”

Behind the scenes, Nintendo was planning to make a strong comeback and unseat Sony from its dominant position. Unfortunately, Nintendo would have quite a mountain to climb to be top dog again. Hiroshi Yamauchi’s policies in the 1990s had led many third parties to rally behind Sony and abandon their support for Nintendo. However, even after the success of the first PlayStation, Yamauchi remained adamant about his position on quality over quantity.

“I’ve been told that Sony won over Nintendo by surrounding itself with software companies, and I will admit that situation was there in the past. However, times have changed, and it’s no longer a race to see how many useless companies you can get on your side.” said Yamauchi.

After the PlayStation 2 had launched in Japan, Hiroshi Yamauchi criticized PlayStation 2′s Japanese launch lineup. He then proceeded to mention how software on the PlayStation 2 was selling below expectations.

“If we release software for the console similar to what Sony has for PlayStation 2, that would be a failure. Currently, most games released for the PS2 are not selling well at all. I am confident that this is because game makers truly believe that they cannot make money on the PS2,” said Yamauchi.

Hiroshi Imanishi, Yamauchi’s right hand man, shrugged off the idea that Sony was a major competitor.  He claimed that Nintendo’s handheld division was a bigger threat to the GameCube than the PlayStation 2.

“Well, if I have to name one console which threatens Gamecube, it would be Game Boy Advance. Honestly, some may say that we are pretending to be strong, but we are not specifically thinking about so-called competition or rivals. People say, ‘Well, Sony should be the biggest competitor for the new Nintendo system.’ Sony was originally a hardware-oriented company, and we are pretty much different. We are simply going ahead in our own direction, making fun and interesting games, and providing these games for our users. So we haven’t specifically been concerned with any competition,” says Imanishi.

Imanishi wasn’t the only one who didn’t feel threatened by the PlayStation 2.  Shigeru Miyamoto said he didn’t care about the existence of the PlayStation 2 because he believed GameCube’s software was strong enough for people to buy two consoles.

“Well, I don’t feel the slightest threat from the PlayStation 2, and very frankly speaking, I am not too concerned about the existence of the PS2 at all, because we have created the GameCube hardware and will create the software that will become the requisite for everyone even though they already have the PS2. That is why I’m not too concerned about the PlayStation 2,” says Miyamoto.

—————————————————————————————————————————————————————

On October 22, 1999, the Japanese gaming magazine Famitsu Weekly conducted a survey in which a number of respected game companies and retailers were asked the following question: how well do you think Dolphin will sell? Around 45% of respondents had a positive outlook for Nintendo’s Dolphin, while another 33.9% were still unsure one way or the other. Considering that no information had been released on the system at that point in time, the results were better than expected.

The results:

  • It will sell very well. — 3.2%
  • It will sell pretty well. — 41.9%
  • It won’t sell very well. — 17.8%
  • It won’t sell at all. — 1.6%
  • Not sure yet. — 33.9%
  • No response. — 1.6%

PC Data also conducted a study tallying input from 1,500 home Internet users.  The results showed that one out of every three people surveyed plans to buy at least one next-generation console. Sony’s PlayStation2 was the most wanted system by those polled, pulling in a whopping 63% of the vote. Ranking at number two, Sega’s Dreamcast pulled 22.4%, Nintendo’s Dolphin system pulled 17.2% of the vote, and Microsoft’s X-Box came in last place with only 11.9% of the votes.

In a June 2000 reader survey conducted by gaming magazine Famitsu Weekly, Japanese gamers once again demonstrated their overwhelming support for the PlayStation 2 and significant disinterest in Nintendo’s next-generation console.

When asked if they would buy Nintendo’s Dolphin:

  • More than a quarter of those surveyed answered that they will wait and see how the system does before deciding.
  • Meanwhile, a massive 34.3% of Japanese gamers polled said they don’t know if they will buy Dolphin or not.
  • Only 5.1% of those surveyed said they will buy Dolphin the first day it is released in Japan.

This wasn’t too surprising, since the Nintendo 64’s popularity was at an all-time low in Japan and most Japanese gamers had lost interest in Nintendo altogether.

———————————————————————————————————————————————-

At GDC 2000, most software houses were already well into production with PlayStation 2 and Xbox titles, but very few developers had signed up to create games for Nintendo’s Dolphin. According to various sites, Dolphin was the butt of jokes at GDC, and one major respected developer told IGN, “We’ll develop for Dolphin in five years when Nintendo finally releases some information on it”. Another developer said, “Nintendo is making the same old mistakes, it’s not giving us any incentive to bother with Dolphin”.

At the same event, Bill Gates called for mass developer support to drive the Xbox, and Microsoft was aggressively pursuing third party developers by delivering development kits to as many studios as possible. Meanwhile, technical director Jim Merrick admitted to IGN that Nintendo did not yet have a formal development program in place. He suggested that potential Dolphin developers could prepare for Nintendo’s future system by creating prototypes of their games on the highest-performance PC that could be configured.

By June 2000, no U.S. companies had working GameCube development kits yet; they were still in the prototyping stages. Miyamoto explained, “There are several different stages of the development tools, and until the final one is ready we just cannot mass produce them. Even at NCL we’re still dealing with simulation and emulation, but very soon we’re going to have the final development kit ready. In the interim, we’re working with incomplete setups. But soon the final kits will go out.”

On October 20, 2000, technical director Jim Merrick gave another update on the GameCube development kits.

Merrick said, “Development kits are constantly evolving. It was only three weeks ago that we did a major revision on the N64 development kits. We’ll continue to evolve these things throughout their life — it’s part of their design. But do I think that the Gamecube development kits now are what they’ll be when the system launches? No. The development kits are pretty functional right now though. Is absolutely every feature implemented to the degree that we’d like it to be? No. But you can definitely make a game. We didn’t show any playable Gamecube games here, but I have played Gamecube games.”

In the December 2000 issue of Next Gen Magazine, approximately seven months after GDC, Imanishi said Nintendo was not approaching third parties to make games for the GameCube. Instead, the company expected third parties to come to them once the GameCube started growing its install base.

“Nintendo’s position is that we are going to sell our hardware with our own software titles, and if consumers buy a number of Gamecubes, then licensees would become interested in making games for Nintendo Gamecube. That’s the general idea in Nintendo’s business. So we are not actually approaching them [third parties] and asking them to make software for Nintendo. Already there are a number of requests [from publishers] who would like to make the software for Gamecube, so probably in September we will start explaining the technology and delivering the development kits to them. Once again, it’s their decision. If they would like to make Gamecube software, that’s fine, but we will never demand them to make games for Gamecube.”

Nintendo’s Perrin Kaplan was asked if the company ran the risk of underselling the GameCube console to game developers by keeping it so secretive. Many developers working on games for the PlayStation 2 and Xbox were stumped when IGN asked them about GameCube development. Kaplan said Nintendo was less focused on trying to get tons of third party support and more focused on quality third party titles.

Kaplan responds, “I will say, and Microsoft will tell you this themselves, we’re a little more like a pyramid and I think Sony and Microsoft are inverted. They want to have the masses and third-parties producing stuff and few games come out of their own house, versus Nintendo which is fewer and greater, and then spreads from there. I mean, we do have a lot of folks that are working on games, and Jim and his group continue to bring more and more folks on — and now it’s going to escalate even further. But it’s a wholly different approach. We’re in the hit business and I think Sony is in the business of just having a lot of software to pick from.”

In August 2001, Peter Main echoed Kaplan’s comments: “Our focus is not on maximizing the number of developers on this system, it’s in optimizing the quality of those developers. We’re absolutely convinced we’ve got top-quality developers around the world working on this project.”

Nintendo’s Satoru Iwata emphasized Hiroshi Yamauchi’s long-held belief that Nintendo does not need overwhelming third party support.

“The GameCube has been well received by the development community, but we don’t believe in overwhelming third party support. However, we’re certainly talking with more developers about the possibility of working together. Frequently, developers use our platforms solely for their own self-interests, so it’s hard to form management relationships.  Rather than business to business relationships, we’ve chosen more personal collaborations such as creator to creator.  Capcom’s decision to release Biohazard on the GameCube is a direct result of that,” says Iwata.

Famitsu Weekly interviewed sixty Japanese game companies to get their opinions on Nintendo’s Gamecube console before it launched.  These were the results.

What do you like about the Gamecube?

  • Design — 16
  • Supportive attitude towards developers — 11
  • The new 8 cm disc format — 9
  • The specs — 8
  • It’s a game machine (instead of a multimedia machine) — 7

How many units do you think it will sell in its first year?

  • Less than 1 million — 7
  • 1-2 million — 33
  • 2-3 million — 14
  • 3-4 million — 3
  • More than 4 million — 1
  • Other — 2

Do you think it will outsell the Nintendo 64?

  • Yes — 36
  • About the same — 13
  • Don’t know, no answer — 10
  • No — 1

Will you develop for the Gamecube?

  • Yes — 5
  • Researching the possibility — 15
  • Don’t know yet — 38
  • Other, no answer — 2

CNN reported that Nintendo was charging a much higher licensing fee for the GameCube ($11) while Microsoft and Sony charged $7–$9. This fell in line with an IGN report that Microsoft was charging a standard licensing fee of $8 (and possibly lower), and one major publisher told the site that “Microsoft has been very accommodating” when it comes to fees.

When Next Gen Magazine asked Peter Main, Nintendo’s top Marketing executive, how they deal with Microsoft and Sony wooing the same companies, Main shrugged it off.

“Third-party publishers know our price list, and it’s very similar to anything that’s out there. We know that’s consistent. What may be in question is marketing money. On the one hand, there was someone [Microsoft] who had no stake in the business and at any cost had to get in. They went to a variety of developers and agreed to pay, in essence, 100% of development costs with no strings attached, which had a lot of developers [coming] here and saying they have free goods that they can port to our system,” said Main.

Next Gen told Peter Main that it sounded like a good deal for Nintendo. Main replied, “And [Microsoft] paid for the whole deal… We’ve been in the business for a few years, and we didn’t do any deals like that. For better or worse, a $10 royalty plus the cost of their goods is about the same deal that all three companies are running on.”

On top of the GameCube’s higher licensing fees, shortages and delays of GameCube development kits pushed developers away from making games for the system.

Technical director Jim Merrick says, “We are now and will be in a shortage condition for development kits for a long time to come. But going to low-cost disc-based media is breaking down some of the economic barriers that publishers felt toward the cartridge-based business. So we have a lot of interest. And we’re not going to be able to satisfy the demand for development kits. In fact, probably six months after the Gamecube launches we’ll still be in a shortage of development kits. We just can’t produce them fast enough.”

In 2001, many developers still hadn’t received their development kits, and this caused rumors to spread around the industry that GameCube might not launch that year.  Nintendo’s George Harrison put those rumors to rest.

“GameCube is definitely going to ship this year. I think there has been speculation because some of the third-parties have only received their development kits more recently and they’re saying, ‘Gee, if I can’t get a game ready, how could they possibly launch GameCube?’ But the truth is that we prioritize the shipment of development kits,” says Harrison. “We certainly first provided them to our internal development groups, next to our second-party partners where we have investments like Rare and others. And then finally to what we really consider to be the cream of the third-party developers like Electronic Arts and a few others. So a third-party that has just received development hardware and says, ‘I can’t make a game by the end of 2001 so they won’t launch GameCube’ is not keeping focused on what’s really important for us, which is that we have to have the best possible, exclusive software for the launch this October.”

The first twelve months of the GameCube’s third party support were hurt by high licensing fees, shortages and delays of development kits, and Nintendo being very selective about who it worked with. After the Nintendo 64, this was not a good way to start the first year of third party support for Nintendo’s new console.

—————————————————————————————————————————————————————

Nintendo’s Criticism of Multi-platform Third Party Games

Since the GameCube days, Nintendo’s executives have shared the opinion that multiplatform third party titles do not really benefit Nintendo’s business. Why? Because Nintendo’s business is about creating an experience that competitors can’t duplicate, and Yamauchi, Iwata, and Miyamoto all believed third party multiplatform titles on the GameCube should offer something unique compared to the Xbox and PlayStation 2 versions. From Nintendo’s perspective, straight ports of multiplatform titles were a contradiction of the “Nintendo Difference” message that Nintendo was promoting at the time.

On February 7th, 2001, former Nintendo president Hiroshi Yamauchi strongly criticized the industry for creating one game and then porting it to all three consoles.

“Now software companies are going multi-platform, running one game on lots of consoles, just to sell that little bit more. Even Sega. I can understand why the industry’s flowing this way, but, speaking for Nintendo, I can hardly welcome it,” said Yamauchi. “When a user chooses a game, he always searches for something new and fun in a way he’s never seen before. If games on Nintendo machines are do-able on other companies’ consoles, then we’ll lose those users’ support. If we can’t succeed in separating ourselves, then we won’t win this battle. And that’s the reason why I’m not overjoyed about multi-platform tactics.”

On May 16, 2001, the week of GameCube’s big debut at E3, Satoru Iwata criticized third party publishers for porting AAA blockbuster titles to multiple consoles.

Satoru Iwata said, “If that keeps happening, the console business becomes a commodity business. There is no reason to choose one console over another, except price. Then it doesn’t matter which machine you choose – they all play the same games.”

Just like Satoru Iwata and Hiroshi Yamauchi, Shigeru Miyamoto was also not a fan of third party developers creating a game and porting it to multiple consoles.

“If you are just simply comparing the three hardware consoles in terms of functionality, you can make similar games, and many people are now trying to introduce multiplatform games. It may be good for game users, but when it comes to some kind of unique interactions with the hardware, I don’t think multiplatform games are contributing a lot. Whilst I think it is good to have many different titles for the platforms, I think that only Nintendo can provide certain experiences,” said Miyamoto.

Most Japanese publishers during the early 2000s concentrated all of their development on the best-selling platform, but rising development costs were interfering with that strategy.

Yoshihiro Maruyama, general manager of Xbox’s Japanese division, told 1Up.com in 2002, “The business model is different. You see companies in the U.S. using a multiplatform strategy, developing games for several consoles at once, with Electronic Arts leading the way. However, Japan concentrates all its development on the top platform alone, so it’s easy to run into dead ends. Of course, I’m not saying we should all be like EA. If Japanese games lose their workmanship, their quality… Before the PlayStation, there were lots of games made with this ‘Hey, this is neat, let’s try this’ attitude that sold way beyond expectations. Now it’s more, like, ‘How many copies will this sell? If it won’t sell, we can’t do it’, and it’s getting harder to make new games.”

—————————————————————————————————————————————————————

Nintendo’s Views on Graphics Before GameCube’s Launch

Seven months before the GameCube’s Japanese launch, Hiroshi Yamauchi criticized the industry for focusing so heavily on pretty graphics and power. “Software companies have run out of new ideas, so now all they strive for is more graphics and more force,” said Yamauchi.

On May 16th, 2001, Satoru Iwata, director of Nintendo’s strategic planning, explained to reporters how graphics were reaching diminishing returns. “There is not much more developers can do to impress players only with pictures. The hardest thing is to entertain,” said Iwata at a press conference before the opening of the Electronic Entertainment Expo. Two months after the GameCube’s successful North American launch, Hiroshi Yamauchi remained firm in his position that the industry should focus less on realistic graphics and more on ways to expand the market.

“Every game developer is shooting for nothing but realism and flashiness, so we’re seeing an overflow of games that look exactly the same,” said Yamauchi. “What does realism and flashiness have to do with fun? If more games with new types of gameplay and fun come on the market, the market will expand, companies will have more support, and there’d be a business to work with.”

Yamauchi also harshly criticized large-scale AAA games and said companies that make them will eventually go bankrupt. “Large-scale games are done for. If they continue to be made, then companies around the world will go under,” said Yamauchi.

When asked why Nintendo chose smaller discs for the GameCube instead of standard DVDs, Shigeru Miyamoto explained that the smaller discs send a message to developers that they don’t need to make long games with realistic graphics.

“I’m not sure if it’s the whole world demanding realistic graphics or just a limited number of games players, but some developers are in the mindset that they feel threatened by the world into making realistic looking games right now,” says Miyamoto. “Therefore, they just cannot afford the time to make unique software because they feel the pressure to make realistic games and are obsessed with graphics.”

He continues, “In the end, they cannot recoup their investment in the game. So in a way, the smaller disc is a message from Nintendo that you don’t need to fill out the capacity of a normal sized DVD disc. If we want to make larger software, then we just make the game on two or three discs.”

Speaking with IGN in 2002, Miyamoto further elaborated that games are becoming more expensive to produce, and it’s difficult to provide games at regular intervals when consumers expect longer games with richer graphics.  To combat the rising costs of software development, Miyamoto says there is a greater tendency to create more compact games that focus on new ideas.

———————————————————————————————————————————————————————————————————————————

Three days before the GameCube launched in Japan, the United States faced the largest terrorist attack in American history. Even in Japan, where American politics rarely dominate attention, people found themselves glued to the television. In the wake of the September 11th attacks, Nintendo’s share price fell 12% and reached its lowest point of the year.

“The attacks prompted the sell-off since investors are worried the incidents will chill consumers’ Christmas shopping appetite for games,” said Nobumasa Morimoto, senior analyst at Tokyo-Mitsubishi Securities Co.

At the time, it was unknown what impact the September 11th terrorist attacks would have on the November U.S. launches of the Xbox and the GameCube. The United States was also grappling with a weakened economy. Would U.S. shoppers still be in the mood to buy game consoles two months after the biggest terrorist attack on American soil? Wes Nihei, a former GamePro magazine writer, told USA Today, “when you factor in the economic atmosphere and the aftermath of the tragedy, you have to wonder how many people beyond the leading-edge, hard-core gaming community are going to be willing and able to spend several hundred dollars on video games.”

“I don’t think the U.S. attack will affect us,” Nintendo spokesman Yasuhiro Minagawa said. Fairfield Research said there had already been a slight decline in consumer interest in buying the Xbox or GameCube before the terrorist attacks. Schelley Olhava, an analyst with IDC, said, “From everything that I’m hearing, hardware and software game sales are still holding out strong and not being battered by the economy or terrorist attacks.”

Nintendo decided to postpone a Hollywood GameCube launch event that was originally planned for September 25th.  Nintendo’s Perrin Kaplan said, “People just weren’t ready to smile. People are just now getting ready to at least take a little bit of a break from all the soul-searching.”

When the GameCube finally launched in the U.S., Nintendo president Hiroshi Yamauchi believed that the impact of the terrorist attacks may have helped the game industry.

“We released the system to the world’s largest market in November and the initial store shipment of 1.3 million systems is mostly sold out. About 4.5 million video game systems were sold overall; all three consoles were at nearly equal footing. I think the impact of the terrorist attacks helped all home entertainment products sell better.” said Hiroshi Yamauchi.

—————————————————————————————————————————————————————

In early 2001, Nintendo’s marketing chief Peter Main sent a letter to ten major retail stores asking why the Xbox was being promoted so early. The letter told retailers that manufacturers, publishers, and retailers rarely promote new products more than four to eight weeks before launch. Main blamed 2000’s 5–6% decline in sales on retailers promoting the PlayStation 2 instead of products already on store shelves. A Microsoft spokesperson laughed off Nintendo’s letter and said, “We’re flattered that Nintendo is taking the Xbox seriously, but it’s a bit odd to see them trying to take control of the information that gamers are getting at retailers.”

Nintendo mounted a $75 million launch campaign for the Nintendo GameCube to highlight first and second party exclusive titles. During the 2001 holiday season, GameCube commercials were shown during “Harry Potter and the Sorcerer’s Stone” and “Lord of the Rings: The Fellowship of the Ring”. In print media, eight GameCube ads appeared in magazines such as “Maxim”, “Stuff”, and “Sports Illustrated for Kids”, and Nintendo promised that more than 85 full-page print ads would appear in over 50 publications. Out-of-home advertising included large Nintendo GameCube banners in malls across the United States, and spots ran during Channel One broadcasts inside high schools throughout the country.

For the launch campaign, Nintendo partnered with Dr. Pepper to promote an instant-win game with GameCube prizes and Nintendo characters appearing on 20-oz, 2-liter, and 12-can packages.

Before launch, Nintendo threw a celebrity-studded Hollywood party for the Nintendo GameCube. Celebrities at the launch event included Ryan Reynolds, Paris Hilton, Tara Reid, Christina Aguilera, Michelle Rodriguez, and Lil Kim. There were also Nintendo “Cube Club” events in 15 different U.S. cities featuring live music and alcoholic drinks, where people could play the system before launch.

The GameCube’s $75 million launch campaign looked like chump change in comparison to the Xbox’s $500 million launch advertising campaign. Microsoft would attempt to overshadow Nintendo’s marketing campaign at all costs. Xbox launch advertising included television ads aired on networks and shows such as Fox, Comedy Central, WWF Raw/Smackdown, ESPN, and MTV. Print ads for the Xbox were distributed in magazines such as Maxim, Rolling Stone, FHM, GamePro, Game Informer, and Electronic Gaming Monthly. Microsoft also launched national promotions with companies such as Taco Bell, SoBe, and Vans.

Peter Main, Nintendo’s marketing chief, seemed surprised that any company would launch a product with a $500 million marketing campaign.

“Is $500 million a lot to launch that? $500 million even in my sometimes-big numbers — you’d need one of those spreaders they use on a farm in order to really spread that kind of money around the world. It’s a huge number. They’re going to come and they’re going to make a big noise, and the jury is still out,” said Main.

Three months before the launches of the GameCube and Xbox, IDC conducted a survey which found that 32.9% of people intended to purchase a new console. An overwhelming majority, 62.9%, said they planned to purchase a PlayStation 2. Only 6.3% of respondents planned to buy a GameCube, and a tiny 4.5% planned to buy an Xbox.

Goldman Sachs surveyed 49 major retail chain stores in large cities to see how the GameCube and Xbox were selling one week after their U.S. launches. 73 percent of U.S. retailers said they had already sold out of Microsoft’s Xbox, while 48 percent were sold out of Nintendo’s GameCube. But Goldman Sachs noted that Nintendo had shipped 700,000 GameCube units to retailers while Microsoft had shipped only 300,000 Xbox units.

In 2003, the Ziff Davis Media Game Group surveyed 2,000 gamers and asked which game consoles they were currently looking forward to. 15 percent of respondents said they wanted to buy an Xbox, while only 9 percent wanted a GameCube. The report also found that 50 percent of Xbox owners and 41 percent of PS2 owners had a broadband internet connection, compared to only 31 percent of GameCube owners. That same year, the Chicago Sun-Times reported a new GameCube advertising campaign for both TV and print with the tagline “Who Are You?”. The campaign’s budget was over $100 million.

——————————————————————————————————————————————————————

One of the biggest problems that plagued the GameCube was the “kiddy image” that pushed third party publishers away from releasing adult content on the console. For example, at the time there was a significant amount of attention surrounding “Metal Gear Solid 2” for the PlayStation 2. In Electronic Gaming Monthly (issue 147), Metal Gear Solid creator Hideo Kojima was asked about the possibility of MGS2 coming to the Nintendo GameCube. Kojima responded to EGM, “When I pick hardware to do a game, I don’t look at the specs of the machine. I don’t really care about that stuff. I don’t care about how good the system is, because all consoles right now are at about the same level of power. I look at the audience that it has. Releasing a Metal Gear game on a Nintendo console would be ridiculous. I don’t know about GameCube, but [their] machines [until now] have been for younger kids.”

Eventually, the GameCube would receive “Metal Gear Solid: The Twin Snakes,” but the project was not a priority for Kojima’s team, and it was left in the hands of Silicon Knights to develop.

Silent Hill 2’s producer Akihiro Imamura was asked by IGN if a version of the franchise would come to Nintendo’s GameCube. Imamura said a Silent Hill game would be too adult for the console’s audience. “Not likely,” he said. “The machine will probably be good, but the demographic will be largely younger gamers initially. That doesn’t really fit in with our market for the Silent Hill series.”

Howard Lincoln, who had left his role as chairman at Nintendo of America, was asked by the Gaming Intelligence Agency about the Silent Hill 2 producer’s comments. “I think anybody who sees GameCube as just for young kids is crazy,” said Lincoln. Toward the middle of the interview, Lincoln explained, “There has always been a perception, and this is beyond Nintendo, that video games as a whole were passé or just for kids and didn’t appeal to a wider group. I think that because Nintendo has been successful with a younger audience as well, it leaves that impression in the minds of many that Nintendo games somehow just appeal to the young.”

Even developers like FromSoftware piled on the criticism of Nintendo skewing toward younger demographics. By 2003, FromSoftware had made 15 titles for the PlayStation 2 but only two for the GameCube.

“Our target is high school students and older, not Nintendo’s grade school demographics,” says Minako Gotoh, an executive at FromSoftware.

FromSoftware was not the only company that felt this way about the GameCube. According to IGN, “Regarding the Nintendo GameCube, [Koei] believes Nintendo is targeting a low age group with the system and thus plans on developing games which concentrate on this group.”

Nintendo’s Shigeru Miyamoto attempted to persuade third party developers to create unique content for the GameCube. However, most developers assumed that creating Nintendo content meant skewing toward younger audiences, and most companies at the time didn’t see much money in making games for that audience.

“I often talk to other developers and they think that we make GameCube software childish so that we can sell lots more games to children. Whenever we have talks with licensees or potential licensees, we make a point of asking them to make their game exclusive to Nintendo. They obviously mistook it for thinking that Nintendo wanted more childish looking games. So we had a lot of meetings with the licensees, like Sega, and explained that we really needed something unique from Sega. Then they said, ‘Actually we wanted to make more adult orientated games, rather than making them look childish,’ so I said, ‘Actually, that’s what we wanted to say at the beginning,’” said Miyamoto.

In a 2001 interview with Edge Magazine, Miyamoto clarified that Yamauchi was unhappy with the perception that Nintendo products were strictly for children.

“When it comes to the GameCube, a lot of the third parties are going to provide the variety of game genres once again, so in that sense I think we can be relieved. But it’s true that [NCL president] Mr. [Hiroshi] Yamauchi kind of hates for Nintendo to be called a toy for elementary school children—that’s one thing Mr. Yamauchi hates to hear,” says Miyamoto. “As a matter of fact, we’ve never made a Mario game for school children. It is true that these school children are playing a central role as core Nintendo users, but we will always make efforts to widen the audience. Having said that, this doesn’t mean we’re going to make 20-something-specific games. Rather, our aim is to make games which can appeal to small children as well as the parents, so that they can play together.”

In 2003, HSBC Securities Japan Ltd. analyst Ben Wedmore blamed GameCube’s childish image for the poor sales. “Nintendo has to give the market more of what it wants. Today’s 11-year-olds aren’t going to buy Mario,” said Wedmore.

Another analyst, John Taylor of Oregon-based Arcadia Investment Corp, says Sony did a much better job attracting children under 12 to the PlayStation 2 than Nintendo did for the GameCube.

“Nintendo has appealed to two primary segments of the market,” says Taylor. “One is young households — households with children below 12 years of age. And actually, Sony has even done better than Nintendo in that group. I hear the number floating around that Sony outsells Nintendo in households with sub-twelve-year-old players two-to-one, and that is Nintendo’s traditional core base.”

When asked why Nintendo was publishing “Eternal Darkness: Sanity’s Requiem”, an M-rated horror title, Nintendo’s Perrin Kaplan responded, “The kids who played Mario when they were 6 are now 28. They still want to play Mario, but they want some other stuff.” She continued, “Our industry has matured. We’re like the movie industry. You see R-rated movies and M-rated games.”

Franchises like “The Legend of Zelda” also added fuel to the GameCube’s childish image, which made people believe Nintendo was slowly abandoning older gamers. At Spaceworld 2001, Nintendo revealed the new “Legend of Zelda” game for the GameCube, but it was a huge departure from the art direction of “Ocarina of Time” and “Majora’s Mask”. Fans around the world protested the game’s art direction, and some outright refused to buy a GameCube because of it.

“They all said, “Oh, so is Nintendo now taking Zelda and trying to aim it only at kids?” Really, the whole concept we had behind it was that we thought it was a very creative and new way to show off Link. All the sudden it had been interpreted as Nintendo’s new strategy, and that was a shock for us,” said Shigeru Miyamoto.

Nintendo’s former president Hiroshi Yamauchi believed that violent games were responsible for hurting the sales of GameCube’s software.

“There have always been differences between players in Asia and those in North America and Europe,” he commented, “and I think those differences are becoming more clear. Sales of GameCube software fell short in North America and Europe last year, and I believe that’s due to the popularity of violent games on other consoles,” says Hiroshi Yamauchi.

Yamauchi continues, “The culture of Japan is very different and less accepting of such titles.  Our target market is the entire world, so it’s very difficult to develop software that appeals to everyone – and that’s the lifeline of our business. That’s why it’s hard to achieve success in America and Europe for Japanese developers, even the most talented ones.”


David Gosen, a former managing director at Nintendo of Europe, called the Grand Theft Auto series “a dead-end street.” He said he was glad the GameCube didn’t have Grand Theft Auto because he didn’t want to have to defend his company on the issue of violent games.

Shigeru Miyamoto disagreed with Nintendo’s David Gosen. He liked the design of Grand Theft Auto and said it should be welcomed by the industry.

“I have looked at Grand Theft Auto. The basic concept was very well done. Regardless of what the content of the game was, the level of freedom that you had in that one big city was a very good idea. Obviously it has gotten a lot of press because of the moral issues; but even aside from that, the game was done in such a way that gives it great gameplay,” said Miyamoto. He continued, “I think that is the reason that Grand Theft Auto is selling. Looking at this from the other side, I think we should welcome this game. Everybody is making all of this fuss about the incredible graphics and movies that they have in the games these days. For a game like Grand Theft Auto, which is not nearly as polished in terms of the graphical look, to do so well is positive for the game industry.”

During the GameCube era, Nintendo of Japan became confused by North America’s obsession with violence, epic cinematic stories, and photorealistic graphics. By 2004, games such as “Halo 2” (8 million units sold) and “Grand Theft Auto: San Andreas” (17.33 million units sold) were stealing attention away from Nintendo’s first party titles. Kyle Mercury, a brand specialist, technical director, and event specialist for U.S. Concepts from 2001 through 2007, says Nintendo became bothered by western culture’s obsession with violent games.

By 2005, discussion of high definition consoles was reaching an all-time high with the launch of the Xbox 360, but Nintendo still had to focus its marketing efforts on current generation products.

“In meetings it was clear NoJ [Nintendo of Japan] could not understand why the brand had fallen so far here in North America or comprehend why the mature titles, and more powerful consoles, were so successful. Nintendo represented fun, in the purest sense of the word, they always have. When you play Nintendo games you laugh, you yell, you smile, and you jump around. You have FUN. Someone, sadly I forget who, would later quote in one of those meetings that “Consumers don’t want fun anymore; they just want to kill people… in HD.” It was actually kind of true, and with the cultural differences between Japan and the US, it was easy to understand the confusion. The problem, though, was that NoA wasn’t confused by the situation at all, they understood those cultural differences quite well, but even if they could defy the marching orders from [Nintendo of Japan], I’m not sure they even would have. Gaming was growing up. This is when things started to get real ugly for a while inside those hallowed walls.”

Nintendo never understood the criticism of catering to young children when it was the only company aggressively catering to that market. Nintendo’s success was all about creating a formula that could work regardless of age, gender, or geographic location. Satoru Iwata explains, “This criticism has always confused us for a couple of reasons. First, youngsters are the people with the most time to play the games, and often the most passionate. The fact is that Nintendo is the only manufacturer who seriously targets this market. Second, young people are important for another reason. They are the purest and truest indicator of game quality.”

In an interview with Spotlight, Iwata singled out WarioWare Inc.: Mega MicroGame$ and Donkey Konga as examples of games that can appeal to people whether they are old or young. He believed creating those kinds of games was the correct strategy for Nintendo’s business.

“Game software should neither be exclusively targeted at children nor adults. Instead, we will develop software which anyone can instantly understand. When we market new software for adults, we should publicize it as software that everyone can enjoy… That will be more effective than undertaking a promotion specifically aimed at adults,” says Satoru Iwata.

“We always get Sony and others saying that Nintendo is for children, but it really isn’t. It’s about producing a formula that works everywhere around the world,” said Shigeru Miyamoto in an interview with Japaninc.

George Harrison, VP of Nintendo of America, admitted to Game Informer that the “kiddy” perception of the GameCube was real, but that the demographics of the console’s audience told a different story. Harrison said Nintendo’s products had a large adult audience, but that those same adults were too embarrassed to admit to their friends that they played Nintendo’s software.

“No, it is a real perception. But for us it’s somewhat mystifying because we look at the demographics of who has bought our hardware systems and it’s 40 percent over the age of 18 for GameCube and that type of thing. So it’s never been as complete as people make it seem or seem to believe,” says Harrison. “The number of people who are in college fraternities and in their 20s and 30s playing Mario Kart is kind of astounding. But if you go out to talk with your friends, you’re, “oh yeah. I don’t play Mario Kart, I play Need for Speed: Underground,” or something.”

———————————————————————————————————————————————————————————


When ArtX first began development on Dolphin’s hardware, they pitched the idea of a multi-media console capable of more than just playing games. Nintendo considered the idea as they tried to figure out what direction Dolphin’s hardware should take. Unfortunately, Nintendo realized they didn’t have the resources of Sony and Microsoft to create a console with non-gaming functionality.

The founder of ArtX, Timothy Van Hook, explains, “We prepared our all-inclusive competing proposal, a universal ‘multi-media engine.’ Sure, it would cost more, and early production units might have to be subsidized by game sales. Nintendo considered our vision. We waited. Finally, they explained that they would not be able to proceed in that direction. ‘We are a very small game company. We do not have the resources of these others.’ Yep. A little humility goes a long way. Pick your battles, marshal your cash flow, do what you do best.”

Although the GameCube was powerful, the console’s design was heavily criticized for looking like a child’s Fisher-Price toy. The lack of DVD playback, Nintendo’s disinterest in online gaming, and a cheaper price also encouraged the console’s toy image. In a 2000 issue of Hyper Magazine, Oddworld Inhabitants’ co-founder Lorne Lanning expressed disappointment in Nintendo’s philosophy of treating game consoles as toys. At the time, the developer had “Munch’s Oddysee” in the works for the PlayStation 2 and no plans to develop for Nintendo’s forthcoming console. Munch’s Oddysee would later become an exclusive launch title for the original Xbox.

“As for the Dolphin, it seems as though Nintendo has made it clear that they are a toy company only and have no interest in being a true media entertainment company,” said Lanning. “They want to keep making machines with limited potential so that they can keep control over game developers and publishers while enforcing those insanely high manufacturing costs.” Lanning continued, “The Dolphin is not on our radar screens.”

Robert J. Bach, Microsoft’s Xbox chief told Businessweek, “I don’t think Nintendo is here for the digital-entertainment revolution. They are a toy company.”

Gaming-only consoles like Dreamcast and GameCube were being slammed as niche products (or toys) while Sony and Microsoft products were hailed as the “future of entertainment”.

“The battle is over entertainment. Period. If you don’t have that vision, you are forever going to be a niche player,” declared Jack Tretton, executive vice president at Sony Computer Entertainment America.

Mark Rubenelli from THQ echoed the toy comments.

“Nintendo has very much said that Gamecube is going to the toy crowd, that it’s not going to be a convergence machine; it’s going to be a box that plays games exclusively, not a DVD-player or a multimedia super system. You know, it’s going to play games and it’s going to play games really well. I think that we feel like that caters to a little bit of a younger crowd. With that said, we obviously have a lot of products that work towards that audience.”

Publishers like Take-Two Interactive and THQ criticized Nintendo for creating a stand-alone game machine. Marketing research revealed that most people were looking for systems with extra features like DVD playback and online capabilities.

Jeff Lapin, CEO of Take-Two, told CNN, “I think Nintendo is going to have to redefine its hardware if it wants to compete. The simple fact is that people are looking for extra features.”

When asked about the GameCube, former THQ president Brian Farrell told CNN, “I’m not sure there’s room for a stand-alone game machine.”

Former Nintendo employee Jim Merrick told IGN, “The challenge is to produce something that differentiates itself. But we don’t want to be confused as a consumer electronics device. We’re not trying to be another component in your home stereo rack. It’s not the look that we’re going for — we want something that’s more approachable and more fun.”

Nintendo’s Jim Merrick and Perrin Kaplan defended the GameCube from criticism that the company was only interested in making toys.

“I mean, what’s a toy? How do you define a toy? Is Aibo a toy?” asks Merrick. “Aibo is a very sophisticated piece of technology, you know. Is it because of the technology that we don’t want to call it a toy? A toy is something a kid likes to spend time with. It’s an important part of a child’s development and playtime. Is there something wrong with being a toy? I have toys.”

Kaplan also adds, “You do have a lot of electronics and gadgets. But I wouldn’t classify GameCube as a toy. We’re an entertainment company.”

Shigeru Miyamoto remained firm in his stance that DVD players had no place in game machines. He described the problems of building DVD players into game consoles, such as slower media access times and the cost of repairs.

Shigeru Miyamoto says, “We have come up with the best hardware ever with the Nintendo GameCube system. I don’t know about Europe, but in Japan, DVD players are now very cheap. There is no need to combine the games machine with the DVD player as that can lead to other problems. These kinds of hardware devices are destined to be broken and will need repairing: even if your games system is OK, if your DVD system playback is out of order you’ll have to get it repaired, which can cost a lot of money.” He further explains, “Also, when it comes to the accessing time for the media, this is quite good in comparison: for example, in order to access a bigger disc, you need more power. Then there is the problem of the disc scratching when it is released on a DVD sliding tray.”

At a TGS 2003 keynote speech, Satoru Iwata warned that the convergence of entertainment functions with gaming consoles could lead to problems. One example he gave was games on cell phones, which he considered a bad idea because they use up battery power quickly, impacting your ability to make calls. He also expressed concern over companies using non-gaming capabilities to sell gaming consoles.

“Although PS2 was a sales success because it had a DVD player function, it troubled me that we had moved to a hardware where the sole function wasn’t playing games,” said Satoru Iwata.

——————————————————————————————————————————————————————-

After selling 20.64 million Nintendo 64 units in North America, Nintendo of America had maintained most of the SNES’s North American install base of 23 million, convincing the audience that loved the SNES to buy a Nintendo 64 thanks to western exclusives like “Goldeneye 007”, “Turok 2: Seeds of Evil”, and “Star Wars: Shadows of the Empire”. Unfortunately, Nintendo of Japan (5.54 million) and Nintendo of Europe (6.75 million) couldn’t find similar success, which contributed to the Nintendo 64’s low worldwide sales.

With 62% of the Nintendo 64’s worldwide sales coming from North America, Nintendo of Japan was convinced to loosen the leash on Nintendo of America.

NOA Chairman Howard Lincoln explained, “I’ve seen a natural transition over the years from the early 80s when our parent company [NCL] was essentially doing everything to today where Nintendo of America has a great deal of involvement with the development of Dolphin. A lot of the control, not only in Dolphin development systems, but actual development of the ArtX chip, relations with IBM and what-not are all being handled here at NOA. And certainly the relationship that we’ve developed with second-party developers, for example Rare, Retro, Left Field and others — those are all companies that we deal with. So we are much more involved with both the development of the hardware and software [for Dolphin] and I anticipate the same thing will be the case for Game Boy Advance.”

To continue building a strong presence in North America, NOA president Minoru Arakawa and chairman Howard Lincoln decided to invest in more developers that skewed toward western tastes. This led to the creation of Nintendo Software Technology, a first party developer in Redmond, Washington. NST was founded in 1998 by Claude Comair and Scott Tsumura, who worked around the clock to get a next generation “Wave Race” ready for the Dolphin’s launch.

When Claude Comair was asked how NST was created, he replied, “It’s actually the brainchild of Mr. Arakawa. He [Arakawa] invited myself and Scott Tsumura [president, NST] to dinner at his place. He asked if we could be compatible together. Mr. Arakawa proposed the idea of this wonderful adventure and as you can imagine we accepted. We are here today.”

In May 2000, one year after creating NST, Nintendo of America invested in a Canadian developer called Silicon Knights. At the time, the studio was busy developing “Eternal Darkness: Sanity’s Requiem” for the Nintendo 64, but the project would be moved to the Dolphin [GameCube]. Arakawa told the press, “Quality always has been the Nintendo mantra, so investing in Silicon Knights fits perfectly with our corporate goal.”

When asked how Nintendo decides which developers are worthy of second party status, Nintendo of America’s marketing executive Peter Main replied, “Well, we don’t have a help wanted ad out. We continue to search the world for the right people and the right situations. Our announcement a week ago about Silicon Knights was exactly that kind of process. Proven capabilities, technically very competent people, and ready to do some great work on the fly with us. And we’ll continue to look to existing second-party people in helping them grow their own teams and leverage their own skills. Our financial support there can be very important in addition to new embryonic situations that we can grow.”

Peter Main explained that Arakawa and Lincoln recruited second party developers because Nintendo of Japan understood the limits of the types of games it could make on its own. Since the NES days, publishers had become less loyal to any one platform, and Nintendo saw second parties as a logical extension of its Japanese developers.

“We continue, let me say, to have the utmost respect for the third-party world and its potential to be a very important part of our program going forward. However, in recent times a number of things have happened beyond that. Brand loyalty, by virtue of games being available on every platform, has somewhat reduced since the NES days when we did have platform exclusivity. We know that core to our business are the franchise characters that we own. We know the limitations of our own development skills,” says Peter Main.

Main continues, “In that line, we set out to find partners who could further build the brand identity and personality with product that would appear exclusively on our platform. We’re not a cash poor company, so bringing financial resources to groups that show tremendous creativity and development potential makes good sense. The second-parties have become very logical extensions of what we do so well with our group in Japan and NST now in America. And given the rate of change with new platforms, and what essentially in engineering terms becomes fast-track projects, in order to make them happen in the least amount of time, you need to bring people into your tent long before you’re finished. That is better done with internal partners than external ones. That all said and done, we think it helps us, especially in our next go-around, come to market with more product, and more diverse product, created by people who didn’t have to wait for final tools to be made available, and instead could do it on the fly with us as we develop the components.”

Retro Studios was founded in Austin, TX in October 1998 by former Acclaim Entertainment employee Jeff Spangenberg, who had convinced Howard Lincoln to fund the studio and position it as the “Rareware of the United States”. Retro Studios began work on four GameCube projects: “Nintendo NFL Football”, “Thunder Rally” (also known as “Car Combat”), “Raven Blade” (aka “Rune Blade”), and a first person shooter vaguely titled “Action Adventure”. Nintendo of America counted on Retro Studios to create launch titles for the Dolphin that would cater to adult audiences, which is why Retro’s pitch for a Mario Football game was quickly shot down in favor of an NFL-licensed title.

This is when things started to go downhill at Retro Studios. Projects were understaffed and employees overworked.

Jeff Spangenberg was excessively absent, projects went without supervision, and Nintendo was kept in the dark about the company’s progress. Senior executives at Retro Studios were accused of embezzling hundreds of thousands of dollars from the company and fleeing the country. Other executives were accused of using Retro Studios’ computers to run porn websites. One source told IGN that Retro Studios had become an extremely paranoid company that regularly tracked employees with security cameras and micromanaged how long employees took for lunch. “It honestly felt a little like living in a communist block country: you kind of didn’t know who to trust, who would rat on you, that sort of thing,” said one former Retro Studios employee. By 2001, photos had surfaced of Spangenberg in hot tubs with half-naked women on a website registered to a Retro Studios mailing address.

Nintendo’s NFL football game, originally scheduled for the GameCube’s launch, would eventually be cancelled along with other titles after missing the market window for an NFL football title.

Jason Hughes, a former Retro Studios employee, explained, “The nice way to put it is we missed the market window for an NFL football title. The not-so-nice way is that the GameCube was originally planned to ship at the start of football season, but was pushed back to several months after the start of football season. A new flagship football launch title would have a hard time capturing the core NFL audience at that time of year on a new console.”

Although the game had come close to completion, certain aspects weren’t ready when they needed to be.

“Some aspects were well advanced, while others had barely begun. For instance, we had networking support, a full replay system, a fast stadium renderer, but almost no collision detection, on-field AI was under constant revision, and the animation system was not fully functional,” explained Hughes.

Retro’s “Thunder Rally”, the furthest along of Retro Studios’ projects, was pitched to Nintendo of America as a Twisted Metal Black killer with elements of Mario Kart and QuakeWorld. Initially, the game was to combine four player split-screen with online multiplayer, but since Nintendo didn’t believe online gaming was cost effective, those ideas never materialized. After Nintendo evaluated Retro’s projects, it decided that Thunder Rally was too risky an investment, and the game was quietly cancelled.

In April 2000, Shigeru Miyamoto visited Retro Studios to review the progress of the company’s numerous projects. One source told Electronic Gaming Monthly, “It was like the Emperor visiting the Death Star. He didn’t seem to like any of the games very much, especially the racing title, which was probably our best-looking. Nintendo would come down about three times a year and rip on most of the games, except [Retro NFL] Football, which was under the radar.”

“We could have literally blown [Twisted Metal Black] away on the level of interactivity the game could’ve had. Much more strategy in manipulation, avoiding, changing world objects to modify gameplay, and so on,” an employee at Retro Studios told IGN. “We would have had many more weapons than Twisted Metal Black, with multiple functions including counters, additive effects, complementary effects, etc.”

A trailer for Retro’s role-playing title “Raven Blade” debuted at E3 in May 2001 to much applause from gamers and the press. But behind the scenes, the studio had struggled to create a decent combat system ever since development began in 1999. Nintendo gave Retro Studios a deadline: significantly improve Raven Blade by June 24, 2001 or the project would be terminated.

On July 19, 2001, Nintendo officially announced the cancellation of Raven Blade and the layoff of 26 staff members. The remaining members of the 35-member team moved over to the Metroid Prime project.

“Retro, along with Nintendo, has decided its most effective approach as a videogame developer is to focus on Metroid and give it the attention the franchise deserves,” said Nintendo. “To do that, they refined staffing and are laying off 26 people, mainly the team that worked on Raven Blade. Raven Blade is now cancelled.”

Nintendo decided that Retro Studios should focus all of its energy on Metroid Prime. Fans were already protesting the idea of a western studio handling Metroid, but the idea of turning Metroid into a first person action title caused an even greater uproar on the internet. It’s not surprising that Miyamoto suggested the first person direction: he had originally considered making Zelda 64 a first person action game after being inspired by Nintendo 64 games like “Turok: Dinosaur Hunter”, and he was never fully able to let go of the idea. Thankfully, Nintendo and Retro Studios proved the critics wrong, and “Metroid Prime” went on to become one of the most critically acclaimed games in Nintendo’s history.

Because Retro Studios was surrounded by controversy, Jeff Spangenberg knew he wouldn’t be able to sell his company for very much money. For a bargain price of $1 million, Nintendo acquired Retro Studios from Spangenberg, and the studio became a first party developer.

————————————————————————————————————————————————–

Hiroshi Yamauchi was looking forward to retiring, but he wanted to see the GameCube successfully launch before doing so. The retirement of Yamauchi would bring significant change to the corporate culture at Nintendo. Yamauchi told the press, “I’ve been thinking about it for more than two years now, but I think I want to retire before this summer. Nintendo isn’t going to work under one person anymore, though; it will be run under a group-leadership system.”

Before Hiroshi Yamauchi could retire, Nintendo of America was already going through a transformation of its own. John Taylor, managing director for Arcadia Investment Corp, explained, “Japan is probably becoming a bit more influential in the day-to-day management decisions at Nintendo of America. It seems that might have been going on for a little while.”

It’s natural for executives and employees to eventually depart from a company, but the departures from the late 1990s through the early 2000s left the biggest mark on the corporate culture of Nintendo of America.

Timeline for Nintendo of America Departures

  • In 1996, Tony Harmon retired from his position as Director of Development and Acquisitions at Nintendo of America.  He had worked for the company for over 7 years (1989 through 1996) where he oversaw western software development and product acquisitions for the company.
  • In 1996, Brian Ullrich, Product Development Manager at Nintendo of America, leaves the company.  According to his LinkedIn profile, he produced and directed multiple third party development projects for the SNES and N64, including FX Skiing and Body Harvest. Ullrich also designed and produced Major League Baseball Featuring Ken Griffey Jr. (Nintendo 64) in collaboration with Angel Studios.
  • In 1997, NOA’s Development Manager Jeff Hutt would leave the company after working there for over 9 years (1988 – November 1997).  Hutt had produced games such as Major League Baseball Featuring Ken Griffey Jr., TinStar, Super Play Action Football, Mario Paint, Super Soccer, NHL Stanley Cup, and Yoshi’s Cookie.  He negotiated vendor agreements for the development of original games featuring Nintendo properties.
  • In 1998, Russel Braun, NOA’s Director of Engineering, leaves the company after working there for 7 years (Dec. 1990 through Feb 1998).  He helped direct all developer support work outside of Japan, and he directed all Network products for Nintendo.
  • In 1999, Mike Schacter, V.P. of Sales & Marketing at NOA and Director/General Manager of Latin America, leaves the company.  He had worked with Nintendo of America for seven years.
  • In 1999, Erich Waas, producer at Nintendo of America, leaves the company.  He worked at NOA for nine and a half years where he was a producer on the Game Boy Camera, The New Tetris, Ken Griffey Jr’s Slugfest, Goldeneye 007, Blast Corps, Donkey Kong Country 2, and Donkey Kong Country 3.
  • In 2000, Nintendo of America Chairman Howard Lincoln retires from the company to become the owner of the Seattle Mariners.
  • In January 2002, Nintendo of America president Minoru Arakawa retires from the company.
  • In January 2002, Peter Main, NOA’s Vice President of Marketing and Sales, steps down from the company.
  • In February 2002, Left Field announces that it has ended its second party status with Nintendo.
  • In March 2002, Nintendo of America producer Ken Lobb steps down from his position and joins Microsoft.
  • In May 2002, Nintendo president Hiroshi Yamauchi retires from the company.
  • In 2002, Claude Comair, co-founder of Nintendo Software Technology, leaves the company to focus on being President of DigiPen.
  • On September 24th, 2002, Nintendo sells its stake in Rareware to Microsoft for $375 million, giving Microsoft 100% ownership.
  • In December 2003, Phil Borkowski, NOA’s Vice President of Manufacturing, retires from his position. He joined the company in 1987 and worked at NOA for almost 17 years.
  • In December 2003, Phil Rogers, NOA’s Director of Development and VP of Operations, retires from the company. He had joined Nintendo in June 1982 as Director of Development where he completed the acquisition of Nintendo’s Redmond Campus. He expanded the Operations Division to more than 500 employees.  Phil Rogers had worked at Nintendo of America for over 20 years.
  • In 2004, John Bauer, Executive Vice President of Administration for Nintendo of America Inc. leaves the company.  He joined the company in 1994 and worked there for 10 years.
  • In 2004, Silicon Knights ends its exclusive second party agreement with Nintendo of America.
  • In 2004, Factor 5 announces it is no longer making games exclusively for Nintendo.
  • In 2005, Henry Sterchi, producer at Nintendo of America, leaves the company to join Silicon Knights, and later Microsoft. Sterchi had worked at Nintendo of America for 11 years (1994 through 2005), producing games such as “Perfect Dark”, “Excitebike 64”, “Eternal Darkness”, “Star Wars: Shadows of the Empire”, “Diddy Kong Racing”, and “Super Punch-Out”.

Tony Harmon, a former Nintendo executive, was the person who persuaded Ken Lobb to join Nintendo of America in 1994. Around 2000, Lobb was involved as a producer/supervisor on projects such as “Star Wars Rogue Squadron 2: Rogue Leader” and “Metroid Prime”. Before the GameCube launched, Lobb says NOA employees were playing 16-player LAN Halo matches inside Nintendo of America’s Treehouse. At the time, Lobb was interested in online gaming, but Nintendo did not take an aggressive approach to the online space.

With Minoru Arakawa and Howard Lincoln retiring, Ken Lobb lost interest in staying at Nintendo of America and pursued new opportunities at Microsoft.

Ken Lobb says, “I have infinite respect for Mr. Arakawa and Mr. Lincoln and it was awesome to see them get the lifetime achievement awards. Extremely well deserved. If Mr. Arakawa had not left Nintendo, I would probably still be there today.” He explains, “When I was at Nintendo–especially when it was Howard [Lincoln], Peter [Main] and Mr. Arakawa–they were such great guys. I have infinite respect for them, they really understood the industry, maybe better than anyone else in the world. Having them leave changed what Nintendo was like, especially when Mr. Arakawa left. I didn’t even look for this job. It was offered to me by a friend.”

In a separate interview, Lobb explained that he had already seen the games that Nintendo was bringing to GameCube before they were revealed to the public. Based on what he saw, he wasn’t very confident in the GameCube’s future.

“I was spending, from the launch of Xbox and GameCube, 35-40 percent of my time on PS2, 45 percent on the Xbox, and barely touching my GameCube. And knowing what the portfolio was going to be on the GameCube this year and going way out, I decided that I didn’t want to do this again. I did it on the N64 and it was hard. Having seen that that was going to be that–or was going to be worse, I said forget it. I’m tired of second-class third-party ports and I’m tired of first-party games spread way out,” says Lobb.

On May 31st, 2002, Satoru Iwata replaced Hiroshi Yamauchi as the new president of Nintendo Co., Ltd. Due to Iwata’s knowledge of Nintendo’s software and hardware, Yamauchi believed Iwata was the right person for the job. However successful an executive might be elsewhere, Yamauchi didn’t trust bringing in anyone who had little understanding of Nintendo’s products. In a press conference, Hiroshi Yamauchi explained his decision to appoint Iwata as the new president of the company.

Yamauchi addressed the press: “The reason for Iwata-san’s selection comes down to his knowledge and understanding of Nintendo’s hardware and software. An executive, regardless of his vast successes, is fundamentally an executive, who doesn’t intimately understand our products.” Yamauchi continues, “Within our industry there are those who believe that they will succeed simply because of their successes in other ventures or their wealth, but that doesn’t guarantee success. Looking at their experiences since entering the gaming world, it’s apparent that our competitors have yielded far more failures than successes. It’s been said that Sony is the current winner in the gaming world. However, when considering their ‘victory’, you should remember that their success is only a very recent development. Though Sony is widely held to be the strongest in the market, their fortunes may change. Tomorrow, they could lose that strength, as reversals of fortune are part of this business. Taking into account the things I’ve encountered in my experiences as Nintendo president, I have come to the conclusion that it requires a special talent to manage a company in this industry. I selected Iwata-san based on that criteria. Over the long-term I don’t know whether Iwata-san will maintain Nintendo’s position or lead the company to even greater heights of success. At the very least, I believe him to be the best person for the job.”

Hiroshi Yamauchi had run the company for over five decades after inheriting the business founded by his great-grandfather Fusajiro Yamauchi. On May 31st, the company’s corporate structure would shift and regroup as Yamauchi relinquished his duties to a group of representatives.

With Satoru Iwata becoming the new president, the following executives would join his side:

  • Tatsumi Kimishima : Director and president, Nintendo of America.
  • Atsushi Asada : Chairman, formerly representative director, executive vice president.
  • Yoshihiro Mori : Senior managing director, general manager, corporate analysis and administrative division.
  • Shinji Hatano : Senior managing director and general manager, licensing division.
  • Genyo Takeda : Senior managing director and general manager, integrated research and development division.
  • Shigeru Miyamoto : Senior managing director and general manager, entertainment analysis and development division.
  • Masaharu Matsumoto : Managing director and general manager, finance and information systems division.
  • Nobuo Nagai : Managing director and general manager, manufacturing division.
  • Eiichi Suzuki : General manager, general affairs division.
  • Hiroshi Imanishi: Would retire from his position in the corporate communication division and become a corporate advisor.

When Rareware employee Mark Betteridge was asked how Nintendo had changed over the years, he said Nintendo was simply changing with the times.

“Times change, and Nintendo as a company has changed, even when we were with them. We were with Mr. Minoru Arakawa and Howard Lincoln primarily, but we knew Satoru Iwata quite well, and Shigeru Miyamoto-san, obviously. They used to come and visit, and they were great people. The unusual thing, I suppose, at Nintendo is that the execs [that are in place] now were originally developers, which is quite unique. So it’s not so much that I miss the original people; I just miss the people and the personal contact. There were some happy times there,” says Betteridge.

—————————————————————————————————————————————————————-

Rare Ltd., also known as Rareware, was one of the first developers to receive development kits for Nintendo’s Dolphin. Rare’s chairman and technical director Chris Stamper spoke about IBM’s “Gekko” processor, which would help power the GameCube’s hardware. “Designing games is an ever-changing process, and this chip, with its speed and seamless data flow, will allow us to make even more amazing games,” explained Stamper. “Consumers will love the end result with the upcoming system.”

Asked how he felt about Rareware’s E3 2000 lineup, Shigeru Miyamoto said that Rare had been very influential on the industry, and they had encouraged Nintendo to experiment with more genres.

“We are very thankful that Rare is creating such great games. Rare has done a lot for the gaming industry. All of Rare’s games are 3D, but they all have very different gameplay. They are encouraging us to create a different genre of games that departs from 3D adventure gaming,” said Miyamoto.

During that same E3 2000 event, Miyamoto was asked in a separate interview about his thoughts on Rare’s Dinosaur Planet. “It looks really nice, doesn’t it? I wish they would use Star Fox characters so that they could use the title Star Fox Adventures. Maybe I should call the team and talk about it [laughs],” said Miyamoto. One year later at E3 2001, “Dinosaur Planet” would reappear as “Star Fox Adventures”.

Rare Ltd would have at least seven projects in development for Nintendo’s GameCube, but only “Star Fox Adventures” would actually release on the console:

  • Quest
  • Kameo: Elements of Power
  • Donkey Kong Racing
  • Perfect Dark Zero
  • Conker’s Other Bad Day
  • Star Fox Adventures
  • Grabbed by the Ghoulies

In February 2000, Rareware registered the domain www.velvetdark.com, which sparked discussion about whether Rare was working on a Perfect Dark sequel. Six months later at Spaceworld 2000, Rare showed off a GameCube tech demo for Perfect Dark featuring a 3D model of Joanna Dark. Fast forward to Nintendo’s E3 2001 conference, where reporters asked Ken Lobb about the possibility of a Perfect Dark sequel. Lobb replied, “It’s out there. We’re making it. No one will be disappointed.”

Around 2000, Rare was also busy developing a project called “Quest”, a massively multiplayer online RPG for the Nintendo GameCube. One year later, “Quest” would change direction and become an MMO space shooter with Mark Edmonds programming, Duncan Botwood designing, and Chris Seavor handling art direction. Nintendo’s online strategy for the GameCube was still unclear at the time, but Rare continued developing the project anyway.

At E3 2001, three GameCube games developed by Rare were shown to the press: “Kameo: Elements of Power”, “Star Fox Adventures: Dinosaur Planet” (later renamed to just “Star Fox Adventures”), and “Donkey Kong Racing”. I asked Lee Musgrave, lead designer of Donkey Kong Racing, why we only saw a CGI video but never any actual gameplay. Musgrave told me that “Donkey Kong Racing” had reached an early playable prototype stage, and he explained how the game would have worked.

“Ha! – yes, I made that [E3 2001 CGI video]! . . . Donkey Kong Racing was obviously pretty heavily tied to Nintendo as a franchise, and as Rare approached the finalization of a buyout deal with Microsoft it was clear that the game had no future, at least with the apes as characters,” said Musgrave.

“[Donkey Kong Racing] was a pure racing game, the underlying software mechanics were actually based on car physics, but it also incorporated the idea of riders jumping between different animals mid-race, to always be riding the ones that were bigger or faster . . . we had some awesome gameplay in place, and it was lots of fun. We even had a multiplayer version working, and when you fell off, you had to tap-tap-tap (HyperSports style) to run on foot and catch up with an animal,” said Musgrave.

In December 2001, Rare Ltd. sent out Christmas cards featuring a Christmas tree with presents wrapped in the shapes of the GameCube, PlayStation 2, and Xbox. The card’s message read “…and surprises under every tree”. The card sparked rumors across the internet, with insiders claiming that the company had Xbox and PlayStation 2 development kits in its possession. Tim and Chris Stamper, the founders of Rare, were interested in selling their company to a third party publisher so they could publish on multiple platforms. The U.K. developer approached major U.S. publishers such as Activision, Electronic Arts, and Disney about an acquisition.

Nintendo owned 50% of Rare Ltd and had already extended its option to acquire the rest of the company by one year. Unfortunately, the aggressive bidding war between Activision and Microsoft reached a point where Nintendo no longer wanted to be involved. The deal with Activision eventually fell through, and Microsoft looked to swoop in and grab the company for itself.

Microsoft Game Studios’ Ed Fries explained to Eurogamer, “If they [Nintendo] didn’t exercise that option then Rare had the option to find a buyer for Nintendo’s half. Nintendo had already extended the option by one year, but it looked like they weren’t going to acquire the other half of Rare, so the Rare guys started looking around to see if anyone else might be interested. We were a logical choice for them to call.”

Although Microsoft was the highest bidder to purchase Rare, Nintendo still had the priority option to purchase the company. To prevent Nintendo from buying Rare, Microsoft raised their bid as high as possible so Nintendo wouldn’t be able to match it.

“Still at this point, Nintendo had the priority option to buy the other half of Rare at the price we were offering. So, there’s a problem; if we drive a hard bargain and put in a low price for Rare, Nintendo would have the chance to buy at that low price and probably would. So, the price was high,” said Fries.

On September 24th, 2002, Microsoft paid Nintendo $375 million to take 100% ownership of the company. Rare would now become a first party developer for Microsoft, and games like “Donkey Kong Racing” ended up in limbo. Martin Hollis, the producer of Goldeneye, explained that it was Hiroshi Yamauchi who shrewdly declined to match the asking price.

“In the end I understand Mr Yamauchi [Nintendo's President] declined to offer more than a fraction of the value Rare was asking; shrewdly, it would seem. Meanwhile Microsoft had a strategic reason to buy, two reasons really: firstly so Nintendo would not have Rare’s games, and secondly so that Microsoft would,” said Hollis.

Nintendo’s George Harrison explained to Electronic Gaming Monthly that Rareware hurt GameCube’s momentum by failing to deliver any games within the launch window.  This was compounded by the fact that Rare Ltd. was one of the first studios to receive development kits.

“…when we launched the GameCube, we put the concentration of our development kits in the hands of only a few people — internally, of course, with Mr. Miyamoto’s EAD team, but also with Rare. And Rare didn’t deliver a single game for us at the launch, when their history had been to make some really great games for us in the past. That hurt us, and it led us into this gap of titles, starting after the launch and lasting for about seven or nine months until Mario Sunshine came out. Consumers want consistency. They would never buy a DVD player that had only one or two good movies a year; they want consistency and variety, and we’re trying hard to make sure that’s not only resolved for the GameCube, but as we go into the next system,” says Harrison.

Harrison’s comments weren’t the first time Rareware was blamed for hurting the GameCube’s momentum. When Nintendo of America’s Jasmine Ramya was asked why Nintendo was no longer working with Rare, this was the answer given:

“Although Nintendo doesn’t comment on rumors or speculation by the media, we can tell you that Nintendo has made the decision not to request Rare to make any further exclusive games for the Nintendo GameCube. Although we’re proud of our joint efforts with Rare over the years and have enjoyed our relationship with them, in fiscal year 2001, Rare accounted for only 9.5% of total Nintendo software revenue worldwide. In fiscal year 2002, that number declined to 1.5%. Therefore, in evaluating our investments in developers, as well as the financial benefits to Nintendo over the years, we’ve decided it’s in Nintendo’s best interests to focus on diversifying our portfolio of developers and projects,” said Ramya.

Both responses seemed unusual for a company that stresses quality over quantity. At the same time, Microsoft had jacked up the bidding price so high that Nintendo would be forced to decline the offer. Employees at Rareware seemed happy with the buyout since Microsoft’s ownership would mean financial stability for the studio. But that financial stability would come at the cost of killing creativity and cancelling projects.

A Rare employee told Gamekult, “Several of us just got fed up, so we left. Beating down our creativity was definitely part of it, but it’s more than that. It’s more like having a strict parent telling you don’t do this and don’t do that. It’s just the environment there. Guess we should have been careful what we wished for. I guess we saw the grass as being greener with Microsoft coming in. Nintendo had always been strict with our compliance to their ideas or standards as they would call it. We figured things would be better after the deal went through,” the employee continues. “Microsoft is much stricter, the my way or the highway type. Nintendo was more of the this is how you should do it. You don’t have to, but we highly recommend you do. Highly recommend.”

In 2012, I spoke to former Rareware employee Don Murphy about the Microsoft acquisition. He had worked on games such as “Conker’s Bad Fur Day”, “Killer Instinct”, and “Perfect Dark Zero”. Murphy says Microsoft bought the company because the Xbox only had the hardcore shooter market, and Rare would help the brand reach a broader, family-friendly market.

Murphy says, “At first it seemed that they wouldn’t interfere much, but it was soon clear that they were more interested in using Rare to help aim at a younger market. This stifled a lot of creativity, Rare was renowned for their diverse portfolio, so to not be involved in making Mature games was a real blow. When the Stampers left it seemed that Microsoft was losing faith in Rare, it was hard to take when all around were incredibly talented people, with massive amounts of experience. There [were] numerous projects that were put forward that I believe would have been huge hits, but MS rejected them one after the other. I remember seeing a couple of prototypes that Chris Seavor had designed and was working on, that looked amazing, but alas they got shelved. It seemed that MS didn’t want to take the risk in Rare doing anything outside the younger demographic, they quickly forgot the company’s heritage. We started to lose a lot of great talent then, people were losing job satisfaction, so they just left.”

Murphy was not the only person who believed the atmosphere at Rare had severely changed after the Microsoft acquisition. Phil Tossell, another former Rare developer, had worked on games such as “Diddy Kong Racing” and “Dinosaur Planet”. Tossell told Eurogamer, “For me personally, the atmosphere became much more stifling and a lot more stressful. There was an overall feeling that you weren’t really in control of what you were doing and that you weren’t really trusted either.”

When I asked Lee Musgrave in 2012 why he left Rare, he said one reason was that less emphasis was being placed on the attention to detail that makes games great, and more emphasis on just getting things ‘done’.

“I’d been there for 17 years by the time I left and by the end, the Rare I joined had gone. I don’t really attribute that to anything that Microsoft did, but the simple migration to becoming part of a mammoth organisation inevitably changes the atmosphere of a hitherto insular place like Rare. Some of the people embraced the corporate culture whilst others, like me, felt that there was not enough emphasis being placed on real attention to detail or iteration of ideas/features in order to make them great, rather than just being ‘done’ and able to be ticked from a list,” said Musgrave.

The biggest internet myth is that Nintendo wasn’t interested in buying the other 50% of Rare.  Nintendo wanted to buy the other half of Rare, and that’s why they asked for a one year extension to look over all of their options. The problem is Rare’s founders wanted a giant bidding war between Microsoft, Nintendo, and Activision to boost the value of their company.  When comparing financial investments, the price of Retro Studios ($1 million) was a drop in the bucket compared to the $300 million that companies were bidding to buy Rare Ltd.

NOTE: In the pictures below, we only included RARE staff who were directly involved with game development.  We didn’t include “Special Thanks”, “Voice Actors”, “Nintendo of America Staff”, or “Rare Bug testers/Quality Assurance”.  Our sources for this information come from Linkedin profiles, Mobygames, Raregamer.net, and personal websites of employees.

————————————————————————————————————————

The Sega Dreamcast’s online capabilities were gaining popularity with software such as Phantasy Star Online, Sega Sports, and Quake III Arena.  Sega even launched the internet service “SegaNet” in 2000 which offered a $200 rebate with a two-year contract to encourage sales of the Dreamcast. Even though Sega heavily promoted their online capabilities, Sony promised that PlayStation 2 would offer an online network similar to the film, “The Matrix”.

Ken Kutaragi, the creator of the PlayStation, tells Newsweek, “You can communicate to a new cybercity. This will be the ideal home server. Did you see the movie ‘The Matrix’? Same interface. Same concept. Starting from next year, you can jack into ‘The Matrix’!”

Sony wasn’t the only company preparing an online network at the time.  Nintendo was working on network capabilities for their next-generation console, and IGN’s sources at Nintendo of America suggested that online was at the top of their priority list.  The information claimed that Nintendo had entered into an agreement with Nexus to develop Dolphin’s networking and modem capabilities. There were also rumblings of Nintendo being involved with Netscape, Alps, and some modem makers.  Nintendo planned for their next generation console to be used for network gaming, internet surfing, and checking your email.

Shigeru Miyamoto told GameSpot, “There’s got to be something Dolphin has with the Internet, because from now on we can’t create entertainment without thinking about network communication. At the same time, we are an entertainment company so we have to take into consideration the cost associated with network games, and the ages of the users, who are actually going to make use of it. If we consider these two points right now, I have to tell you that there is not a big market right now for Dolphin to involve a significant Internet business. Nintendo, as an entertainment company has a responsibility to parents and children so that the parents can always feel secure to provide their children with Nintendo machines, hardware and software. So because of that I don’t think network capabilities will be the core of the Dolphin project.”

Six months later, in February 2000, Shigeru Miyamoto said he understood how enthusiastic gamers were about online gaming, but Nintendo was only interested in online gaming if there was a new way to approach it.

“I’m very interested in online gaming, and I fully understand why people are so enthusiastic about it,” said Miyamoto. “But, you know what Nintendo is about, and has always been about, is NOT doing the same as every other company. So, if it ever came to the stage where we were talking about online gaming, it would be because we had a new way to approach the idea. It wouldn’t just be because everyone else is doing it.”

Later that year, Nintendo’s Hiroshi Yamauchi confirmed internet plans for the Dolphin.

“[The Dolphin] will have a function to access the Internet,” said Yamauchi. “We are entering the market as a latecomer so the console will have to outperform Sony Corp.’s PlayStation2. We are planning to introduce an Internet business next March or April. The first step will be online sales of a brand new type of Pokemon cards.”

In August 2000, Genyo Takeda, Nintendo’s general manager of integrated research and development, announced that Nintendo had signed a deal to use Conexant’s modem technology with the Dolphin. Thanks to Conexant’s technology, Dolphin users could play online multiplayer against each other through a broadband internet connection.

“Dolphin will combine Nintendo’s world-class design and beloved franchise characters with the expansion of the world of gaming by an online network,” said Genyo Takeda. ”Conexant’s modem technology will connect Dolphin users to both the Internet and other gamers, creating a rich and dynamic entertainment experience. We are also excited about working with Conexant to bring broadband connectivity to our powerful new gaming platform.”

Matt Rhodes, senior vice president for Conexant’s Personal Computing Division, released a statement, ”Conexant is pleased to be contributing our modem technology to Nintendo’s exciting new video game console. Conexant helped to re-shape personal computing in the 1990s with our low-cost, dial-up modem technology for Internet access and online connectivity, and now we are helping industry-leading consumer companies like Nintendo do the same for video gaming.”

Before Dolphin [GameCube] could even launch, Hiroshi Yamauchi’s attitude toward online gaming grew cold. The company became skeptical of whether online gaming would be profitable for the GameCube business.

“The Internet games available today are for hard-core gamers. I don’t believe the general public is going to be very interested in them,” said Hiroshi Yamauchi. “And I doubt that Net games will turn out to be profitable. There is only interest in these games because NTT DoCoMo Inc has profited from i-mode. I am not sure if content providers have made any money. Unless the business proves profitable, Nintendo will not be involved in Internet games.”

Soon, the entire company shared Yamauchi’s skepticism, and the belief at Nintendo was that online gaming was just hype and nothing more. Shigeru Miyamoto, someone who was once excited about Dolphin’s online capabilities, said he didn’t think most developers had any original ideas of how to push online gaming into new territory.

“We’re always aware of the technologies available, like online gaming. But we’re very skeptical about the business side of online gaming. Many people who have said in the past that online gaming is the wave of the future have to sit and face the reality now about how to turn this into a viable business,” said Shigeru Miyamoto. “I’m interested in the future of online gaming. But looking at the situation honestly, I think a lot of the talk is just hype, and isn’t backed up by really new ideas on how to use the technology. However, we’re always making preparations for this business. Actually, I am more interested in the broader concept of communication in games, of which the online play aspect is just a part.”

Satoru Iwata told an interviewer that there were several barriers to online gaming. One of those barriers was that narrowband players needed a telephone line, and since most GameCube owners were children, they didn’t have credit cards to subscribe to a network and pay monthly fees. ”We believe a game that might sell 1 million copies normally would likely sell less than 200,000 copies if it was network based,” said Iwata.

Although Nintendo’s excitement for online gaming was cooling down, GameSpy announced middleware tools for game developers to enable a low-cost turnkey online gaming solution. The tools would help GameCube developers reduce the cost and time needed to add online functionality to their games. Some of the functionality included in-game player matchmaking, text chat, buddy lists, online competitions, high score ladders, online data storage, game statistics reporting and data transfers. For $995, GameCube developers could purchase a GameSpy Developer Toolbox which provided SDK source code, documentation, and sample applications.

Another announcement involved America Online, which struck a deal with Nintendo of America to become the official preferred Internet Service Provider for GameCube. The agreement provided GameCube developers with licensed AOL connectivity software to enable their games to connect online. For AOL’s part of the agreement, Nintendo products would be highlighted on AOL and AOL Time Warner websites, and AOL demo disks would be bundled with GameCube consoles. The company reaffirmed its stance on online gaming: it wasn’t developing any online GameCube titles. “To be clear, this does not indicate the unveiling of a new online gaming approach from Nintendo. Nor does it signify that we have changed our position on the current business viability in the online console gaming field,” said Nintendo.

With Microsoft and Sony both pushing online networks, Nintendo’s GameCube would be the only console without an aggressive online strategy. Satoru Iwata told a Japanese publication that Sony’s offline games were selling much better than their online titles, and that this was proof that consumers had no interest in online connections at the moment.

“SCEI’s online golf game didn’t sell well, while its off-line golf game sold one million copies. This was also proof that customers do not want online games,” says Satoru Iwata. “Online technology has its own interesting features, so I don’t rule out the possibility of making use of it for games. But, at the moment, most customers do not wish to pay the extra money for connection to the Internet, and for some customers, connection procedures to the Internet are still not easy.”

Iwata further explains, “Some time ago, game companies as well as the media were predicting that online games would take off in the future. But game companies now find it difficult to make online game businesses successful, and their enthusiasm for them is cooling. During the year-end shopping season last year, none of the online games succeeded. The failure of SCEI’s golf game was a good example. All the games that sold well were off-line games.”

In an E3 2004 interview with the Herald Sun, Satoru Iwata was asked about the success of Xbox Live. Iwata shrugged it off and said Microsoft’s number of subscribers was too low to call it a success.

“One million Xbox Live subscribers? We don’t say that is successful. If it were a console, that would be a total failure. However, I believe networked platforms will become a very important part of gaming. When the time comes, that will be a very important way of Nintendo doing business,” said Satoru Iwata.

Each time Satoru Iwata did an interview with the press, the subject of online gaming would rear its head.  As you can imagine, Iwata became increasingly annoyed with answering the same question over and over. He slammed critics who claimed that Nintendo couldn’t survive as a business unless they embraced online gaming as the future.

“All the talk in the industry regarding online gaming has been misleading. Network swindlers have made it seem like companies can’t survive in this business without network compatibility,” said Iwata. “That’s the same type of rhetoric people have been saying about the newspaper business, that the paper-based periodical business will be dead in three years. In reality, the number of users willing to pay a monthly fee for online games is small. Many of the American companies who were focusing almost exclusively on network games last year now view network capabilities as an advertising tool. The fact of the matter is, network games can’t provide a stable source of profit for a company of Nintendo’s scale.”

Instead of focusing on online connectivity, Nintendo wanted to differentiate themselves from their competitors by emphasizing connectivity between the GameCube and the Game Boy Advance. Miyamoto used Animal Crossing as an example of a game that pushes communication in a video game without relying on online play. Unfortunately, the idea of GCN/GBA connectivity never caught on, and it slowly faded away as time went on.

“You can see that communication between players increases enjoyment of the game,” he said. “But I think communication is a kind of fun that’s not something the developer creates, but draws out. Online is not the only type of communication, but that’s the one everyone has focused on - and I think that’s sad,” Shigeru Miyamoto told CNN in an interview. “Online gaming, like using violence in games, is just one method of introducing gaming to the public. And there are other methods that can be used.” In a separate interview with the LA Times, Miyamoto didn’t understand why so many game designers were focusing so much on online.  He believed there were still many types of games that gamers preferred playing by themselves.

Would Nintendo’s conservative approach to online gaming with the GameCube place the company at a disadvantage with future next-generation consoles? If Nintendo took too long to create a sophisticated online network, would it become too difficult to catch up with Microsoft and Sony in this area? This was a question that Electronic Gaming Monthly asked Nintendo of America’s George Harrison in 2004.

“If we look at the situation as it stands today, we’ve got about 30 million systems sold between the PS2, Xbox and GameCube, and about a million and a half people have actually bought an online service — about a million for Sony and half a million for Xbox. So that’s about five percent of the hardware install base that spent the money to get involved. Most of those people have yet to spend any money on a monthly or annual basis for a subscription,” said Harrison.

Fast forward to 2012, when Nintendo was preparing to launch its next-gen console, the Wii U. Because Sony and Microsoft had been building their online networks for years, Iwata said it wouldn’t be easy to catch up to them.

“I think that what we see in terms of online gaming networks on existing dedicated gaming platforms is not particularly well suited to the approach Nintendo has taken. Therefore, I can’t sit here and say to you that we can very quickly overcome or catch up to other companies, which began to work in the online field from many years ago and have been building these online networks on other platforms, and I don’t think that would be a smart strategy, either.”

The conservative stance on online gaming made sense in 2001 because broadband connectivity wasn’t nearly as widespread as it is today. But while Nintendo spent years playing catch-up with online, their competitors had been using online services to make consumers more loyal to their brands. After twelve years, Nintendo would finally launch the Nintendo Network, but did they wait too long? Would most gamers switch to the Nintendo Network after they’d spent over a decade making friends, investing money, and collecting trophies and achievements on Xbox Live and PlayStation Network? That’s the million dollar question whenever we discuss the strategy of Nintendo’s online business.

———————————————————————————————————————————————————————

In Japan, Bandai conducted a survey asking children 12 and under what their favorite activities were. The survey found that the popularity of video games among Japanese children had slipped from fifth place in 1995 to ninth place in 2000. Nintendo was preparing to launch the GameCube — a console designed with small children in mind — one year after research suggested that Japanese children were losing interest in games. Over at Gamasutra, InterOne Inc’s John Ricciardi talked about the gaming scene in early 2000s Japan. He described a huge divide in how Japanese developers were paid compared to developers in the West. Based on his own observations, most Japanese people who owned a GameCube were small children.

“Xbox gamers in Japan are mostly super hardcore types, and GameCube owners are mostly kids–I hate to play into the stereotype, but it’s really true,” says Ricciardi. ”Another point that I’m sure many of my friends in development out here would love for me to point out: Japanese developers get paid like garbage compared to Western developers. Programmers and artists here, even high-level ones, make a fraction of what their counterparts make in the West. This can’t possibly be good for morale, but at the same time it’s kind of normal for Japan; employees here are expected to be loyal to their company and treat it like a second home, so most people don’t complain about these kinds of issues as much as they probably should.”

Japan’s economy was greatly impacting the game industry during the early 2000s, and the used game market was only making the situation worse. Speaking with the Chicago Tribune, KOEI president Kiyoshi Komatsu voiced concerns about the gaming market in Japan.

“First of all, the Japanese economy is not doing well. The shrinking of the game market is a natural extension of the bad economy,” said Kiyoshi Komatsu, president of KOEI. ”…the used-game market in Japan is quite a problem. Roughly 30 to 40 percent of the market is used games. Obviously, this situation with used games affects the income of softwaremakers like us.”

Square Enix’s president, Yoichi Wada, said one reason Japanese publishers struggled in the early 2000s is that they didn’t properly promote their games. ”Japanese game publishers have been bad at marketing, ourselves included,” said Wada.

The CESA reported that 37 percent (34.4 million) of the Japanese population actively played games in 2003, up from 25.6 percent (23.6 million) in 2002. But here’s where things get problematic. Although more people in Japan claimed to be playing video games than before, they weren’t spending as much money on them as they had in the past. According to the CESA’s report, the Japanese market declined to 446.2 billion yen ($4.11 billion) in 2003, an 11 percent decrease from 2002, and only 60 percent of its peak in 1997. Hardware sales in Japan declined 16.7 percent in 2003 while software sales declined 8.2 percent that year.

“Japan’s marketplace is the only one shrinking right now,” said Yoshihiro Maruyama, the general manager of Xbox’s Japanese division. Maruyama told 1UP.com, “I think software makers are feeling the heat, too. Publishers, distributors, and users are all unhappy, and I want that to change.”

In the United States, the NPD Group told CNN that sales for the video game industry had increased 8% in 2004. Although the industry’s U.S. sales remained strong, Nintendo’s president Satoru Iwata became increasingly worried about the state of the Japanese market.

“We have learned that people get tired of any entertainment form,” Satoru Iwata told CNN. “In Japan, the gaming market is shrinking. There is still room to expand in the U.S. and Europe. But we should not become complacent with that growth.”

Satoru Iwata attributed the declining software sales in Japan to gamers no longer wanting long or difficult games.

“Also behind the soft sales is a change in consumer trends. Consumers today apparently don’t want to sit in front of the television to play games for hours and hours,” Satoru Iwata explained. Speaking to the Register, Iwata blamed declining game sales on “overly complex titles that are too tough for newcomers and casual gamers”. He told the Register that long, difficult games are bad for business because “gamers can spend months playing them, and while they’re doing so, they’re not buying other titles. Those who find they can’t win get so fed up with the experience, they don’t feel inclined to buy an alternative title.”

The Register explains, “Nintendo’s message to the industry seems to be: forget about discs jam packed with ever more complex levels and involving gameplay, and give the punters something they can complete quickly – and get out to buy more of the stuff. Iwata wants Nintendo to focus on games that have a broader appeal.”

Nintendo of America’s George Harrison told CNN that GameCube’s software sales were lower than expected because games like “Super Mario Sunshine” are too difficult for today’s gamers. In response, Nintendo of Japan and Miyamoto decided to make games less challenging in order to sell software to larger audiences.

“Nintendo’s chief gaming architect Shigeru Miyamoto agreed with criticism that the Mario game was too hard. And, in a decision that might anger the hardcore crowd, the word has since come from up high to make games less challenging.” says CNN.

 

——————————————————————————————————————————————————————————————–

In March 2003, U.K. retailer chain Dixons decided to remove GameCube consoles, accessories, and games from its shelves. Argos, another U.K. retailer, decided to cut the price of the GameCube to £78.99, more than £50 cheaper than Nintendo’s standard retail price. Argos also raised the possibility of dropping the GameCube altogether if sales didn’t improve. ”Sales have been slow so we wanted to speed these up by selling it at a good price,” according to Argos marketing director Paul Geddes. “In terms of what happens after that, we haven’t yet taken a decision on the format.”

The GameCube’s sales were also hurting in Japan. Japan was mired in deflation that year, with stocks trading at 20-year lows and real estate prices down 80% from their value a few years earlier. For the fourth straight year, consumer prices had fallen, wages dropped 2.1%, and unemployment was on the rise. Nintendo’s President Satoru Iwata told Bloomberg that GameCube was too expensive for most Japanese families. “Japanese are waiting for the price to come down. Spending more than 20,000 yen ($170) is a heavy outlay for a Japanese family today,” said Iwata.

When asked about why the console had become irrelevant in Australia, Satoru Iwata replied, “First of all, I am most sorry that the Gamecube’s performance is bad in Australia among any area in the world. One of the biggest things I feel unfortunate about is that I have not been to Australia. I am looking forward to learning more about Australia.”

By August 2003, Nintendo’s full-year profits fell by 38% due to GameCube’s low sales. The company would only sell an estimated 5.6 million GameCubes in the fiscal year, 44 percent short of Satoru Iwata’s 10 million unit goal. Iwata admitted to CNN that his rivals’ vision for the future of the industry was bringing a “sense of crisis on us”, but he remained optimistic about the future of his company. He conceded that 2003 was a rough year: GameCube’s sales were not living up to expectations, and third-party publishers were losing their patience.

“But then, that’s what I enjoy,” said Iwata, who was 43 years old that year.  ”We need to feel a sense of crisis to bring out the best in us.”

Satoru Iwata admitted that GameCube’s launch sales were good, but they failed to maintain the momentum. ”When we launched GameCube, the initial sales were good, and all the hardware we manufactured at that time were sold through. However, after this period, we could not provide the market with strong software titles in a timely fashion. As a result we could not leverage the initial launch time momentum, and sales of GameCube slowed down,” says Iwata.

Iwata told investors that Nintendo would sell 50 million GameCube consoles worldwide by March 2005. GameCube only sold 21.7 million units worldwide by the end of 2006.  The console didn’t even sell half of what the company had originally forecasted.

“One of the things we did with the GameCube was we had these big gaps in time and that really tested people’s patience,” Nintendo’s Perrin Kaplan admitted to Engadget a few years later.  Nintendo of America’s Beth Llewelyn echoed Kaplan’s comments. “We had to compete against games like Grand Theft Auto which probably stole some of the thunder away from our bigger titles,” said Llewelyn.

Sega found success with family friendly GameCube titles like “Sonic Adventure 2: Battle” and “Super Monkey Ball” which sold better than most of their games on PS2 and Xbox. Unfortunately, not everything was good news for Sega and the GameCube. The GameCube versions of every Sega Sports title sold significantly worse than the PS2 and Xbox versions. On January 23rd, 2003, Sega Sports representatives confirmed to the press that its upcoming baseball game, World Series Baseball 2K3, wouldn’t be coming to the Nintendo GameCube that year. The Xbox and PlayStation 2 versions would remain on track to ship that upcoming March. A few months later, Sega decided to take it one step further and pull the plug on the GameCube versions of all Sega Sports titles.

“Accordingly, SEGA will focus its sports development resources on delivering its SEGA Sports(tm) games to PlayStation®2 and Xbox(tm) on time at the start of each season.” read a Sega press release.

Here is a chart provided by IGN.com for Sega’s sales all the way through August 2002.

Acclaim, a publisher who was once part of the Ultra 64 “Dream Team” in 1995, had a great relationship with Nintendo. By 2003, the relationship soured, and Acclaim dropped its support for the Nintendo GameCube.  With retailers dropping support for both GameCube hardware and software, Acclaim found itself under pressure to drop the console as well.

An Acclaim representative explained, “We’re getting increasing negativity at retail just in pushing GameCube products. The reports we get from retailers are that the machine’s not selling, they’re not going to stock it, and they’re only going to stock the top three to five games, and they’re all going to be licensed or first party. This makes it increasingly difficult for us to make it viable to do any GameCube games. The main problem is, just looking across Europe, all the main territories that we deal with, the hardware sales and software sales just don’t make it viable for us to keep going with the ‘Cube. There are some titles that will be published between now and Christmas on the ‘Cube – XGRA and Urban Freestyle Soccer will still be released. But as I say, it’s almost that the decision’s been made for us by the market.”

Eidos announced to the press that they were pulling support for the GameCube, and told other publishers they would be smart to follow their lead. ”The GameCube is a declining business. If other companies follow us, [Nintendo] will have a hard battle to fight,” said Eidos Chief Executive Mike McGarvey. He told the press that Eidos doesn’t normally invest heavily in Nintendo systems, so this decision wouldn’t have any impact on their business.

Third parties took that advice, and one by one publishers began dropping GameCube support. At a Dutch event that same year (2003), Atari announced that Driver 3, Mission Impossible: Operation Surma, and Terminator 3: Rise of the Machines had all been cancelled on the GameCube, and that it planned to axe other GameCube titles as well.

THQ followed Atari’s lead by cancelling the GameCube version of “The Suffering” and told the press that the game would still come to PlayStation 2 and Xbox. THQ explained to IGN, “We want to concentrate on the leading platforms in the marketplace when it comes to launching original product. We are still publishing titles like Blitz and Hitz for the Nintendo GameCube, but for The Suffering we are concentrating on the Sony PlayStation 2 and Microsoft Xbox.”

Jack Symons, director of marketing at Bam! Entertainment said, “I was surprised to hear that THQ and Activision have pulled back, particularly THQ. Some of their licenses like Rugrats and what not skew to the younger audience, which in theory, is more prevalent on GameCube.”

Midway also revealed that it had no future plans for the GameCube. “Mortal Kombat: Deception is not under development for GameCube,” a Midway representative said. “Midway is still selectively considering software for GameCube. However, there are no titles in development for the console at this time.”

The chart below is the combined game sales of EA, Activision, Ubisoft, THQ, and Midway among all three consoles.

Before the Xbox 360 launched, the PlayStation 2 and the original Xbox made up 90% of combined software sales; the GameCube accounted for the other 10%.

Nintendo believed that their licensing fees were similar to what their competitors offered, but most publishers didn’t see it that way. With publishers abandoning the GameCube, Nintendo rushed to take action by drastically lowering their royalty rate, and promised to help publishers market their third party titles. George Harrison told CVG, ”Before, our royalty rate was a little more aggressive, so to the third party publisher it was a little less attractive to make games for the GameCube.”

Speaking with Reuters, Harrison also admitted that GameCube sales were hurt by not having Grand Theft Auto on their console. “The biggest games of the year last year were games like GTA and they came from an independent publisher,” says Harrison. “We need to make sure that we have good relationships with all the independent publishers, because you never know where the next big hit game is going to come from.”

You could always sense disappointment from Shigeru Miyamoto whenever interviewers asked him questions about the problems facing GameCube. I’m not saying Mr. Miyamoto lost sleep over GameCube, but I don’t think he expected such a lukewarm reaction to all of the time, money, and hard work invested in that machine. In a 2001 Nintendo Dream Magazine interview, Iwata said Miyamoto was so dedicated to GameCube that he refused to take even a day away from work. Fast forward to 2004, the end of the GameCube era, and Miyamoto was speaking about the stress and pressure he was under.

“The stress sometimes really takes a toll on me physically, to the point even where I have developed some heart problems in the past. Apart from that and spiritually speaking, though, I always feel like I’ve been trying to fulfill myself and make myself happy here,” Miyamoto explained to The Next Level in 2004. At this time, the GameCube was reaching the end of its lifespan, and the company was making the transition to the Wii.

Five years later, Miyamoto expressed his thoughts on the GameCube era, calling it a very sad time in which the general public ignored Nintendo’s software.

“This is a job where you have a plan and you polish it endlessly while getting help from others. If Nintendo’s games fail to stand out as games that aren’t made that way proliferate, then it shows that the creation process is for nothing, which made me very sad,” says Miyamoto. “That was especially obvious during the GameCube era; Nintendo titles were hardly even discussed by the [non-gaming] general public back then.”

———————————————————————————————————————————————————-

Overall, the GameCube business was profitable thanks to the sales of first party software. On hardware specifically, the GameCube was not very profitable, partially because Nintendo was regularly forced to slash the price to stay competitive. Unlike the Wii, which made a profit per unit sold on day one, the GameCube lost money on each unit sold. But compared to the Xbox’s losses, the GameCube’s were minor.

On August 31st, 2001, one month before GameCube’s launch, Peter Main told an interviewer, “We expect to incur a small loss on the GameCube hardware initially, and you’re right that it hasn’t been our habit in the past but we expect it to turn okay early next year.” That same year, Merrill Lynch said Nintendo would lose 2350 yen (£14) on every GameCube sold, but it was a small amount compared to how much money other consoles were losing.

Nintendo spokesman Hiroshi Imanishi told Reuters in January 2002 that GameCube’s costs of production and distribution per unit were higher than the machine’s sales price, and that the goal was to bring that part of the business back into the black. “We are discussing making GameCubes in China,” said Imanishi.

By April 2002, CNN reported that Nintendo was losing about $20 per GameCube sold while Microsoft was losing $100 for every Xbox sold.

Only six months after the North American launch, the GameCube would receive a price cut to $150.  On May 20th, 2002, Nintendo’s George Harrison told USA Today, “At about $149, Nintendo will roughly break even on sales of each GameCube,” Harrison said. “The company has kept manufacturing costs lower by not offering an installed DVD player on the GameCube like its rival consoles.”

One year later, the sales of GameCube would reach a new low, and they were cutting into Nintendo’s profitability. In November 2003, Nintendo reported a $26 million loss in the first half of its fiscal year due to weak sales of its GameCube console. Sales were estimated at roughly $2 billion, and Nintendo blamed the strengthening of the yen for hurting overseas income. Investors became worried that Nintendo was having trouble getting rid of its excessive inventory of GameCube consoles before Christmas.

Eventually the console’s price reached a new low of $99, making it more of an impulse purchase for consumers. In 2004, Nintendo’s Perrin Kaplan confirmed to IGN that GameCube’s sales had improved, but it was still losing money on each unit sold.

“I would say that our losses are really negligible. It’s such a small amount. Plus with the amount of software that’s being sold we’re still definitely in a solid profit situation. We’re not in the position that I know Microsoft has been in with the losses on Xbox hardware,” said Kaplan.

Toward the end of 2004, Japanese newspaper Kabushiki Shimbun revealed that Nintendo was losing ¥20 billion ($180.8m) each year on hardware. Because of these losses, the company decided to reuse production plants for future hardware.


Neil Mitchell's Haskell Blog: Optimising Haskell for a tight inner loop

$
0
0

Comments:"Neil Mitchell's Haskell Blog: Optimising Haskell for a tight inner loop"

URL:http://neilmitchell.blogspot.co.uk/2014/01/optimising-haskell-for-tight-inner-loop.html


Summary: I walk through optimising a Haskell string splitter to get a nice tight inner loop. We look at the Haskell code, the generated Core, C-- and assembly. We get down to 6 assembly instructions per input character.

Let's start with some simple code:

break (`elem` " \r\n$") src

This code scans a string looking for a space, newline or $ and returns the string before and the string after. Our goal is to make this code faster - by the end we'll get down to 6 assembly instructions per input character. Before making things go faster we should write test cases (so we don't break anything), profile (so we are optimising the right thing) and write benchmarks (to check our changes make things go faster). To write this post, I did all those steps, but the post is only going to look at the generated Core, C-- and assembly - and be guided by guesses about what should go faster. The complete code is available online, along with the Core/C--/assembly for each step as produced by GHC 7.6.3.
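As an aside, a minimal benchmark harness for this kind of work might look like the following, using the criterion library. This is a sketch rather than the post's actual harness, and "sample.ninja" is just a stand-in input file:

import Criterion.Main
import InnerLoop (innerLoop)

-- Benchmark the splitter end-to-end on a sample input file.
main :: IO ()
main = defaultMain
 [ bench "innerLoop" $ nfIO $ innerLoop "sample.ninja" ]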

Version 1

To turn our example into a complete program, we write:

module InnerLoop(innerLoop) where
innerLoop :: FilePath -> IO (String, String)
innerLoop file = do
 src <- readFile file
 return $ break test src
test x = x `elem` " \r\n$"

We can save this code as InnerLoop.hs and compile it with:

ghc -c -O2 InnerLoop.hs -ddump-simpl -ddump-cmm -ddump-asm > log.txt

The full output of log.txt is available here. It contains the GHC Core (which looks a bit like Haskell), then the C-- (which looks a bit like C) and finally the assembly code (which looks exactly like assembly). When optimising we usually look at the Core, then at the C--, then at the assembly - stopping whenever our profiling says we are done. Let's take a look at the inner loop in Core (with some light editing):

innerLoop_3 = GHC.CString.unpackCString# " \r\n\$"
test_1 = \ (x :: GHC.Types.Char) ->
 GHC.List.elem @ GHC.Types.Char GHC.Classes.$fEqChar x innerLoop_3
innerLoop_2 =
 ...
 case GHC.List.$wbreak @ GHC.Types.Char test_1 x of _
 (# a, b #) -> (a, b)
 ...

The best way to read the Core is by looking for what you can understand, and ignoring the rest - it contains a lot of boring detail. We can see that a lot of things are fully qualified, e.g. GHC.List.elem. Some things have also been a bit mangled, e.g. $wbreak, which is roughly break. The interesting thing here is that break is being passed test_1. Looking at test_1 (which will be called on each character), we can see we are passing $fEqChar - a typeclass dictionary saying how to perform equality on characters - to the elem function. For each character we are going to end up looping through a 4 element list (innerLoop_3) and each comparison will be going through a higher order function. Clearly we need to improve our test function.

Version 2

We can unroll the elem in test to give:

test x = x == ' ' || x == '\r' || x == '\n' || x == '$'

Compiling again and looking at the Core we see:

test_2 =
 \ (x :: GHC.Types.Char) ->
 case x of _ { GHC.Types.C# c ->
 case c of _ {
 __DEFAULT -> GHC.Types.False;
 '\n' -> GHC.Types.True;
 '\r' -> GHC.Types.True;
 ' ' -> GHC.Types.True;
 '$' -> GHC.Types.True
 }
 }

Now for each character we extract the raw character (pattern matching against C#) then test it against the possibilities. GHC has optimised our repeated ==/|| into a nice case expression. It looks quite nice. Now the bottleneck is the break function.

Version 3

The break function is working on a String, which is stored as a linked list of characters. To get better performance we can move to ByteString, writing:

innerLoop :: FilePath -> IO (ByteString, ByteString)
innerLoop file = do
 src <- BS.readFile file
 return $ BS.break test src
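
For this snippet to compile standalone, roughly these imports are needed (the complete code linked above has the definitive list):

import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as BS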

For many people this is the reasonable-performance version they should stick with. However, let's look at the Core once more:

go = \ (a :: Addr#) (i :: Int#) (w :: State# RealWorld) ->
 case i >=# len of _ {
 GHC.Types.False ->
 case readWord8OffAddr# @ GHC.Prim.RealWorld a 0 w
 of _ { (# w, c #) ->
 case chr# (word2Int# c) of _ {
 __DEFAULT -> go (plusAddr# a 1) (i +# 1) w;
 '\n' -> (# w, GHC.Types.I# i #);
 '\r' -> (# w, GHC.Types.I# i #);
 ' ' -> (# w, GHC.Types.I# i #);
 '$' -> (# w, GHC.Types.I# i #)
 }
 };
 GHC.Types.True -> (# w, l_a1J9 #)
 }

The first thing that should strike you is the large number of # symbols. In Core, a # means you are doing strict primitive operations on unboxed values, so if the optimiser has managed to get down to # that is good. You'll also notice values of type State# RealWorld which I've renamed w - these are an encoding of the IO monad, but have zero runtime cost, and can be ignored. Looking at the rest of the code, we have a loop with a pointer to the current character (a :: Addr#) and an index of how far through the buffer we are (i :: Int#). At each character we first test if the index exceeds the length, and if it doesn't, read a character and match it against the options. If it doesn't match we continue by adding 1 to the address and 1 to the index. Of course, having to loop over two values is a bit unfortunate.

Version 4

A ByteString needs an explicit length so it knows when it has come to the end of the buffer, so needs to keep comparing against explicit lengths (and for efficiency reasons, also maintaining those lengths). Looking to C for inspiration, typically strings are terminated by a \0 character, which allows looping without comparing against a length (assuming the source file does not contain \0). We can define our own null-terminated ByteString type with a break operation:

newtype ByteString0 = BS0 ByteString
readFile0 :: FilePath -> IO ByteString0
readFile0 x = do
 src <- BS.readFile x
 return $ BS0 $ src `BS.snoc` '\0'

We define a newtype wrapper around ByteString so we gain some type safety. We also define a readFile0 that reads a file as a ByteString0, by explicitly calling snoc with \0. We can now define our own break0 function (this is the only big chunk of Haskell in this article):

break0 :: (Char -> Bool) -> ByteString0 -> (ByteString, ByteString0)
break0 f (BS0 bs) = (BS.unsafeTake i bs, BS0 $ BS.unsafeDrop i bs)
 where
 i = Internal.inlinePerformIO $ BS.unsafeUseAsCString bs $ \ptr -> do
 let start = castPtr ptr :: Ptr Word8
 let end = go start
 return $! end `minusPtr` start
 go s | c == '\0' || f c = s
 | otherwise = go $ inc s
 where c = chr s
chr :: Ptr Word8 -> Char
chr x = Internal.w2c $ Internal.inlinePerformIO $ peek x
inc :: Ptr Word8 -> Ptr Word8
inc x = x `plusPtr` 1
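
The helpers above lean on bytestring internals and the Foreign modules; the imports are roughly the following sketch, with Internal standing for Data.ByteString.Internal:

import Data.ByteString (ByteString)
import Data.Word (Word8)
import Foreign.Ptr (Ptr, castPtr, minusPtr, plusPtr)
import Foreign.Storable (peek)
import qualified Data.ByteString.Char8 as BS
import qualified Data.ByteString.Internal as Internal
import qualified Data.ByteString.Unsafe as BS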

We define break0 by finding the position at which the condition stops being true (i) and calling unsafeTake/unsafeDrop to slice out the relevant pieces. Because we know the second part is still null terminated we can rewrap in ByteString0. To find the index, we mostly use code copied from the bytestring library and modified. We convert the ByteString to a Ptr CChar using unsafeUseAsCString which just lets us look at the internals of the ByteString. We then loop over the pointer with go until we get to the first character that passes f and find how far we travelled. The function go looks at the current character using chr, and if it's \0 (the end) or the function f passes, returns the address at this point. Otherwise it increments the pointer. We use chr to peek at the pointer directly, and inlinePerformIO to do so purely and fast - since we know these buffers are never modified, the inlinePerformIO is morally defensible (we could have put chr in IO but that breaks a future optimisation we'll need to do).
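
Wiring these together, the top-level function presumably becomes (a sketch matching the shape of the linked full code):

innerLoop :: FilePath -> IO (ByteString, ByteString0)
innerLoop file = do
 src <- readFile0 file
 return $ break0 test src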

Compiling to Core we see:

go = \ (x :: GHC.Prim.Addr#) ->
 case readWord8OffAddr# @ RealWorld x 0 realWorld#
 of _ { (# _, c #) ->
 case GHC.Prim.chr# (GHC.Prim.word2Int# c) of _ {
 __DEFAULT -> go (GHC.Prim.plusAddr# x 1);
 '\NUL' -> GHC.Ptr.Ptr @ GHC.Word.Word8 x;
 '\n' -> GHC.Ptr.Ptr @ GHC.Word.Word8 x;
 '\r' -> GHC.Ptr.Ptr @ GHC.Word.Word8 x;
 ' ' -> GHC.Ptr.Ptr @ GHC.Word.Word8 x;
 '$' -> GHC.Ptr.Ptr @ GHC.Word.Word8 x
 }

Now we have a Core inner loop to be proud of. We loop round with a single pointer, peek at a byte, and compare it to our options. Time to look onwards to the C--, where I've included just the inner loop:

InnerLoop.$wgo_info()
 c1Tt:
 Hp = Hp + 8;
 if (Hp > HpLim) goto c1Tx;
 _s1RN::I32 = %MO_UU_Conv_W8_W32(I8[I32[Sp + 0]]);
 _s1T5::I32 = _s1RN::I32;
 _s1T6::I32 = _s1T5::I32;
 if (_s1T6::I32 < 13) goto c1TG;
 if (_s1T6::I32 < 32) goto c1TH;
 if (_s1T6::I32 < 36) goto c1TI;
 if (_s1T6::I32 != 36) goto c1TJ;
 ...
 ...
 c1TJ:
 _s1T4::I32 = I32[Sp + 0] + 1;
 I32[Sp + 0] = _s1T4::I32;
 Hp = Hp - 8;
 jump InnerLoop.$wgo_info; // []
 ... 

Reading the code, we first mess around with Hp, then pull a value out of the array and into _s1RN, then do some comparisons, and if they don't match jump to c1TJ, mess around with Hp again and jump back to start again.

There are three obvious problems with the code: 1) we mess around with Hp; 2) we are doing too many tests to get to the default case; 3) there is a jump in the middle of the loop.

Version 5

Let's start with the Hp variable. Hp is the heap pointer, which says how much heap GHC is using - if the heap gets above a certain limit, it triggers a garbage collection. The Hp = Hp + 8 reserves 8 bytes of heap for this function, Hp > HpLim checks if we need to garbage collect, and Hp = Hp - 8 at the bottom of the loop gives back that heap space. Why do we allocate 8 bytes, only to give it back at the end? The reason is that in the return path after the loop we do allocation. It's a long standing performance issue that GHC doesn't push the heap test down to the exit path, but we can fix it ourselves. Looking at the Core, we saw:

case GHC.Prim.chr# (GHC.Prim.word2Int# c) of _ {
 __DEFAULT -> go (GHC.Prim.plusAddr# x 1);
 '\NUL' -> GHC.Ptr.Ptr @ GHC.Word.Word8 x;

The expression GHC.Ptr.Ptr @ GHC.Word.Word8 x is allocating a constructor around the pointer to return. Looking at the Ptr type we discover:

data Ptr a = Ptr Addr#

So Ptr is simply a constructor wrapping our address. To avoid the Ptr in the inner loop, we can switch to returning Addr# from go:

i = Internal.inlinePerformIO $ BS.unsafeUseAsCString bs $ \ptr -> do
 let start = castPtr ptr :: Ptr Word8
 let end = go start
 return $! Ptr end `minusPtr` start
go s@(Ptr a) | c == '\0' || f c = a
 | otherwise = go $ inc s
 where c = chr s
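
As a side note, pattern matching on the Ptr constructor and handling raw Addr# values in source code needs a little beyond vanilla Haskell - roughly the following, though again the linked full code is authoritative:

{-# LANGUAGE MagicHash #-} -- needed to write names like Addr# in signatures

import GHC.Ptr (Ptr (..)) -- Foreign.Ptr keeps Ptr abstract; GHC.Ptr exposes the constructor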

We also add back the Ptr around end to call minusPtr. Looking at the Core we now see a very simple return path:

case GHC.Prim.chr# (GHC.Prim.word2Int# ipv1_a1D0) of _ {
 __DEFAULT -> InnerLoop.$wgo (GHC.Prim.plusAddr# ww_s1PR 1);
 '\NUL' -> ww_s1PR;

And dropping down to C-- we see:

 c1Td:
 _s1Ry::I32 = %MO_UU_Conv_W8_W32(I8[I32[Sp + 0]]);
 _s1SP::I32 = _s1Ry::I32;
 _s1SQ::I32 = _s1SP::I32;
 if (_s1SQ::I32 < 13) goto c1Tn;
 if (_s1SQ::I32 < 32) goto c1To;
 if (_s1SQ::I32 < 36) goto c1Tp;
 if (_s1SQ::I32 != 36) goto c1Tq;
 R1 = I32[Sp + 0];
 Sp = Sp + 4;
 jump (I32[Sp + 0]); // [R1]
 c1Tq:
 _s1SO::I32 = I32[Sp + 0] + 1;
 I32[Sp + 0] = _s1SO::I32;
 jump InnerLoop.$wgo_info; // []

Not a single mention of Hp. We still have a lot more tests than we'd like though.

Version 6

The current code to check for our 5 terminating characters compares each character one by one. This entire example is based on lexing Ninja source files, so we know that most characters will be alphanumeric. Using this information, we can instead test if the character is less than or equal to $, if it is we can test for the different possibilities, otherwise continue on the fast path. We can write:

test x = x <= '$' && (x == ' ' || x == '\r' || x == '\n' || x == '$')

Now looking at the Core we see:

go = \ (ww_s1Qt :: GHC.Prim.Addr#) ->
 case GHC.Prim.readWord8OffAddr#
 @ GHC.Prim.RealWorld ww_s1Qt 0 GHC.Prim.realWorld#
 of _ { (# _, ipv1_a1Dr #) ->
 case GHC.Prim.chr# (GHC.Prim.word2Int# ipv1_a1Dr) of wild_XH {
 __DEFAULT ->
 case GHC.Prim.leChar# wild_XH '$' of _ {
 GHC.Types.False -> go (GHC.Prim.plusAddr# ww_s1Qt 1);
 GHC.Types.True ->
 case wild_XH of _ {
 __DEFAULT -> go (GHC.Prim.plusAddr# ww_s1Qt 1);
 '\n' -> ww_s1Qt;
 '\r' -> ww_s1Qt;
 ' ' -> ww_s1Qt;
 '$' -> ww_s1Qt
 }
 };
 '\NUL' -> ww_s1Qt
 }
 }

The code looks reasonable, but the final \NUL indicates that the code first checks if the character is \NUL (or \0) and only then does our fast < $ test.

Version 7

To perform our < $ test before checking for \0 we need to modify go. We require that the argument predicate must return True on \0 (otherwise we'll run off the end of the string) and can then write:

go s@(Ptr a) | f c = a
 | otherwise = go $ inc s
 where c = chr s
test x = x <= '$' &&
 (x == ' ' || x == '\r' || x == '\n' || x == '$' || x == '\0')

The Core reads:

InnerLoop.$wgo =
 \ (ww_s1Qq :: GHC.Prim.Addr#) ->
 case GHC.Prim.readWord8OffAddr#
 @ GHC.Prim.RealWorld ww_s1Qq 0 GHC.Prim.realWorld#
 of _ { (# _, ipv1_a1Dr #) ->
 let {
 c1_a1uU [Dmd=Just L] :: GHC.Prim.Char#
 [LclId, Str=DmdType]
 c1_a1uU = GHC.Prim.chr# (GHC.Prim.word2Int# ipv1_a1Dr) } in
 case GHC.Prim.leChar# c1_a1uU '$' of _ {
 GHC.Types.False -> InnerLoop.$wgo (GHC.Prim.plusAddr# ww_s1Qq 1);
 GHC.Types.True ->
 case c1_a1uU of _ {
 __DEFAULT -> InnerLoop.$wgo (GHC.Prim.plusAddr# ww_s1Qq 1);
 '\NUL' -> ww_s1Qq;
 '\n' -> ww_s1Qq;
 '\r' -> ww_s1Qq;
 ' ' -> ww_s1Qq;
 '$' -> ww_s1Qq
 }
 }
 }

The C-- reads:

InnerLoop.$wgo_info()
c1Uf:
 _s1Se::I32 = %MO_UU_Conv_W8_W32(I8[I32[Sp + 0]]);
 _s1Sh::I32 = _s1Se::I32;
 _s1Sg::I32 = _s1Sh::I32;
 _c1TZ::I32 = _s1Sg::I32 <= 36;
 ;
 if (_c1TZ::I32 >= 1) goto c1Ui;
 _s1Ty::I32 = I32[Sp + 0] + 1;
 I32[Sp + 0] = _s1Ty::I32;
 jump InnerLoop.$wgo_info; // []

And the assembly reads:

InnerLoop.$wgo_info:
_c1Uf:
 movl 0(%ebp),%eax
 movzbl (%eax),%eax
 cmpl $36,%eax
 jbe _c1Ui
 incl 0(%ebp)
 jmp InnerLoop.$wgo_info

We have ended up with a fairly small 6 instruction loop.

Version 8

We've now exhausted my Haskell bag of tricks, and have to stop. But the assembly code could still be improved. In each loop we read the contents of the memory at %ebp into %eax, and increment the contents of the memory at %ebp at the end - we're manipulating the value on the top of the stack (which is pointed to by %ebp). We could instead cache that value in %ebx, and write:

_c1Uf:
 movzbl (%ebx),%eax
 cmpl $36,%eax
 jbe _c1Ui
 incl %ebx
 jmp _c1Uf

One fewer instruction, two fewer memory accesses. I tried the LLVM backend, but it generated significantly worse assembly code. I don't know how to optimise any further without dropping down to the C FFI, but I'm sure one day GHC/LLVM will automatically produce the shorter assembly code.
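
For the curious, dropping down to the C FFI would start with something like this - purely an illustration with an invented C function name, not something the post benchmarks:

{-# LANGUAGE ForeignFunctionInterface #-}

import Data.Word (Word8)
import Foreign.Ptr (Ptr)

-- Hypothetical C scanner: returns a pointer to the first byte that is
-- '\0', ' ', '\r', '\n' or '$', starting from the given pointer.
foreign import ccall unsafe "break_char"
 c_breakChar :: Ptr Word8 -> IO (Ptr Word8)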
