Tuesday, December 25, 2007

Why so slow?


Google has thrived by proving that computer users value speed above nearly all else. So why does each Microsoft Windows device I use take longer and longer to boot up? via NY Times

Booting from Flash RAM is the future- imagine how fast Windows could boot then. I will not buy another computer without it. Keeping everything in memory is the default optimization to make now...cache everything.

Ron Jeffries Can't Get Rails Running


This is kind of a strange story, but why the hell can't he get it running? This is when to call in VMWare and have a nice set of clean machines available. Apparently he was running SlimServer, which sounds slightly immoral, so I am sure he deserved the non-functioning environment. Anyway, the nice thing about XP is early cancellation, but he could have just asked for help...

That said- development environment setup is not that easy. I had one J2EE project where we had a two day process. For my last Rails project, it was about two pages on the wiki, most of it related to the SQL Server driver (don't ask) and Ferret. It did remind me not to depend on the availability of external resources for the install procedures though- what's out there on the Rails gem servers can change.

What's killing everyone on one of my internal corporate projects is svn+ssh. That said, our most recent hire nailed it pretty well, even though it was his first time using a Mac and NetBeans, and I am using RadRails on Windows.

Anyway- it's worth writing up the procedures for development environment setup, and then having each new person update them as their first task when they come on board. Of course, if no one has started for a while, chances are the instructions just won't work.

Wednesday, December 05, 2007

Digital Notes in general (and in ArcGIS)

I was reading Tim Ferris' post on how "awesome" he is at taking notes. I tried his approach of keeping all my notes in a little moleskine notebook, even with the little index page, and it didn't really work for me. I much prefer having a tear-off pad of heavy Levenger paper, so I can put each note page in the appropriate folder and file it.

I was experimenting a little today with the middle ground- a digital/analog pen. You use a special notebook and special pen to take notes, then you connect the pen to a USB dock and the notes get transferred to OneNote 2007- and OCR-ed. It's a really cool idea- and their ArcGIS product is coming out soon. Going out into the field with a paper map and having your annotations show up on the map without scanning it back in- as vectors recorded by the pen- would be very handy.

Anyway, I don't really want the big pen, but maybe the good idea to steal from all of this is to scan and OCR all of my note pages. Then I can full text search them...brilliant, eh?

Friday, November 23, 2007

Amish Oriented Databases


Was Ray Ozzie right? I've been seeing a fair amount of information circulating about RDDB. RDDB is a document-oriented, RESTful database. Lotus Notes is a document-oriented, old database (and email program, calendar, and workflow form generator). The cool thing is that all of my Lotus Notes knowledge, which was rapidly becoming worthless, now has a chance to be valuable again if this sort of thing takes off.

I was in a Rails training class with this guy who made his career on the back of old Notes and its ugly little autogenerated web apps. He kept asking, "how do we keep this from becoming another Lotus Notes?" At the time, I was thinking, "how do I keep myself from being one of those guys who looks at every new technology in terms of some old technology that he actually had time to understand?" Now, looking at the potential resurgence in document oriented databases, maybe he was right...nah. Rails is nothing like Notes, because, uh...it's dynamically typed? No, Notes had that. Works with Java now, thanks to JRuby? No, IBM took care of that with Domino+WebSphere. Hmmm. At least 37 Signals hasn't been sold to IBM while David H. grabbed the lead role at Microsoft.

Oh, wait, there is a difference. REST! I had posted a long while back about how my cousin made fun of me in front of a huge room of people because he asked what the two types of web services were (hoping for document oriented and remote procedure call as answers) and I said "REST". He said the REST people were like the Amish, got some laughs and proceeded to promote WS-*. At lunch after his talk, Bob Martin took my side, so I knew I had made at least an interestingly wrong decision. Tim Bray has a nice round-up of other smart people that were wrong about that. So sweet to be right about that, even though he could kick my shins in programming any day of the week. Except Sunday, when we Amish polish our wooden laptops. Anyway, maybe RDDB is more Amish Oriented Database than Rails Oriented.

So Rails has REST, and even though I swore off creating RESTful apps when the initial URL implementation was polluted with semicolons, it's the real deal now. Not so much for the application, but for the API. It's great to have APIs that are platform independent. Most things like Lotus Notes wouldn't bother with this- it doesn't help with platform lock-in. I spent more time working with the Lotus Notes ODBC driver to suck data out of the system than was reasonable. Of course, they did support COM, which was the REST of its day (if CORBA was the WS-*).
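For the record, the Rails side of this is pleasantly small these days. A minimal sketch, with a made-up 'articles' resource and Rails 1.2-era syntax (illustration only, not code from a real project):

# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  map.resources :articles  # GET/POST /articles, GET/PUT/DELETE /articles/1, etc.
end

# app/controllers/articles_controller.rb- one action serving browsers and the API
class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
    respond_to do |format|
      format.html                                    # show.rhtml for people
      format.xml { render :xml => @article.to_xml }  # plain XML over HTTP for everything else
    end
  end
end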

Now, REST still needs a solution for the old federated identity problem (so many websites, so many passwords, so many usernames)...but there are plenty of options. Maybe my cousin is right about CAS? Seems Amish enough for me.

Thursday, November 15, 2007

Feral Programmer


Feral Programmer: A programmer who has no notion whatsoever of common, socially acceptable programming practices or knowledge of available tools and technologies. The feral programmer is not necessarily a bad coder, and sometimes could be labeled a savant, but is generally impossible to work with due to the complete lack of any common framework of understanding. Somewhat of a corollary to Not Invented Here syndrome, the feral programmer is either just completely unaware or uncomprehending of most inventions in the field. Behaviors include: creating flat file databases, version control systems or their own web service architecture built of packet sniffers, frequently using only Perl.

Tuesday, November 06, 2007

iPhone SDK trumped

A third note in the open platforms series...Facebook vs. OpenSocial, Second Life vs. Multiverse, iPhone vs. Android. Now strictly speaking, the iPhone is the least walled garden of the three, but the lack of Java support means that it's a little like Facebook. Even with the vague SDK announcement, all applications will have to be custom built for it. Fortunately you can still use the iPhone to get to your Google docs...

Where's my GPhone?

Android is the first truly open and comprehensive platform for mobile devices. It includes an operating system, user-interface and applications -- all of the software to run a mobile phone, but without the proprietary obstacles that have hindered mobile innovation.

We recognize that many among the multitude of mobile users around the world do not and may never have an Android-based phone. Our goals must be independent of device or even platform.


Thursday, November 01, 2007

...tear down that wall.


You have to love the timing of the Open Social API announcement. Facebook is not worth $15B or $3 for every person on earth...

As pmarca puts it:

This is the exact same concept as the Facebook platform, with two huge differences:

  • With the Facebook platform, only Facebook itself can be a "container" -- "apps" can only run within Facebook itself. In contrast, with Open Social, any social network can be an Open Social container and allow Open Social apps to run within it.
  • With the Facebook platform, app developers build to Facebook-proprietary languages and APIs such as FBML (Facebook Markup Language) and FQL (Facebook Query Language) -- those languages and APIs don't work anywhere other than Facebook -- and then the apps can only run within Facebook. In contrast, with Open Social, app developers can build to standard HTML and Javascript, and their apps can then run in any Open Social container.
In keeping with my thoughts on 3D virtual worlds, the walled garden approach is not the end state- the "internet of virtual worlds" is. I think we're going to see the same thing with Facebook, MySpace and others eventually becoming part of an "internet of social spaces": connections between the bits and pieces of ourselves- photos on flickr, a blog on blogger, twits on twitter, a profile on myspace, maps on geocommons, restaurant reviews on yelp, plus whatever we think of next week. Tying all of this together is going to be remarkably cool.

Technology like OpenID is really useful, and that's why you see enterprise players like Oracle on the agreement. Facebook has a great UI and momentum, so there is still a huge chance that this effort will fail if it turns out that no one uses it effectively, but I think there will be at least one innovation out there that will cause Facebook to pull down the wall toward interop of their data...

Wednesday, October 31, 2007

Official Gmail Blog: Code changes to prepare Gmail for the future


How sad is it that I get excited about a mostly invisible update to a web based email program? Nonetheless, I am "stoked" for the gmail updates, and only slightly concerned about potentially losing some GreaseMonkey awesomeness.

Monday, October 22, 2007

Pete Lacey's SOA definition


Read this SOA definition from Pete Lacey this morning and it almost makes sense...too bad it's not what most people mean.

So, then, what is SOA? For one thing, SOA is misnamed. It’s not an architecture in any sense of the word. It is, to use a Burton Group phrase, a mind set. It is the generally held belief that when implementing systems one should expose system functionality for general consumption directly from the network, as well as or instead of burying it behind a user interface. It is, as well, the belief that there is a great deal of value to be generated by retrofitting network accessibility into most existing systems. And it is the belief that this can only work if the means of doing so aren’t locked to a particular language, framework, operating system, vendor, or network architecture.

It's a bit hard to disagree with that as a good policy. However, having a bit of a linguistic bent, I wonder if there isn't something fundamentally wrong with an acronym that is "misnamed". Mr. Lacey does suggest an alternative term for what he is describing: Network Oriented Computing. But doesn't that make it a worse definition of SOA (since the definition better fits another term)? It does. His final definition of SOA is "a technical approach to NOC that has a non-uniform service interface as its principle abstraction. Today, SOAP/WS-* is the chief implementation approach." This is a better definition because it actually is an architecture. It also sits well next to ROA (Resource Oriented Architecture).

The killer final definition he offers is: "Business Service Architecture (BSA): An unnecessary term (also not an architecture) that tries to make the obvious something special. Aka, business analysis. Aka, requirements gathering." Reading this crystallized something important about why phrases like IT-Business alignment always bothered me- you align the IT systems with the business via requirements gathering. Now, maybe you are doing it at a high level and want to give it a special name, but the danger of abstract terms is that people can look like they are agreeing about how to do something, but really have completely different ideas about what they are concretely going to do.

Hmm, that might be a topic for another post.

Wednesday, October 10, 2007

Metaverse Skeptic


I've been an extreme skeptic when it comes to virtual worlds for a while now. I really can't understand how people can get excited about things like Second Life. None of the arguments for it as a great business seem to make much sense. The false scarcity markets for real estate seem quite misguided. I really enjoyed working with Forterra the other day- it's definitely richer than chat rooms and teleconferences in terms of communication- but is it richer than video telecon? In one way it is, because you can walk around and such, but that also imposes the limitations of the real world upon communication: you can't hear someone if you walk too far away.

I left a highly skeptical comment on James Au's post on 7 reasons why business execs should care about second life. There needs to be an Internet of sorts, so that the virtual worlds can, well, internetwork. You can't say Second Life is that important, because investing so much in something owned by another company is putting all of your eggs in one basket. It's like starting a company that builds Facebook applications- when (not if) Facebook gets acquired, your whole business model is in jeopardy. Better make a quick buck.

The world isn't just about open platforms. It's about interoperable platforms. One thing that makes adoption of new GIS systems possible is that the data can be transitioned from one to another, because all of it is tied to real world coordinates, and it is possible to translate from one coordinate system to another because they both refer to a place on the real earth. I am not sure what the virtual coordinate system is. With the Internet, it's the unified addressing and naming schemes. There can only be one 220.231.23.123 on the public internet.

In this light, the iPhone looks really dumb. You can't install apps on it. Okay, WiFi works and the phone bit actually connects with the real phone network, but it's not an open platform. Few cell phones are unlocked to connect to multiple networks, but Apple seems to be actively discouraging this. To me, that's a sign of a bad business model- easily defeated. Just make the money on the device- not the lock-in.

It seems like the right business model is to have an open and interoperable platform. It cuts off competitors, but allows you to benefit from the network effects of others' innovations.

Tuesday, October 09, 2007

The Alignment Trap

Lots of good articles in the Fall issue of the Sloan Management Review, which I am just getting around to reading...

Avoiding the Alignment Trap in IT
One of the big ideas in IT Enterprise Architecture lately is Business / IT alignment. This is generally defined as the concept that the shape of the services offered by IT should reflect the offerings of the business. At the very least, it makes it easier for the CIO to justify why they are spending money on project X- because it is directly tied to a business need. To some degree it's a communication convenience.

In reality, the biggest benefits seem to come from simplifying the infrastructure and putting an emphasis on integration. A second article in the SMR by Cynthia Rettig really hits it on the head: The trouble with enterprise software:


...enterprise software may be just too complex to deliver on its promises. She also suggests that the next new thing — service-oriented architecture (SOA) — is not likely to fare much better, for many of the same reasons. There are no easy fixes, cautions Rettig, save a large dose of sobriety, clear-eyed analysis and emphasis on simplicity and efficiency.


If you want to read one of the best advocates for the odd beast known as SOA, I suggest following Bobby Woolf's blog or reading his new ebook. His old book has had a solid slot on my shelf for a couple of years, but I still haven't felt the need to install an Enterprise Service Bus. (Despite the fact that I more or less use it as a design pattern for integration architectures, just without the overhead of actually having it be a running piece of software.)

In any case, I am not writing myself out of a job here- the message is simple though: don't try to do everything at the expense of ending up with complexity. Keeping things simple and hitting the "Pareto important" requirements is the winning strategy.

Friday, September 21, 2007

Xobni


I was really disappointed to read about Xobni. Bringing Gmail features to Outlook- and letting you 'pivot' your email by who it is from / to. I am disappointed because I have a great idea for an inbox assistant that I am conceptualizing, and this thing looks much better, if not really on the same track. The good side of it is that I will get a better sense of the market for Outlook plugins from watching their progress.

My idea is a little more focused on another aspect of email- the doing side of things, while Xobni is more on the analytical side. I still really want to try it, even though a few of the details are far from clear at this point...

I've also been checking out the trial for Sandy. It's another email tool, except it's more of a command line interface that works over email. You send Sandy an email and she either updates your calendar or todos, sends a reminder, or just remembers it. With the API support, you could think of it as a mail-in interface to all of the tools you already use. I really wanted to like it, but it's not quite what I thought it was going to be.

Here's what my tool is going to do- help you get through your email faster and more effectively. I've been on the verge of declaring email bankruptcy a couple of times in the past months and I need this tool to stay at inbox zero (see below video for more on that).

Friday, September 14, 2007

Is CS "The Modern Liberal Arts Degree"?


Interesting thoughts on CS and Econ degrees on Marginal Revolution.

From the Comments:
I was a CS major at MIT and was heavily recruited by Wall Street firms. My friends who were studying economics did not do nearly as well right out of college. The CS major is a modern "liberal arts" degree; you can do almost anything with it. In my case, I am getting my PhD in economics. I think studying economics instead of CS as an undergrad just signals that you want a high-paying job without having to do any hard work.

Or maybe proving you can make it through one of the more challenging academic programs in the world means there is less risk in hiring you to do anything...Anyway, I don't have a CompSci degree, despite taking most of the 400 level courses, and I've never missed having one (that I know of).

I do really like the idea that CS is just a tool for getting involved in some other industry.

Another Comment:
This is all kind of funny to me. After graduating with a BBA Econ in 1999 I made a play at the CS world. Realizing I was not willing to learn any real programing I went to get my PhD in Econ. Now I sit all day... programing.

Funny world.



And you can now just follow along online...

Tuesday, September 11, 2007

Most frightening sentence on my resume


"Wrote Visual Basic for Applications code in a consulting role to support a multi-user client/server time reporting application in Microsoft Excel"

I feel the need to add a line specifying that I was not the architect on this project who decided that doing hub-spoke replication in Excel was not only feasible, but a good idea. I was just called in when it was determined that the original team had made a big mess when they tried to "scale up" to 50 users. At least it was only a month seven years ago. I suppose what really would make it scary is revealing the customer...but I can't do that to them.

This was one of the first instances I had seen of a non-IT department wanting to do something technical, but being shot down by a nascent CIO office that controlled who was allowed to do IT projects. They thus went with what they had on their desktops as the development platform.

Anyway, I think that project is getting dropped from Resume 2.0.

Tuesday, August 28, 2007

Secondary Actions in Web Forms


Just checked out this interesting article from LukeW on secondary actions on web forms. The sum of it is that people don't complete forms faster when you make distinctions between primary (submit) and secondary (cancel) buttons. Still, as Luke points out- hitting cancel after filling in a long form really is not good. Do you really need a button for that in all cases?

I really enjoy the data driven approach of using eye tracking data to help make decisions on form elements. Unfortunately he missed my current preferred option: Bolder font-weight on the text on the primary action button, lighter or normal font-weight on the secondary actions. Since we have CSS classes for those button types on my project, I was all ready to make some changes...looks like I won't be making any, but it's still worth the quick read.

Tuesday, August 14, 2007

Authentication in the database- revised

There are some people out there that believe there is one right way to build an application. The people that believe in best practices. The people that fall victim to Harwell's laws. (Particularly #1, "People always try to use their experience even if it doesn't apply to the current situation." and #3 "If a manager doesn't know how to improve an organization, then he/she will change it to look like the last organization that he/she remembers")

The particular issue I am dealing with has to do with web application security architecture, but it comes up again and again in architecture, particularly in web services. Where does authentication live and where does authorization live? And what trusts what? In particular, should a web application authenticate a user's x509 cert or Kerberos ticket and then pass user information on to further systems, or should it pass the user's security information on to further systems, such as databases? In other words, should authentication be performed at each stage of a request that passes through multiple systems?

If you ask Oracle any question of the form "Should X be in the database?" the answer is yes. Apparently Microsoft is training their people to think the same way. It sure works great to induce vendor lock-in.

Somehow I am of the opinion that the first system a user comes in contact with should perform the authentication, that it should authenticate to other systems as a system, not as the user, and that there should be trust in the initial user authentication.

Having the database re-authenticate the user seems wasteful...and of no value.
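To make that concrete, here's a rough sketch of what I mean in Rails terms- the header name and error handling are made up, and it assumes the front-end web server has already done the cert or Kerberos dance:

class ApplicationController < ActionController::Base
  before_filter :identify_user

  private

  # The web server sets REMOTE_USER after authenticating the cert/Kerberos ticket;
  # everything downstream trusts that initial authentication.
  def identify_user
    @current_user = request.env['REMOTE_USER']
    render :text => 'Forbidden', :status => 403 unless @current_user
  end
end

The database connection in config/database.yml uses a single service account for the application; the user's identity travels along as data (audit columns and the like), not as a second round of authentication against the database.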

Wednesday, August 08, 2007

Process Overload


I am completely sick of all of the processes that have to be constantly running on my Windows machines. The things that particularly annoy me are the "update check" programs- I am looking at you menacingly, iTunes, Java, Adobe and Windows Update. Why on earth do these programs have to run constantly? Can't they at least just have a scheduled task that runs at a particular time? Or how about copying Firefox and just checking for an update whenever I start your program? I think Quicktime asks for updates more often than I actually use the software. I don't need a background process running constantly to tell me a piece of software that I haven't run in two months needs to be updated.

Perhaps even more annoying are the "helpers" that run to make various software applications appear to start up faster. I suppose this is just keeping up with the various tricks that Microsoft employs to make their stuff startup faster, so it's fair to blame them for starting the arms race.

And the final annoyance is the difficulty in seeing which svchost.exe PID goes with which service. How hard could it be to record the name of the service, so that I could see at a glance which one is sucking CPU or RAM? I suppose I could actually code a solution to that myself...but I shouldn't have to. In fact, someone already did:

Process Explorer- this is what task manager should have been...

Wednesday, July 18, 2007

A dozen readings later...

I guess I am finally reaching the "advanced" Rails tricks, since I just used this. (Although, I have to admit the origin was a ryanb tutorial on editing multiple models, not from repeated readings...)

from Agile Web Development with Rails

An Extension to Ruby Symbols

(This section describes an advanced feature of Ruby and can be safely skipped on the first dozen or so readings....)

We often use iterators where all the block does is invoke a method on its argument. We did this in our earlier group_by and index_by examples.

groups = posts.group_by {|post| post.author_id}

Rails has a shorthand notation for this. We could have written this code as

groups = posts.group_by(&:author_id)
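In case the &: bit looks like magic, it isn't much- ActiveSupport adds something roughly like this (simplified sketch):

class Symbol
  def to_proc
    # &:author_id becomes a block equivalent to {|obj| obj.author_id}
    Proc.new { |obj, *args| obj.send(self, *args) }
  end
end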

Thursday, July 12, 2007

Another kind of DVR

Over at boingboing there's an interesting piece about the future of streaming video. "It's a bummer to consider a future in which broadcasts -- which we can all see and record -- are replaced with geo-locked, streaming crippleware netcasts.....All that said: top marks to the first person to demonstrate a working, reliable solution for watching and recording the CBC Dr Who episodes from anywhere in the world."

Right now, I am living in a bit of a gilded Tivo age where I can record just about any signal that comes into my home by intercepting its output right before it goes into the TV (which is basically just a monitor; I have never actually used its built-in tuner). The hugely hacky piece of the whole setup is the IR blaster that controls the cable box by pretending to be a remote- with my new Verizon FIOS Motorola set top box, it just doesn't work all that well. New TiVos support the Cable Card standard- at the price of losing control of the stream, plus $800 and a monthly fee. If restrictions are put into place on those new TiVos, for example flags on HD content that make it disappear after a week, then we might actually have a situation where you need to TiVo your TiVo (by capturing its output) to get the content into an open and safe format.

What is really striking about all of this is that the IR remote control is the open interface by which I can control the output of the cable box. Without that, it's basically useless, because I couldn't choose what to capture. Hence the whole weirdness of IR remotes: most can imitate the others, and just about every device comes with a bunch of pages in the back of the manual for programming the remote to control a variety of other devices.

I wonder if we aren't going to see a solution where the output going full screen to your monitor is intercepted in a similar fashion, and then dropped back into a capture card. It's really insane. The part of the computer that is analogous to the IR interface of the remote control is the URL. It's nice to take a step back every now and then and appreciate the beauty of that- URL addressable content. Now if we could just be assured that we'd be able to look at it when and where we want to.

Tuesday, July 10, 2007

s3sync.rb


I am finally backing my life (and my company's data) up offsite with Amazon S3. I have been diving into this full on and it's quite cool. It took me way, way too long to get it all set up- nearly 4 hours to get a 7 line shell script into a cron job, due to a series of yak-shaving events.

In any case, it was basically easy with s3sync (the Ruby version, I believe the Perl version is abandoned...) Here is a decent link on getting it done. I actually used Marcel Molina's s3sh shell program a bit, simply because the error messages it dumped were vastly better than the HTTP 403s I was getting from s3sync. I just didn't find s3sh as "scriptable" due to my lack of shell scripting expertise at the moment. I am just glad I have a place to practice.

It's definitely worth reading a bit on the main AWS site about S3 before diving in. I wasted quite a bit of time due to not understanding the flexibility of the key/bucket system's handling of directories. And the minor bit about buckets having to be unique across all users.
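For the curious, the shape of the thing is roughly this- sketched here with Marcel's aws/s3 library instead of s3sync, with a made-up bucket name and paths, and credentials from the same environment variables s3sh uses:

#!/usr/bin/env ruby
require 'rubygems'
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY'])

bucket = 'mycompany-offsite-backup'  # has to be unique across all S3 users
AWS::S3::Bucket.create(bucket)       # create the bucket on the first run

Dir.glob('/var/backups/*.tar.gz').each do |path|
  AWS::S3::S3Object.store(File.basename(path), open(path), bucket)
end

Drop something like that in cron and you have offsite backups, modulo the yak-shaving.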

Saturday, July 07, 2007

Persistence Scaling

Interesting interview with Michael Stonebraker (the Ingres guy) from the ACM. [via GLinden]

The basic thrust is that the RDBMS is slowly fading from its place at the heart of IT systems everywhere, for a variety of reasons. I personally have seen a lot more in the embedded database and in-memory database category lately, but he's talking about the relational model itself failing for warehousing, streams, text, and list processing. I don't know if the database concepts behind his current company Streambase and stream data processing are that broadly applicable. Still, his depiction of the areas of the relational model where there are scaling or intelligibility problems seems right on. I am not sure I get the following, but it makes sense in a way too:

"It’s the same case in scientific and intelligence databases. Most of these clients have large arrays, so array data is much more popular than tabular data. If you have array data and use special-purpose technology that knows about arrays, you can clobber a system in which tables are used to simulate arrays."

Vector oriented programming?

Stonebraker makes a nice point about how the ActiveRecord component of Rails is in effect a cleaner way of interacting with data than SQL. It really is. That said, I wonder if LINQ is on his radar? Sidetracked by this great post on from 9 till 2 on the 'Rubenesque' (That's Paul Rubens, not Matz) status of c#, which says the following about LINQ:

"Historically, there have been more Microsoft ways to access the Northwind database than they are rows in the Customer table. OLEDB, ODBC, DAO, RDS, JRO, RDO, SQLXML, ADO, ADO.NET, Entity Services....I suspect that DLINQ will probably not be the last ever ever ever in this data accessing periodic series."

And tellingly:

"LINQ has that big benefit of being able to treat relational, XML hierarchical and in-memory data objects all with the same query syntax, allowing you to swap store types at a drop of a hat. The nagging doubt though is that this may be the same big benefit akin to not having to use SOAP over HTTP. How did that go again? Something wonderful for a demo, but something that may not actually be a real pressing 'need'. There is a school of thought that when you're working with a relational database rather than a collection of in-memory objects, then you should not lose track of the various nuances and advantages of the stores - abstraction to save typing can come back to bite you?"

Yeah, like if you're doing Rails finders without :include? Ooops, N queries just ran. Still, I like the idea of s3record. As Nutrun says: "I have occasionally participated in conversations around the subject of the database as a product with an expiry date, destined to eventually be replaced by highly distributed data storage models. Although S3's data storage and retrieval model looks presently better suited for larger units of data (e.g. media content), it would be interesting to investigate how it could be applied as an Object persistence service."

This is obviously the wrong implementation for such a scheme, but it points in one of the directions we're exploring. The concepts of streams, spaces, distribution are the future of scaling and persistence. It won't be a simple design decision whether a database is at the heart of your system or not. It's going to be interesting...as usual.
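On the :include aside above, for anyone who hasn't been bitten yet, a quick sketch with hypothetical Post/Author models:

# N+1: one query for the posts, then another per post for its author
posts = Post.find(:all)
posts.each { |post| puts post.author.name }

# Eager loading: the association comes back up front, the loop hits no database
posts = Post.find(:all, :include => :author)
posts.each { |post| puts post.author.name }

Same loop, wildly different query counts.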

Thursday, July 05, 2007

Java EE 6: profiles versus subtraction

The JCP has produced a Java Enterprise Edition spec that even "Mr. Spring" Rod Johnson can approve of. This is actually a really big deal, as Java containers have become bloated by implementing expensive pieces of the spec which are seldom used. What they are doing is allowing for compliance with subsets of the standard, with these subsets referred to as profiles. They also allow for extensibility, with a standard for extensions to the spec.

The "profiles" approach can lead to much confusion, as anyone who has tried to figure out why their Bluetooth phone and Bluetooth car can only communicate on the most basic levels can attest. The OGC also has been using this approach for GML data standards, and seems like overkill when you are just talking about XML. However, in general it makes a lot of sense when you need to preserve the whole from a service implementation standpoint. But really, sometimes just subtracting things from standard is a great thing. (JSF anyone?)

I wonder if there will be a profile for app servers to run JRuby on Rails?

Monday, June 25, 2007

CHAOS = BS

Anyone who has looked at software development methods, myself included, is probably familiar with the CHAOS report from The Standish Group. I have blogged before about Robert Glass's article showing that the numbers reported by that group simply are not supported by other studies. I was happy to see that referenced on Herding Cats.

The effects of the data they report (a 189% average cost overrun) are negative for the IT industry. They are used to justify all manner of odd process controls and contribute to a general excessive scepticism about the possibility of making a software project a success. The lack of context given for the data (actually, the data is kept secret) means that all of this is based on a very shaky quantitative foundation.

Having just finished "Fooled by Randomness", I am going to take another look at the Monte Carlo approach to estimation. I used it in a couple of cases to provide broad estimates for large projects, but all of the numbers came out quite low. I still think the only responsible thing to do is to provide a range of estimates, especially when the nature of the work to be done is uncertain. And it usually is. The real benefit of the simulation approach, from the Fooled by Randomness perspective, is that it allows you to take account of the black swans. If you are using a purely inductive approach, you can't account for things that have never happened. Like a 3 day power outage in your server room. Or a system administrator who changes the name of your source control server and costs 10 person-days of work. Or a new programmer who checks his whole C drive into the source code repository. Well, those 3 have actually happened to me, but they aren't the sort of thing I usually factor into my estimates, particularly when using the methods from "Planning Extreme Programming", which are purely based on yesterday's weather.
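Here's the sort of toy simulation I have in mind- every number below is invented, including the black swan odds:

ESTIMATES = [3, 5, 8, 2, 13, 5]  # task estimates in ideal days

def simulate_once
  total = ESTIMATES.inject(0.0) { |sum, est| sum + est * (0.7 + rand * 1.3) }  # each task lands at 70%-200% of estimate
  total += 10 + rand(20) if rand < 0.05  # 5% chance of a 10-30 day black swan
  total
end

runs = (1..10_000).map { simulate_once }.sort
puts "50th percentile: #{runs[runs.size / 2].round} days"
puts "90th percentile: #{runs[(runs.size * 0.9).to_i].round} days"

The interesting part is how far apart those two numbers end up once the rare disasters are in the mix.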

The real challenge here is estimating the shifting sands. Until you have a feel for the rate at which requirements change, you are only going to be able to estimate based upon what people thought they might want something they don't understand to do. Of course, if there is a reorganization in a business you are supporting, and there usually is at some point, the rate of change can shoot up. This is where the difference between methods really starts to kick in. If you are using a method that "embraces change" (the subtitle of the XP book), then you can leverage change to be the engine that carries the organization forward. If you try to keep the requirements frozen, you will be one of the projects reported when the Standish Group calls and says, "tell me about your failed projects".

Tuesday, June 19, 2007

Many Methods

Cognitive Dissonance development (CDD) - In any organization where there are two or more divergent beliefs on how software should be made. The tension between those beliefs, as it’s fought out in various meetings and individual decisions by players on both sides, defines the project more than any individual belief itself.
-Scott Berkun

I've been there...wondering whether it is better to give up the argument because the other person is so incredibly stubborn. And it would have been better for the actual product had I given up. Not because my idea was worse, but because trying to do it two ways at the same time was worse than any one way.

I've been on projects where people waste all of their time trying to pick a product. I now have the confidence to tell people that most products really are not that good. Particularly ones that are intended to make development easier. So many of these create frameworks that are more complicated than what one was trying to accomplish in the first place, but offer less flexibility.

I was on a project that was trying to replace an ArcView 3.1 data entry and analysis application (about 25k lines of custom scripts) with a Java web application based on ArcIMS and ArcSDE. This was in 2001. With a primordial version of Netscape. When all ArcIMS did was push jpegs down the wire. Oh, and they wanted to host it at a remote site. Adding points in ArcIMS at the time was a serious challenge. Adding lines was nearly impossible. I quickly quit. The project kept growing until the first deployment was such a failure they had to hide the bodies.

Obviously a place where Agile would have made a big difference. "Architects" in the ivory tower of the CIO shop had decided that we must have web apps for everything. If they had just put the demo app in front of an actual user, the whole project could have been quickly canceled.

I was listening to Uncle Bob Martin's Agile Skypecast while mowing the lawn on Sunday. It's a must listen. Despite all of the BS and corporate maneuvering around the buzzword, agile has deep principles behind it. I suggest ditching the buzzword and sticking to the concepts.
Early and frequent delivery of working code. Daily customer feedback. Developer testing (I am still working on mastering that one.) And the big one: Embrace Change. Don't fight requirements volatility, leverage it. (of course the trick is keeping the requirements stable during the iteration...)

It is interesting how Scrum leaves a lot of these things out and is, as Bob says, more of a management process that doesn't have much to do with software.

My favorite part of the agile universe remains the XP planning game. Point budgets as a means of giving multiple customers a voice, realistic estimation feedback, and ever so slightly, possibly, a little wee smidgen of fun. We could all use that from time to time.

Anyway, Uncle Bob gives the principles a good run through, worth the time to listen to a podcast.

Thursday, June 14, 2007

I never really liked cats...


...but this has pushed me over the edge. The "LOLCATS" madness will pass.

Tuesday, June 12, 2007

Bad code...are you 'aving a laugh?

A little bomb I defused today in an ASP application I am replacing...weirdness explanation at the end.

nextTuesday = ""
showTuesday = ""
gotTuesday = 0
todayDate = weekdayname(weekday(now()))
'response.Write(todayDate)
if todayDate = "Monday" then
holdNextTuesday = date
nextTuesday = formatdatetime(date,1)
' response.Write("got it")
else
j = 0
'do until gotTuesday = 1
do while showTuesday = ""
j = j + 1

if j > 10 then
response.write("ERROR: Infinate loop
" & j)
response.write(weekdayname(weekday(nextTuesday)))
response.end()
response.flush()
end if

nextTuesday = DateAdd("D",j,Date)
'response.Write(weekdayname(weekday(nextTuesday)))
if weekdayname(weekday(nextTuesday)) = "Monday" then
gotTuesday = 1
holdNextTuesday = nextTuesday
nextTuesday = formatdatetime(nextTuesday,1)
showTuesday = nextTuesday
end if
'response.write(nextTuesday)

loop
'nextTuesday = ""
end if

thisMeetingDate = holdNextTuesday
'response.write thisMeetingDate
previousMeetingDate = DateAdd("D",-7,thisMeetingDate)

----
The meeting date was moved from Tuesday to Monday and they didn't change the variable names. I have no explanation whatever for the "infinate loop" check. That one just kills me.
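For what it's worth, the whole dance boils down to a couple of lines- here's a sketch in Ruby, just to show the logic:

require 'date'

# next Monday's date, counting today if today is a Monday (Date#wday: Monday == 1)
def next_meeting_date(today = Date.today)
  today + ((1 - today.wday) % 7)
end

previous_meeting_date = next_meeting_date - 7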

Biggest JRuby (Java) Lie

"Nearly impossible to crash runtime or VM" -From Nutter and Enebo's MountainWest RubyConf presentation.

I managed to do it on a daily basis for years- the simple recipe is to add WebLogic to the project.

This is not to take anything away from their achievement with 1.0. It's a Java application running Ruby nearly as fast as the C implementation. I am very curious to see if it's as compatible with horizontal scaling...

Wednesday, May 30, 2007

Many Lo-Fi Prototypes


I've been listening to the audiobook of 10 Faces of Innovation by Tom Kelley of IDEO. He has a great insight into something I sorta knew, but had never thought about why it was so.

The basic idea is that showing just one prototype to a customer is bad. This tends to force them into a binary/reactive decision, one that is often determined more by their relationship to the creator of the prototype than to the item itself. Given multiple items to evaluate, the attention turns more to the items, and more feedback can be gathered, since it won't be perceived as direct criticism.

The telling analogy Kelley provides is that of a husband who is told by his wife that she has purchased a new dress. She then tries it on and asks how it looks. Of course, it is difficult for the husband to say anything negative, because the question is really about how she looks wearing that dress. However, given multiple options to consider, the husband is actually freer to give honest feedback.

Now, it can be expensive to buy multiple dresses, or build multiple prototypes. The answer then is to use lo-fi prototypes: whether it be pictures in a magazine or crude drawings. Still, the more prototypes that are created, the less pressure there is to get it right the first time, which is a crushing force against innovation.

Having heard his brother speak at a conference in 2006, I love what IDEO are doing. If I didn't need to make quite so much money, and had any confidence in my abilities, I would like to work there...just to live at that creative level every day would make me so happy.

Thursday, May 24, 2007

Another retro RSS reader.


A must for Neal Stephenson fans, the RSS telegraph gives you all of your feedy goodness in Morse code.

Saw the steampunk turntables on BoingBoing- might have to trade in the 1200s.


Steampunk...

Empowerment and Trust

I've had this in draft for a while and can't figure out what the point was, but I don't want to think about it anymore. I've seen so many people chafing against the enterprise IT shop at one of my clients...unfortunately all I can do is add another forlorn whine to the choir.

My friend sixty4bit recently blogged about some trust and power issues at a client we have both worked for. Their behavior is bureaucratic in the extreme: they run all of their technology selections through a slow and cantankerous review board. In the end, the review board exists because the organization does not trust software developers and does not want to empower them to make decisions. The end result is that they slow everyone down and drastically reduce the effectiveness of the organization. The overall atmosphere created is one of fear of anything new, where people stop asking to try new things, because it isn't worth the time to get them approved. In a lot of cases, you have to get approval just to try something.

Laughably, this client is attempting to move to agile development processes, which are certainly focused on empowering the people who are doing the work to make decisions (although that is more of a Lean idea), while continuing on the opposite track in terms of empowering individual projects to meet their customers' needs.

In reality, particular decisions about technology selection can be contentious, especially in regards to projects across the enterprise choosing incompatible technology stacks. There is also a need to coordinate purchases across projects to gain negotiation leverage. Even more important, although completely overlooked by these bozos, is that the license terms for all software need to be read by actual lawyers. However, it's clear that the real issue here is one of trust and power, tied up with the fun of getting to decide how to spend other people's money.

I am working on a devious plan to: a) get them to trust me, b) keep them feeling powerful, and c) allow for innovation and progress. Reading "Influence: Science and Practice" by Robert Cialdini tonight. Maybe that will help with a and b.

Thursday, May 17, 2007

$11M for uLocate


Pretty big round for uLocate. Famous for providing support for Flagr, the mobile version of MapQuest etc.

I tried to download the new Where (how much did that domain name cost?) app to my phone, but it's not supported as of yet...

Their mobile technology is sorta cool- going for the whole write once run anywhere thing for mobile apps with an xml style application definition. A location specific version of something like mFoundry. Of course, when they don't support your phone, it's a little harder to buy into that concept.

Anyway, XML as a programming language? Just because it worked sorta okay for HTML doesn't mean there aren't better ways to do it- especially when you are talking applications.

Sunday, April 29, 2007

Mobile Platforms

For work purposes I am currently carrying two phones- a Blackberry and a Palm Treo 650. I like both of these OSes, although the Palm OS does crash quite a bit. I've also been testing a couple of mobile GIS applications running on Windows Mobile 5. We're trying to figure out where to go next with it- RIM, Symbian, Linux, the iPhone, etc.

While I like RIM and Palm OS, with an edge to RIM, Windows Mobile is completely unappealing to me. The main plus from a development standpoint is that it is relatively standard across devices. However, this is easily eclipsed by the poor usability, the memory consumption, and the way in which closing most programs doesn't close them- it just hides them. The stability varies pretty widely across devices, but it's about on the level of Palm OS.

I am still waiting for the mobile OS that implements virtual memory paging (with an SD card as the pagefile holder). It might be slower, but the RAM is so easily consumed in simple apps, it would enable a variety of applications that just don't work now. I know, you just have to wait a couple of years and it will all change, but right now it's a very tough decision if you want to figure out which mobile OS to develop on.

Now, some might say- why not just do mobile friendly web applications instead of native applications? For a lot of applications, you need more interactivity than the limited browsers on the phones provide. For others, the constant lag is the impediment. Overall, it's an area of potential, but limited to only the simplest applications today.

However, the mobile device is where some of the virtual machine technologies can really come to bear. In particular, Java Mobile Edition is pretty widely available, even if the apps look as out of place as they often do in Desktop OS land. So, you can do a Java app and get a reasonable percentage of phones to work, although they all seem to implement slightly different profiles.

Now we have the Apollo, Silverlight, JavaFX model of what are in reality enhanced web browsers that build UIs out of things other than HTML. JavaFX mobile is one picture of how things could go in that direction, where the environment could provide all of the basic phone functions, in addition to providing a decent programming environment for content delivery. Of course, I am not sure how much of that is really one thing, or if it's a set of technologies that Sun is lumping under one banner.

Anyway- let's say I have an opportunity to port one of these mobile GIS apps to a platform or OS- which way should I go?

Saturday, April 28, 2007

Productive Thoughts

Scott Simpson: I love saving things to del.icio.us w/ a "toread" tag. Offloading work to the future, incredibly productive, version of myself. [link] Kind of a cool idea for using del.icio.us, especially if you use the firefox plugin. Just highlight a bit of text and then tag it. So easy. I haven't gotten as much out of the more advanced del.icio.us bookmarks plugin, but that might be because my current bookmarks are a pile of junk.

The main garbagey bit about del.icio.us is that there is no good way to manage tags with the default apps. Idea! I am going to hunt around to look for someone that has done that or start digging into the API a little bit.

I guess one of the main competitors to this stuff is Google Notebook, but I really don't enjoy using that for some reason I don't yet understand. Those little links in the search results pages are tempting though.

Thursday, April 26, 2007

del.icio.us friends

Thinking again. One of the things Steve Poland is going after is the concept of webothlike:

Basically the idea of WeBothLike.com is to connect like-minded people — people that have the same interests. You’ll go to the website and answer questions, as many as you want. The more you answer, the more profile data we have on you — and the more we can contrast you with others in our system, and match you up with them

I think rather than asking questions it would be better to try a recommendation style algorithm on your del.icio.us links, or bookmarks, etc. so that you could get a match score for how similar you are to other people. Basically you could submit all of your links and get a quick score for who else is tagging the same things as you. You could take a similar approach with OPML comparisons to find who is reading the same things as you. On the one hand, the output of this would be suggestions about what to read- and really, who doesn't have too much to read already? On the other hand, a cooler output suggested by WeBothLike would be to find people to scheme with.
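The scoring itself doesn't need to be fancy- something like a Jaccard score over two people's bookmarked URLs (or tags, or OPML feeds) would do as a first cut:

require 'set'

# overlap divided by union, from 0.0 to 1.0
def similarity(mine, theirs)
  mine, theirs = mine.to_set, theirs.to_set
  union = mine | theirs
  return 0.0 if union.empty?
  (mine & theirs).size.to_f / union.size
end

similarity(%w(rails rest gis), %w(rails gis lolcats))  # => 0.5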

Anyway, I have a feeling someone has already done this somewhere, but it's not part of the main del.icio.us application, and it probably should be.

Wednesday, April 25, 2007

Google Reader Reader Robot


How about a project that downloads your unread feeds from Google Reader, uses a speech synthesis program to convert them to mp3, adds the mp3s to a podcast server, which you then script to sync to your portable device? Then you can listen to all of your news on the way to work, as read by your choice of a feminine or masculine robot.

If you like that one, wait until you hear about my idea for Google Reader Printer for my poor friends that have no internet connection at their desks.

Hopefully someone else is working on a good Podcast transcriber, like Podzinger. Once there is a great speech to text solution, I can come up with a convoluted name for that.

Okay, these are lame ideas, but someone's got to pick up for that Techquila Shots guy.

Reporting and OR/M

Just an off the cuff thought about using Object Relational Mapping (Hibernate, TopLink, ActiveRecord, etc.) when doing reporting: does it make sense to have objects corresponding to specific reports that return denormalized data, so as to hold the results of the query? It makes a little more sense if you have a view corresponding to the report, and then an object corresponding to the view. At some point it does seem to cross a line into excessive abstraction and an overgrown data model.
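A rough sketch of that middle ground- a denormalized database view with a read-only ActiveRecord model on top of it (all the names are invented):

# assumes a view like: CREATE VIEW order_summaries AS
#   SELECT customers.name AS customer_name, SUM(orders.total) AS total_sales ...
class OrderSummary < ActiveRecord::Base
  set_table_name 'order_summaries'  # point the model at the view, not a table

  def readonly?
    true  # report rows never get written back
  end
end

OrderSummary.find(:all, :order => 'total_sales DESC')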

Monday, April 23, 2007

Simple


I think this is the route I am going to go when I have to do another Java web application. I have become improbably enamored with horizontal scalability. The primary downside is that they started using XML configuration files. Curse you, XML configuration. The XML config is just there to compensate for the lack of dynamic language features...

Seriously though- the old way of building Java web applications is seriously threatened by the latest .Net stuff on one side, and the dynamic language stuff on the other. I think .Net 3.0 is not helping much on the complexity angle.

It makes me laugh to think how much the REST architecture resembles the Java Servlet Spec. All of these frameworks to abstract away HTTP verbs, when those were exactly what was needed to build application structures that are concordant with the architecture of the web itself.

Tuesday, April 03, 2007

Ruby Community

I don't know who "Dan" is, but I like the way he rolls.

"I have no idea what the "Ruby Community" is. How do you define
membership in this community?"

No, no, the "Java Community" has a defined static membership, but the
"Ruby Community" is more dynamic. As long as you respond to the right
messages, you're just assumed to be a member.

Monday, April 02, 2007

Productivity tip?


I am considering switching back to using Lynx as my primary browser. My theory is that it makes the web so much less compelling that I won't be as distracted by it. Slightly easier in terms of keyboard navigation so I don't have to touch the mouse. Runs great in cygwin too. The biggest downside I am seeing so far is that Google Maps doesn't run very well.

I don't see them doing much to fix that. Oh well, back to 9 tabs of FireFox I suppose. Apologies for the mind drip.

Language Games

I've been following with great interest Jeff Atwood's series of posts on Alistair Cockburn's work on software development as a cooperative game.

I was wondering if he was going to dig down to the philosophy level on this, and he may, but this concept of the cooperative game is an extension of Wittgenstein's concept of language games, first outlined by Pelle Ehn. Reading Wittgenstein, more than anything else, really opened my mind to an incisive type of analysis that attempts to dissolve a seemingly unanswerable question by proving that it is nonsensical out of the context of the particular language game that gives it sense.

The purported "process" that underlies a project and the actual intuitive set of rules that guide the behavior of and between individuals are clearly different things. It's obviously important to pay attention to both. This is why Cockburn's approach to agile "method shaping" is so important: every strict process must itself have a meta process by which it is applied to any particular collection of individuals.

I really like Alistair's interview with Bob Payne (especially the parts where Mr. Cockburn is speaking) on the Agile Toolkit podcast.

Tuesday, March 27, 2007

Mingle- ThoughtWorks hotness?


Wow, the high priced consultants @ ThoughtWorks have broken into commercial product development. This is going to be interesting.

Mingle is apparently an agile project management tool. We've looked at VersionOne and Rally in that space (not to mention Team Foundation Server), but we're actually using Trac, which is hard to beat for the svn integration and the price. I wonder how much this pup is going to cost?

Sunday, March 25, 2007

Communication Cost


Just caught this great post from Lisa Haneberg that echoes some ideas I have had in the past, like a clock that shows the running total cost of a meeting as it "progresses".

Top level costs: the cost of time for people to receive (hear, read) the communication and the cost for creating/delivering communication. If you send an email to your entire team of 50 people. And if the email takes 3 minutes to read, and 15 minutes for you to write - the top level costs are of people's salaries for the 3 minutes and your 15 minutes. Plus the costs of the email server time and space, etc... But the time is the largest cost. You may think this is a small amount, but multiply this times 100s of emails we deal with each day, and the costs add up.

If you book a two hour meeting with 15 people - the top level costs are huge.

But that's the just start of the costs. There are two other important types of costs...


I have thought about this in terms of meetings quite a bit. I've had lots of extremely expensive meetings to sit through. Meetings where our tax dollars were being consumed at unhealthy rates. Now, there are the people where having them in a meeting is actually more productive than having them cause damage through their other work, but it's hardly the optimal solution. However, applying this thought to email and phone calls is interesting.
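Just to see the order of magnitude, a quick back-of-the-envelope calculation- the hourly rate is a pure assumption:

HOURLY_RATE = 75.0  # assumed fully-loaded cost per person-hour

def communication_cost(people, minutes_each, author_minutes = 0)
  ((people * minutes_each + author_minutes) / 60.0) * HOURLY_RATE
end

communication_cost(50, 3, 15)  # the 50-person email from the quote: about $206
communication_cost(15, 120)    # the two-hour, 15-person meeting: $2,250

And that's before the two other types of costs she mentions.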

Think before you send...I've been doing the whole "Inbox Zero" thing. It's really great that I don't have a huge unprocessed morass of email to look at. I do have to get better about keeping up with my todo lists though. It's still easier for me to forget those because I don't have a blinking light telling me to look at them every time a new one shows up. Maybe it's not as fun to look at because, unlike email, I already kind of know what is on my todo lists.

Jeffrey Veen and Merlin Mann were talking about how Google is a no phone call culture to some degree, everything is email, IM, with a definite preference for the asynchronous communication. Not sure how much of that is introvert culture and how much is efficiency, but it sounded like they were positively bombarded with email information there.

Maybe there is a real economic benefit to parsimonious communication: learning to be clear, concise and brief saves money. Maybe this should apply to bloggers as well- I know there are people out there struggling to keep up with their feeds...on that note, I'll be signing off.

Tuesday, March 20, 2007

Treat Contractors Well (please)


I like this post from Scott Berkun (of the Art of PM book).

One habit many managers have is to dump the boring, unpleasant work of their team onto contract workers. The thinking is that full timers deserve the best treatment and contractors are mercenaries: they deserve whatever they get since they won’t be around long.

It’s a mistake - good managers finds a way to treat everyone well. And there are some reasons contractors deserve special attention


I don't think it applies 100% to government contracting, but it does make a lot of sense. Please treat my team well, government overlords!

Tuesday, February 20, 2007

Wiki Patterns

I have been trying to get a good collaborative space set up for my company forever now. It just has never risen high enough in the priority chain to really become a collaborative effort...although I have had some very good support from a couple of people who really get it.

Anyway, I happened across this link to Atlassian's wikipatterns site at Web Worker Daily. So far it's a pretty basic set of patterns, but I plan on adding a few of my notes.

Here are some random things that I think are decent ideas for making a wiki well used with the audiences I have been dealing with.

1) WYSIWYG editing. Most people don't want to learn a markup language. It really lowers the barrier to creative usage.
2) Make it the authoritative source. No one wants to look at or update data that might be wrong, if there is another place where it is definitely right. If the wiki isn't the authoritative source, link the other content in, don't make copies that will get out of sync.
3) Fun stuff. It's absolutely worth having a few pages where people can post silly stuff when they feel frustrated.
4) RSS feeds or change summaries. Crucial if you are using it for anything newsy, as opposed to reference.

I have set up MediaWiki and many others. MediaWiki looks like a really ugly piece of code, and I had problems with it on some customer networks with strange proxy behavior. Instiki and most of the Ruby on Rails wikis are missing crucial features- like security models. I actually like the Trac wiki for development projects, even though the wiki isn't really Trac's crucial feature. I am extremely disappointed Google snapped up JotSpot right as they were about to release a self-hosted version. As convenient as Software as a Service offerings like Blogger are, I would much prefer hosting the app myself. JotSpot, while being vastly more than a wiki, really benefits from being the engine behind dojo.js development- Dojo is a great Ajax lib, and it makes Jot cool.

I never thought the Atlassian Confluence stuff was worth the price, but the patterns site is a good idea. In fact, I suppose that was the origin of the wiki concept in the first place- Ward Cunningham's c2 wiki for the Portland patterns group is still the canonical example. How far we haven't come: Web 2.0 v. Web 1995.

Thursday, January 25, 2007

Google Earth and Standards

I can't comment on the All Points Blog...they must have me marked as a spammer.

Anyway, I won't link to the post by Adena Schutzberg due to that, but here's the content, jacked from a Federal Computer Week article.

"I have a question on this:

Interviewed in the Google booth, which resembles the bridge of the Starship Enterprise, Painter [director of Google Earth Federal] said that although the public Google Earth uses commercial satellite and geospatial imagery, Google Earth Fusion allows federal agencies to manipulate and integrate their own geospatial imagery with the company’s software tools.

Imagery or software? Isn't Google a member of OGC? Is it moving forward on implementing those standards such that it can do both with ease? Is that not the point of OGC? Is DoD pushing Google to implement such standards? See for example: NGA Announces Requirement for OGC and Complementary Standards."


I don't think there is a high performance streaming 3D imagery XML standard, is there? In any case, NGA is presumably rational. They are not going to sacrifice user experience for the sake of standards compliance, particularly where the standards board is as commercially fractious as OGC has been.

While there may be a requirement to support those standards, Google Earth already meets it easily in the form of reflectors for OGC image services, served as overlays or super-overlays on top of the base streamed data. All of the major data types can already be imported into the Google Earth server to form the base layers.
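As a rough illustration of the reflector idea (not how Google's Fusion tooling actually does it internally), here is a small Ruby sketch that writes a KML GroundOverlay whose image source is simply a WMS GetMap request against an OGC server. The server URL, layer name, and bounding box are placeholders, not a real service:

    require "erb"
    require "cgi"

    # Hypothetical WMS endpoint and layer; substitute a real OGC-compliant server.
    WMS_BASE = "http://example.com/wms"
    params = {
      "SERVICE" => "WMS", "VERSION" => "1.1.1", "REQUEST" => "GetMap",
      "LAYERS"  => "parcels", "SRS" => "EPSG:4326", "FORMAT" => "image/png",
      "TRANSPARENT" => "true", "WIDTH" => "1024", "HEIGHT" => "1024",
      "BBOX"    => "-77.2,38.8,-76.9,39.0"   # west,south,east,north
    }
    getmap_url = WMS_BASE + "?" + params.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join("&")

    # A GroundOverlay drapes the WMS image over the same bounding box in Google Earth.
    kml = <<KML
    <?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://earth.google.com/kml/2.1">
      <GroundOverlay>
        <name>WMS reflector overlay</name>
        <Icon><href>#{ERB::Util.html_escape(getmap_url)}</href></Icon>
        <LatLonBox>
          <north>39.0</north><south>38.8</south>
          <east>-76.9</east><west>-77.2</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>
    KML

    File.write("wms_overlay.kml", kml)

Open the resulting file in Google Earth and the OGC layer shows up on top of the proprietary base imagery- standards compliance at the overlay level, without touching the streaming format.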

To somehow suggest that Google should use an OGC format as their primary streaming format is a really bad idea for everyone- especially the data owners who would end up giving away their data. I remember the same insinuations being made against ESRI in years past, and then people deciding they didn't want to pay the performance penalty for standards compliance. Why don't standards committees ever look at what works best and then choose that as the standard, instead of trying to prognosticate the market, technology, and user needs years in advance? While the shapefile was never an OGC standard, it was easily the lingua franca of the GIS community for a long time. The same goes for the good old .e00.

My wife worked with the poor fellow ESRI designated as its OGC standards body rep. He was not a person who loved his job. Basically, he was a punching bag, watching competitors try to push the standards away from technology that was compatible with proven success.

Thursday, January 18, 2007

Manual Reverse Geocoding


So this guy's letter was delivered without an address- just a map. Reverse geocoding at work. Humans rock- it's dead hard to get a computer to do this. Actually, maybe I should just say the UK Post rocks- I can't see our USA civil servants doing anything with this besides sending it to the DLO or trying to arrest the person who sent it for subversive activities.

One of the worst experiences in my life was spending three days geocoding a dataset of points for Wien (aka Vienna, Austria) with ArcView 2.1. (ArcView would try to suggest a point, and I would try to move it to the right place.) House numbers there don't follow a very organized pattern, and I hope the errors I inevitably must have made didn't get anyone killed. Or didn't get the wrong person killed, or anything like that.
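For contrast, the dumbest machine version of reverse geocoding is just snapping a coordinate to the nearest known address point; everything past that- reading a hand-drawn map, coping with messy house numbering- is where humans still win. A toy Ruby sketch, with invented coordinates:

    # Toy reverse geocoder: snap a coordinate to the nearest known address point.
    # The address list and coordinates here are made up for illustration.
    ADDRESSES = [
      { label: "Stephansplatz 1",  lat: 48.2086, lon: 16.3731 },
      { label: "Graben 21",        lat: 48.2090, lon: 16.3698 },
      { label: "Kärntner Str. 38", lat: 48.2036, lon: 16.3697 }
    ]

    # Equirectangular approximation is plenty for points a few blocks apart.
    def distance_km(lat1, lon1, lat2, lon2)
      deg = Math::PI / 180.0
      x = (lon2 - lon1) * deg * Math.cos((lat1 + lat2) / 2.0 * deg)
      y = (lat2 - lat1) * deg
      Math.sqrt(x * x + y * y) * 6371.0
    end

    def reverse_geocode(lat, lon)
      ADDRESSES.min_by { |a| distance_km(lat, lon, a[:lat], a[:lon]) }
    end

    p reverse_geocode(48.2088, 16.3725)   # => {:label=>"Stephansplatz 1", ...}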

link via: /usr/bin/girl, which incidentally is one of the first blogs I ever read.

Sunday, January 14, 2007

Beautiful Brochure...ugly product

I have been doing a light evaluation of some software products in anticipation of some upcoming requirements for a project I am working on. The product that the architect who held my position before me had been ready to buy is a real piece of work. It commits three violations of trust that no company selling software should commit:

1. Where's the product?
I don't trust that there even is a product when you can't go to the website and get any clue about how to buy it or even a wild guess as to its cost. You can download a gloriously illustrated brochure of bullet-point software features, with teeny tiny pictures of the UI. You can read about solutions and services, but you can't buy anything? It sounds like a services trap: call us, we'll figure out how much money you have and set our price accordingly, then you buy/rent our software, we finish it for you at an hourly rate, and we own the code we write for you.

I guarantee there isn't a "there is no step three" install process.

2. UI design = sorry, we spent all of the design money on our brochure.
What does it say when a company has an obviously professionally designed, frankly beautiful brochure, but its actual product looks completely undesigned and is just plain hideous? That they care about people's experience only until they have your money?

3. No mention of an API anywhere.
Monolith sensors on full alert. This thing has to integrate with my enterprise; I don't need to buy an ERP system where everything is a module of your system rather than part of a distributed architecture. (No one needs to buy a "complete" ERP system...ever.) Currently dealing with the seemingly vast, but still incomplete, APIs of Microsoft Dynamics Great "Pains" has me watching out for software that makes itself hard to fit into a distributed world. This software isn't even a web app, so there's no chance of URL hacking.

Sorry, no dice. I am specifically not naming the guilty party here so that they don't begin any legal actions against me for defamation of character, or in case I decide to apply for a job there someday to rewrite the thing. I'll post an offensive screenshot at some point in the future on an unrelated topic.

Saturday, January 13, 2007

Refactoring Ruby

Jay Fields and friends are rewriting/porting Martin Fowler and friends' Refactoring: Improving the Design of Existing Code book to Ruby. I guess since the IDEs aren't there yet, we might as well get going on the manual process! Good show to Jay et al. They've made a dent in the first section; I hope they keep it up.
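For a flavor of what the Ruby versions read like, here is my own small Extract Method illustration (not taken from the book or from Jay's port)- pulling a price calculation out of a formatting method:

    # Before: statement both calculates and formats the total.
    class OrderBefore
      def initialize(quantity, item_price)
        @quantity, @item_price = quantity, item_price
      end

      def statement
        total = @quantity * @item_price
        total -= total * 0.05 if @quantity > 100   # bulk discount buried in formatting code
        "Amount due: $#{format('%.2f', total)}"
      end
    end

    # After Extract Method: the calculation gets a name and can be tested on its own.
    class OrderAfter
      def initialize(quantity, item_price)
        @quantity, @item_price = quantity, item_price
      end

      def statement
        "Amount due: $#{format('%.2f', total)}"
      end

      def total
        subtotal = @quantity * @item_price
        @quantity > 100 ? subtotal * 0.95 : subtotal
      end
    end

    puts OrderBefore.new(120, 3.0).statement   # => Amount due: $342.00
    puts OrderAfter.new(120, 3.0).statement    # => Amount due: $342.00

Same behavior, but the second version gives the calculation a name you can test and reuse- which is the whole game, with or without IDE support.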

I still keep a copy of Refactoring on my desk. It's one of about 20 books I still need to keep around within reach. The content is somewhat timeless, and (was) not readily available on the internet. Still, the Java-oid nature of the text makes it a little less relevant to my current world of Ruby. It also serves as a badge of good programming knowledge.

I've been selling off a lot of my other books on Amazon. Someone bought the Sun Certified J2EE Architect guide from 2002 for $16. Someone else bought Rod Johnson's old Expert J2EE development, the early edition, before he finished Spring, for $15. No takers on Tapestry yet at $10. Many books have a used value below a $1. I am debating what to do with them as it's not worth my time to sell them. The chances I need a Turbo Assembler reference have dropped about as low as they can, no library would want it, still, it was a good book in it's day, and it is sad that it is no longer relevant to anyone's life. I also think "professional Java Web Services" is probably not worth the shelf space. Well, maybe it will be recycled into a better book someday.