In Search of the Silver Bullet

To make a beginning, you must have already reached the end. Only by solving a problem does one really understand the problem at hand.

There is a tendency in computing to look for the Silver Bullet. The Silver Bullet is that piece of technology that will magically solve the current problem. Unfortunately, it can be used as an alternative to solving the problem at hand.

Let me give an example, from an episode of Columbo, of how an alternative solution becomes apparent only after a first solution has been reached.

In the episode “The Bye-Bye Sky High I.Q. Murder Case”, Columbo is presented with a problem. There are three sacks, each holding an equal number of coins. Two sacks contain real gold coins, each weighing one pound. One sack contains fake coins, each weighing one pound and one ounce. Given a scale, and being permitted only one measurement, how do you determine which bag holds the fake coins?

The solution is that you take one coin from bag one, two from bag two and three from bag three. If the combined weight is 3 pounds and one ounce, bag one contains the fake coins. If the weight is 3 pounds and two ounces, bag two contains the fake coins. If the weight is 3 pounds and three ounces, bag three contains the fake coins.

This is a solution to the problem that most people will discover.

However, there is another solution: take no coins from bag one, one from bag two and two from bag three. If the measurement is 3 pounds, bag one contains the fake coins. If the combined weight is 3 pounds and one ounce, bag two contains the fake coins. If the measurement is 3 pounds and two ounces, bag three contains the fake coins. In this case, you reach the same answer using three coins as opposed to six.
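The one-weighing trick can be sketched in a few lines of code. This is a hypothetical illustration (weights in ounces, bags indexed from zero); the point is that both coin-selection schemes identify the fake bag from a single excess-weight reading:

```python
# Hypothetical weights in ounces: real coins 16 oz (1 lb), fakes 17 oz.
REAL, FAKE = 16, 17

def find_fake_bag(bags, coins_per_bag):
    """One weighing: take coins_per_bag[i] coins from bag i.
    The excess over the all-real weight, in ounces, identifies the bag."""
    weight = sum(n * bags[i] for i, n in enumerate(coins_per_bag))
    baseline = sum(coins_per_bag) * REAL  # weight if every coin were real
    excess = weight - baseline
    # The excess equals the number of coins taken from the fake bag,
    # so distinct coin counts map it back to exactly one bag.
    return coins_per_bag.index(excess)

bags = [REAL, FAKE, REAL]  # bag at index 1 holds the fakes

# Intuitive scheme: 1, 2, 3 coins (six coins total)
assert find_fake_bag(bags, [1, 2, 3]) == 1

# Alternate scheme: 0, 1, 2 coins (three coins total)
assert find_fake_bag(bags, [0, 1, 2]) == 1
```

Either scheme works because the coin counts are distinct; the alternate one simply starts counting from zero.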

The first solution is ‘intuitive’ because we naturally find it easier to reason that if the weight is 3 pounds and x ounces, then bag x contains the fake coins. It is a natural mapping of numbers.

The second solution only becomes obvious after the problem is solved. It is an equally valid solution, but not obvious.

Once the problem is solved, one feels more at ease to look for alternative solutions.

If the intuitive solution had not been found first, it is unlikely that anyone would have come up with the alternate solution.

Therefore, to find a better or alternate solution, one must solve the problem first.
 

In computing, the Silver Bullet becomes an alternative to solving the problem at hand. The Silver Bullet defers reaching a solution or, more correctly, admitting failure.

If a problem is solved, although not in an ideal way, then the correct, or suitable, Silver Bullet can more easily be found.

If only a token attempt is made to solve the problem, then all Silver Bullets appear equally attractive, and each alternative Silver Bullet is likely to exhibit the same deficiencies.

This is because solving the problem at hand, although not in an ideal way, helps to create the specification for the Silver Bullet. By clarifying the functionality and scope of any solution, trying to solve the problem at hand helps to define the problem clearly. If one understands why the current problem was unsolved or inadequately solved, it becomes much easier to solve the problem in a different way. Having solved the problem also buys time to solve it in the best way, because it is no longer a time-critical issue.

So, before you choose and fire the Silver Bullet, make a substantial effort to actually define the target.


January 6, 2018 at 5:25 pm

The 80/20 Rule (The Pareto Principle) Revisited

Recently (time is relative), I wrote about the 80/20 rule. This states that it takes 20% of the time to do 80% of the work. The relatively short period of time taken to do the 80% of the work gives a false impression of the time necessary to complete the entire project. Also, it is very easy under time pressure to do the easiest 80% of the project. This means that 80% of work is done in significantly less than 20% of the effective project time; the time to completion is actually increased for a short-term benefit.
I just watched an interesting PluralSight video called “Architecting Applications for the Real World in .NET” by Cory House (http://pluralsight.com/training/Courses/TableOfContents/architecting-applications-dotnet), which takes a different approach to the 80/20 rule. It accepts that 20% of the time is all that it takes to complete 80% of the work. However, it states that in certain circumstances, 80% of the work is all that is required, since the remaining 20% of features are not critical to the viability of the product.
This raises two interesting questions: what does complete mean, and must everything be 100% complete? If 80% complete provides the required features of the product, then 80% may be more than satisfactory.
Years ago, I worked on a project that produced certification papers for pharmaceuticals. The products were produced by one system and then passed to another. However, no documentation was produced to permit tracking of the product. A simple program was written where the operator manually typed in the information and a document was printed. Over a period of two years, a new ‘God’ product was designed to automate the production of this certificate. The project became so massive that it was never completed. In fact, the simple manual approach had already solved the problem.

So, I want to amend my concept of the 80/20 rule. It takes 20% of the time to do 80% of the work, but sometimes, 80% is enough. My proviso is that the 80% completed should be the most important part of the work.
Given that the most critical components of the project are solved in 20% of the time, it should therefore be possible to put software into production in 20% of the time it takes to produce a complete product. This will validate that the correct 20% was done.
The Minimum Viable Product (http://en.wikipedia.org/wiki/Minimum_viable_product) pushes this philosophy. The MVP is the minimal product that can be created to test the feasibility of a product. The classic example is a person evaluating if there is a market for a new product. He advertises his product to test the demand. He has no product available. If there is sufficient demand, he then produces the product. His upfront cost is minimal, as is his risk. He only commits to expenditure when he knows that there is a demand.
In software, we can extend the 80/20 rule by incorporating the concept of the Minimum Viable Product. We prioritize features into must-have, should-have and could-have. The Minimum Viable Product has all of the must-haves.
Often, customers place everything in the must-have category. The stronger the guarantee that the product will go into production once the must-have features are complete, the greater the customer’s incentive to create a realistic list of required features.
Therefore, if 80% of the work is focused on the most important features, a Minimum Viable Product can be made available in 20% of the time that it will take to complete the 100% solution.
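The prioritization above can be sketched as a trivial filter; the feature names here are invented for illustration. The MVP ships with every must-have and nothing else:

```python
# Hypothetical feature list, tagged with MoSCoW-style priorities.
features = [
    ("user login",      "must"),
    ("order entry",     "must"),
    ("audit reporting", "should"),
    ("theme switcher",  "could"),
]

# The Minimum Viable Product is simply the must-have subset.
mvp = [name for name, priority in features if priority == "must"]
print(mvp)  # -> ['user login', 'order entry']
```

Everything outside `mvp` waits until the product is in production and its viability is validated.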
The Pareto Priority Index quantifies this (http://en.wikipedia.org/wiki/Pareto_priority_index):

PPI = (Savings × Probability of success) / (Cost × Time to completion)

This simply states that reducing the time to completion and the cost increases the priority of a project. Providing the most essential value in the product also increases its priority, by increasing the savings and the probability of success (adoption).
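Assuming the formula takes its usual form, PPI = (savings × probability of success) / (cost × time to completion), a small sketch with invented figures shows how cost and time drive priority:

```python
# Pareto Priority Index: higher means do it first.
# All figures below are invented for illustration.
def ppi(savings, probability, cost, time_to_completion):
    return (savings * probability) / (cost * time_to_completion)

# Two candidate projects with the same payoff and odds of success:
quick_win = ppi(savings=100_000, probability=0.8,
                cost=20_000, time_to_completion=2)   # cheap and fast
big_bang  = ppi(savings=100_000, probability=0.8,
                cost=50_000, time_to_completion=10)  # expensive and slow

# The cheaper, faster project ranks higher.
assert quick_win > big_bang
```

Holding savings and probability fixed, shrinking the denominator (cost and time) is what moves a project up the queue, which is exactly the MVP argument.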

For more information about the risks of an MVP, see “Building a Minimum Viable Product? You’re Probably Doing it Wrong” by N. Taylor Thompson on the Harvard Business Review Blog: http://blogs.hbr.org/2013/09/building-a-minimum-viable-prod/.
An interesting case study of a Minimum Viable Product is given in “What Drones and Crop Dusters Can Teach About Minimum Viable Product” by Steve Blank, also on the Harvard Business Review Blog: http://blogs.hbr.org/2014/02/what-drones-and-crop-dusters-can-teach-about-minimum-viable-product/

March 26, 2014 at 8:32 pm

Making A Start

[Image: Jackson Pollock, Mural, 1943]

How do you start a document?

The white space is pretty intimidating.

This applies to both a document and a painting.

The painter Jackson Pollock was commissioned by Peggy Guggenheim to create a mural for her house.

Pollock signed a gallery contract with Guggenheim in July 1943. The terms were $150 a month and a settlement at the end of the year if his paintings sold. He intended to have the mural done in time for his show in November. However, as the time approached, the canvas for the mural was untouched. Guggenheim began to pressure him. Pollock spent weeks staring at the blank canvas, complaining to friends that he was “blocked,” and seeming to become both obsessed and depressed. Finally, according to all reports, he painted the entire canvas in one frenetic burst of energy around New Year’s Day of 1944, although the painting bears the date 1943. Pollock told a friend years afterward that he had had a vision: “It was a stampede…[of] every animal in the American West, cows and horses and antelopes and buffaloes. Everything is charging across that goddamn surface.” Pollock’s “vision” may have been a memory from his childhood in the American West. While there is some suggestion of figuration within Mural, its overall impact is that of abstraction and freedom from the restrictions imposed by figures. (http://uima.uiowa.edu/mural/)

So, how did Pollock break the tyranny of space?

Look carefully at the canvas.

Do you see the word Pollock?

Sometimes you just have to make a start.

Anything will do. Just type something. Once you have made a start, then you have made the biggest step.

[Image: Pollock and Mural]

February 28, 2014 at 1:59 am

Creating a Software Narrative


So why a narrative?

Our lives are a narrative. They have a beginning, a middle and an end. As T. S. Eliot states: “In my beginning is my end.”

It is this narrative which is the dominant strain of our lives, and it is this narrative, which although at times diverted, defines who we are. Our narrative is defined by what we choose, and what we reject. Our choices present a narrative and this narrative, or composition of choices, defines the line from our beginning to our end.

So how can we use a narrative to define our software?

Think of software as a narrative which describes a story. The story is the path from the beginning of the code to its end.

In the beginning, we name the piece of software. The name may be as simple as GetTime(). This defines the purpose. It defines the goal. At the end of the code, we would reasonably expect to have the time at hand. We would certainly not expect to have our bank account balance.

The code also states what the user must provide. For a method, these are usually the arguments or parameters.

The concept of encapsulation in software means that the software has a guarantee of what it will deliver, and delivers that. It does not specify how it will deliver it, but it also should not do anything that is not expected. These unwanted liberties are referred to in software as side effects.

Although, not a part of the contract, the code should be single purposed. If we ask for the time, we would not expect it to reset the clock. To reset the clock, one would expect to find a method called SetTime( newTime ).

We could have a method called GetAndResetTime( newTime ), but that really is mixed purpose. There is no logical relation between getting the time and resetting the clock.
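A minimal sketch of this narrative, using hypothetical Python names in place of GetTime() and SetTime(). Each method's name states its goal, its body pursues only that goal, and its end satisfies the contract:

```python
import time

class Clock:
    """A toy clock illustrating single-purpose, side-effect-free methods."""

    def __init__(self):
        self._offset = 0.0  # difference between clock time and system time

    def get_time(self):
        """Beginning: the name states the goal. End: it returns the time.
        No side effects -- asking for the time never changes the clock."""
        return time.time() + self._offset

    def set_time(self, new_time):
        """Resetting the clock is a separate, single-purpose method."""
        self._offset = new_time - time.time()

# A get_and_reset_time(new_time) method would mix two unrelated purposes;
# keeping them apart keeps the narrative of each method intact.
```

Usage: `clock.set_time(some_epoch)` then `clock.get_time()` returns roughly that epoch plus elapsed time, and nothing else about the object has changed behind the caller's back.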

So, our software should remain true to the narrative. Like a paragraph, it commences with a topic sentence, in our case, a name. This tells us what to expect to happen. There is a middle. The purpose of the middle is to satisfy the purpose as stated in its name. It should not do anything which does not relate to that goal. Then there must be an end, which satisfies the contract.

What we call the beginning is often the end
And to make an end is to make a beginning.
The end is where we start from. And every phrase
And sentence that is right (where every word is at home,
Taking its place to support the others,
The word neither diffident nor ostentatious,
An easy commerce of the old and the new,
The common word exact without vulgarity,
The formal word precise but not pedantic,
The complete consort dancing together)
Every phrase and every sentence is an end and a beginning,
Every poem an epitaph.

T. S. Eliot: The Four Quartets.

 

February 23, 2014 at 5:26 pm

One Size Does Not Fit All

 Recently I wanted to buy a new pair of trousers. I know that I’m 38 short (I know. I need to lose weight.) The 38 short was the easy part. I know that I’m not slim fit, but how Regular am I? Am I a Hilfiger Regular or a Levi’s Regular? If I pick carelessly, I could end up looking like I need to visit a bathroom, or look like the rear end of an elephant.

That made me think about how we pick software development methodologies. What can I say? It’s geek creep.

There appears to be a weird perception that only one magical methodology can be used on a specific project, and that this approach must apply to all phases and teams of the project.

It is possible to combine the best features of different methodologies.

For example, up-front design could conform to the concept of “Just Enough Software Architecture” (Fairbanks).  Evans of “Domain-Driven Design” did an interesting video with the same argument. The development cycle that follows could then be XP with pair programming. Leffingwell suggests this as a combination of RUP and Agile (“Scaling Software Agility”). Team management could use Scrums.

In defense of the indefensible, Waterfall actually places a strong emphasis on prototypes and proof of concepts. The Waterfall downfall is that the requirements phase tends to take so long that by the time it is completed, the requirements have changed or the solution is no longer needed.

Agile attempts to reduce this lead time by continuous ‘prototyping’ in collaboration with the customer.

To assail the unassailable, ‘Agile’ is not one methodology but a series of methodologies, some of which predate Waterfall. For example, scrums are not dissimilar to the briefings and debriefings regularly used by pilots on missions. The issue with Undisciplined Agile (Fragile) is that it encourages bad behaviors. It encourages an attitude that users are basically stupid and don’t know what they want, and there is so much emphasis on progress (burn rate) that often the most critical tasks are deferred. I have seen several ‘complete’ Agile projects which provided only a user interface with no concept of data persistence.

The important thing to remember about Agile is that it is not a methodology, but a series of methodologies formalized in the Agile Manifesto. If you look at the signers of this document, you see a whole lot of different approaches: Beck and Cunningham (XP), Sutherland and Schwaber (Scrum). Then there are the other evangelists, each with their own focus: Poppendieck (Lean), Ambler (Agile Modeling). Everyone uses Agile. The issue is which flavor of Agile.

The most important thing to decide on a project is not the methodology to use, but a decision about what has to be achieved and the best mix of methodologies to use to achieve that goal. In fact, Cockburn (Crystal Clear) suggests minimal methodology up-front and adding the methodologies as required. This is interesting as Cockburn’s book “Crystal Clear” was based on the results of examining successful projects to see what made them work. (And Cockburn must be correct since he is a signer of the Agile Manifesto).

If the chosen methodology does not work, modify it or throw it out. It is not a loss. You got this far as a result of the chosen methodology. It served its purpose. Now find one which fits the next stage.

So, I think I’ll go to the stores and try on the jeans. Until you actually try something, it is hard to know if it really fits. After all, one size does not fit all.

Cockburn, Alistair. “Crystal Clear: A Human-Powered Methodology for Small Teams.” Addison-Wesley Professional, 2004. http://www.amazon.com/Crystal-Clear-Human-Powered-Methodology-Small/dp/0201699478/

Evans, Eric. “Domain-Driven Design: Tackling Complexity in the Heart of Software.” Addison-Wesley Professional, 2003. http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/

Fairbanks, George F. “Just Enough Software Architecture.” Marshall & Brainerd, 2010. http://www.amazon.com/Just-Enough-Software-Architecture-Risk-Driven/dp/0984618104/

Leffingwell, Dean. “Scaling Software Agility: Best Practices for Large Enterprises.” Addison-Wesley Professional, 2007. http://www.amazon.com/Scaling-Software-Agility-Practices-Enterprises/dp/0321458192/

Royce, Winston W. “Managing the Development of Large Software Systems.” http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf

February 22, 2014 at 7:56 pm

Software Frameworks and Creativity

[Image: Alma-Tadema]

Recently, I worked on a project where we developed a software framework. We based our framework, including the program structure and package names, on the proven results of other groups. The goal was that developers could seamlessly move between applications based on this framework, and the massive reuse of common code would reduce the number of bugs. It also included a testing framework.

We had a new developer join the project, and it was decided that he did not need to follow the practices that we had established, or use the framework; it was felt that forcing him to do so would stifle his creativity. Ultimately, it was decided that our practices, including package naming, would differ from the format of all other groups because we were not bound by their standards. The new standards were defined by the new developer, who was reluctant to use the existing ones.

Historically, in the arts, all artists followed an apprenticeship program, and all great artists are the result of this system, with perhaps the exception of Caravaggio, who had minimal training.

Michelangelo worked in the studio of Ghirlandaio, and da Vinci in the studio of Verrocchio. As apprentices and assistants, they learned the mechanics of the craft by doing such chores as mixing pigments. This meant that they inherited a substantial wealth of experience, and emerged from the process as fully fledged artists. But, even then, they realized that their training was not complete. Da Vinci was a strong advocate of the intellectual knowledge of artists, in a period when artists were expected not just to have a knowledge of the arts, but also of literature, and even to write poetry. This was to move the arts, and the status of artists, from that of craftsmen to that of professionals. Although Michelangelo was a very literate man, and a writer of poetry, he was still the target of da Vinci’s barbs about how he was always covered in marble dust.

But, even as mature artists, they realized that they did not have all-encompassing knowledge. When Michelangelo was commissioned to do the Sistine Ceiling, he realized that his knowledge of frescoes was not complete, even though he had apprenticed with Ghirlandaio, who painted some major frescoes in Florence. Therefore, Michelangelo employed assistants to help him prepare the fresco surface and apply the pigments. He used this approach to learn from his assistants and then continue later without them, having learnt the craft.
Da Vinci, Michelangelo and Raphael are often referred to as the giants of the Renaissance, and later artists are commonly described as standing on the shoulders of these giants.

[Image: De Chirico]

The arts are full of examples of artists reaching back into history for insights and to learn from the masters. Classic examples are Picasso, Degas, Alma-Tadema, Jackson Pollock and Giorgio de Chirico (the precursor of Surrealism).

Without the work of Michelangelo, there would be no Caravaggio. Without Ghirlandaio, there would be no Michelangelo. It is by building on the previous knowledge of others and of ourselves that we make our greatest advancements.

There was originality in the Renaissance, but this was always based on a solid foundation of the past when it was available. For example, Da Vinci’s Last Supper is radical in that Judas is placed on the same side of the table as Christ. The Sistine Ceiling is radical in composition. But both works are firmly anchored in past tradition, and other conventions of representation.
As mankind has progressed, education has become more extensive. This is because each generation has to learn what all the prior generations knew. It is this progressive accumulation of knowledge which is the progress of mankind. Man is capable of great creativity, but it is always creativity based on a solid knowledge of the past.
This combined knowledge does limit our possibilities, but it removes the drudgery of making decisions about the mundane.
Years ago, I saw a BBC documentary on hypnosis. The subject was told that they were driving a car. Suddenly a child ran in front of the car, and they hit the brake. They used the right foot. There was no conscious effort to decide where the brake was. It was a mundane option that had been removed. How many drivers would say that having the brake on the right side would damage the ‘creativity’ / joy of driving? Using defined rules removes the mundane, and permits energy and choice to be applied to the truly creative elements of any work.

[Image: Degas, Apotheosis of Degas, created after Ingres’ Apotheosis of Homer]

February 17, 2014 at 11:02 pm

The 80 / 20 Rule

The 80 / 20 rule states that it takes 20% of the time to do 80% of the work, and 80% of the time to do the remaining 20% of the work.

I often get phone calls about ‘full-time’ jobs where the initial task will take about 5 to 6 weeks. The scenario goes like this: we have finished phase 1, but we are concerned about performance; or we have a Spring application running but we need someone who is an expert in Spring to do some ‘fine tuning’.

What they mean is: We completed the project, within budget and on schedule. Although the system looks OK, it continually breaks. To fix it will place us behind schedule, and we need someone who really knows the framework to ‘fix’ it immediately.

It can also be stated as: We used the cheapest labor possible to do what was most visible. The hard stuff, such as design or error handling, was ignored because it would hinder progress. Now we need to do that as fast as possible without changing what is ‘complete’.

Remember the Tower of Pisa. Lovely building. Looks great. Pity that no-one ever spent enough time on the foundations.

This is a problem common in poorly managed agile systems. To show progress, what gets done first is what is most visible. If only the simple stuff is done, it is not a product, it is a prototype without a proof of concept.

Leonardo da Vinci’s Last Supper has major damage. Even while da Vinci was painting the wall, the wall was deteriorating. He finished it (sort of), but the foundation was bad. He got 80% of the job done. The missing 20% was stabilizing the wall, or finding a suitable wall as a foundation for the painting; that he ignored.

Michelangelo was meticulous in preparing the ceiling of the Sistine Chapel, and employed assistants who knew how to prep a wall (Michelangelo had never done a fresco before). He spent a lot of time finding the best people to do the 80% of the work up front. The 80% of the work was the wall preparation. Once the wall was prepared, he could paint it. The prep is why the ceiling survived.

So, if you want a completed project, it is essential that you spend the 80% of the time up front to be ready for the visible work that follows. No matter how good the result looks, if the prep is bad, it will crumble, or lean.

The problem is that managers and clients want to see progress; they would rather see you working on the visible 80% than the foundation 20%; they want to see measurable progress; this is a customer management issue.

The issue is determining how much time to spend up front. The infrastructure has to be done up front without urgency as opposed to at the end of the project, when all that happens is a poor restoration job.

No matter how much work is done on da Vinci’s Last Supper, if the basic structure is poor, the painting will never survive.

Interestingly, the most successful method now used to restore frescoes is to physically remove the fresco from the wall, i.e. to separate it from any underlying problems, rather than rebuild the foundation of the work.

June 13, 2009 at 5:40 pm



The Creative Site Administrator

Creative Time Management.
