Sunday, March 17, 2013
People get confused when they try to reconcile what "cloud" means in the terms public cloud and private cloud. To me they are contronyms: words that are spelled the same but have two entirely different, even contradictory, meanings.
For example, the word "fast". It can mean to move rapidly. The runner ran fast. It can also mean to be held tightly. The mast held fast to the ship during the storm. Another example is "clip". He clipped the papers together. He clipped the coupons from the newspaper.
The word cloud is being used in two different ways. In public cloud, the term cloud refers to utility computing, which is effectively massively scalable. In private cloud, the term cloud refers to a virtualized computing environment, which is not massively scalable.
Wednesday, July 25, 2012
In order to build robust cloud applications, a client that calls a service has to handle four scenarios:
Complete Success
Partial Success (Success with conditions)
Transient Failure
Resource Failure
A partial success occurs when a service only accomplishes part of a requested task. This might be a query where you ask for the last 100 transactions, and only the last 50 are returned. Or the service only creates an order entry, but does not submit the order. Usually a reason is supplied with the partial success. Based on that reason the client has to decide what to do next.
Transient failures occur when some resource (like a network connection) is temporarily unavailable. You might see this as a timeout, or as error information indicating what occurred. As discussed in a previous post, continually retrying to connect to a temporarily unavailable resource impedes scalability because resources are being held while the retries occur. Better to retry a few times and, if you still cannot access the resource, treat it as a complete failure.
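To make that concrete, here is a minimal C# sketch of a bounded retry. The helper name, the attempt count, and the one-second delay are illustrative assumptions, not something from the original post.

using System;
using System.Threading;

static class RetryHelper
{
    // Sketch: retry a transient failure a few times, then give up and let
    // the caller treat the call as a complete failure.
    public static TResult CallWithRetry<TResult>(Func<TResult> call, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return call(); // complete (or partial) success is returned to the caller
            }
            catch (TimeoutException) // transient: the resource is briefly unavailable
            {
                if (attempt >= maxAttempts)
                    throw; // stop holding resources; surface the failure to the caller
                Thread.Sleep(TimeSpan.FromSeconds(1)); // short, bounded wait between attempts
            }
        }
    }
}

A caller might wrap a query such as RetryHelper.CallWithRetry(() => proxy.GetLastTransactions(100)) (a hypothetical proxy method) and still inspect the returned value for a partial success.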
With a resource failure, you might try another strategy before treating the request as a complete failure. You might relax some conditions and achieve a partial success. You might access another resource that can accomplish the same task (say, obtain a credit rating), albeit at greater cost. In any case, all failures should be reported to the client. You can summarize this responsibility in this diagram:
Bill Wilder helped me formulate these thoughts.
Sunday, March 18, 2012
In the simple example we have been discussing, the consequences of a failure appear immediately to the user. In a more complicated architecture there are many more tiers and many more dependencies. With more dependencies, more problems can result from poor decisions on how to handle failure. Those dependencies include other applications in your own shop, third-party libraries that you don't control, the internet, etc. For example, if your order queues fail, you cannot take orders. If your customer service app fails, you cannot retrieve member information. Unhandled failures propagate (like cracks) throughout your application.
Failures cascade: an unhandled failure in one part of your system becomes a failure of your application.
In deciding how to respond to failure we have to distinguish between two types of failure: transient failures and resource failures. Transient failures are not due to a component failing, but to a resource temporarily under load that cannot respond as fast as you had assumed. With resource failures you have to have an alternative strategy, because a component is not available.
Transient failures occur for short periods of time. The typical response is to retry the operation after a short period of time. But questions still remain. How often do you retry? What is a short period of time? What do you do with the data during the retry? On the other hand, remember that just as failures cascade, so do delays. While you are waiting or retrying, scarce resources (threads, memory, TCP/IP ports, database connections) are being held that cannot be used for other requests.
Go back to our WCF example and look at the try/catch block. If we had to do a retry, how would we change the logic? We would have to adopt a whole different strategy to handle failure; a retry loop inside the catch handler would not be enough, because failures could recur and not every error allows a retry. You have to design the entire routine around expecting failure because, as we discussed in our last post, failures happen.
Since slow responses usually come from resource bottlenecks, you have to treat them as failures if you are going to have reasonable availability. This means that transient failures can soon look like resource failures. So what do you do? Retry for a limited amount of time and then give up. In addition, never block on an I/O call; time out and assume failure. From the point of view of architecture and design, there is really no such thing as a transient failure. If you have a transient failure, fail fast and treat it as a resource failure.
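As a sketch of the "never block on an I/O" advice: race the operation against a time budget and assume failure when the budget expires. The helper and the budget value are assumptions for illustration, not a prescription.

using System;
using System.Threading.Tasks;

static class FailFast
{
    // Returns the operation's result, or throws TimeoutException so the caller
    // can treat the slow resource as a failed resource and move on.
    public static async Task<T> WithTimeout<T>(Task<T> operation, TimeSpan budget)
    {
        var finished = await Task.WhenAny(operation, Task.Delay(budget));
        if (finished != operation)
            throw new TimeoutException("Resource did not respond within the budget; failing fast.");
        return await operation; // propagate the result (or the operation's own exception)
    }
}

For example, FailFast.WithTimeout(client.DoAsync(a, b, c), TimeSpan.FromSeconds(2)) fails fast instead of tying up a thread waiting on a slow service (DoAsync is a hypothetical asynchronous proxy method).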
Friday, March 09, 2012
Why is failure endemic to distributed systems?
In the past two blog posts we talked about a hypothetical ASP.NET application. Let's add a second tier to this app where we make a call to a web service.
We will then have some version of the following code fragment which resembles something everybody has written:
ClientProxy client = new ClientProxy();
int result = client.Do(a, b, c);
What's wrong with this?
We have assumed that the call would succeed. Why would it not succeed? At the very minimum you could have a network timeout.
You are assuming you have control over a resource that you really do not.
The fundamental concept in designing for failure is to understand that any interface between two components can fail.
So we rewrite the code as follows:
try
{
ClientProxy client = new ClientProxy();
int result = client.Do (a, b, c);
}
catch (Exception ex)
{
????
}
But now what do you do in the exception handler?
In this simple example, how many times do you retry?
When you give up do you cache the input, or do you make the user enter it over again?
Suppose the service on the other side stopped working? What happens when the underlying hardware crashes and your application has to be restarted?
Where is the user data then?
What about total failure conditions? Do you "go to" out of the exception handler?
Where do you go to?
You cannot program your way out of a failure condition in code that is
based on the assumption that everything works properly. You have to architect
and design for failure conditions from the start.
The critical issue is how you respond to that failure.
Here is the fundamental principle of designing for failure:
Assume failure will occur. The question is how will the application respond to that failure. You cannot depend on the underlying infrastructure to achieve availability because it cannot make that guarantee.
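One hedged sketch of what designing for failure might look like for the earlier fragment: the call returns an explicit outcome that the caller must handle instead of assuming success. ClientProxy is the proxy from the fragment above; the outcome type and the exception choices are illustrative assumptions.

using System;
using System.ServiceModel;

enum CallOutcome { Success, TransientFailure, ResourceFailure }

static class OrderClient
{
    // The caller decides what to do with each outcome: proceed, cache the input
    // and retry later, or report the failure to the user.
    public static CallOutcome TryDo(int a, int b, int c, out int result)
    {
        result = 0;
        try
        {
            ClientProxy client = new ClientProxy(); // the WCF proxy from the fragment above
            result = client.Do(a, b, c);
            return CallOutcome.Success;
        }
        catch (TimeoutException)       // the service may just be slow or briefly unreachable
        {
            return CallOutcome.TransientFailure;
        }
        catch (CommunicationException) // the channel faulted; treat it as a resource failure
        {
            return CallOutcome.ResourceFailure;
        }
    }
}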
Monday, February 27, 2012
Continuing our examination of hosting options from the last post: perhaps the cheaper options have lower reliability. Here are the availability numbers for our providers:
Provider: Compute SLA (%)
Go Daddy: 99.9
ORCS Web: 99.9
Host Gator: 99.9
Rackspace: Repair within one hour
Amazon: 99.95
Azure: 99.95
If Amazon and Azure are more expensive with a similar SLA, why use them?
Availability numbers do not tell the whole story.
The rate for cloud computing infrastructure is more expensive because it allows you to pay only for what you use. Not only does this allow you to use computing resources more economically, it allows you to design around outages. With a cloud computing infrastructure, you can reach, if you wish to pay for it,
very close to 100% availability.
This gets to the essence of the matter. You are doing cloud computing when you are interested in one of two things:
-
Paying for only the computing resources you use so you do not have to buy
enough hardware for peak scenarios that happen infrequently.
-
You want to achieve very high reliability, with almost no downtime. You have to design for failure.
We often refer to these goals as scalability and availability. Scalability is making sure your application can handle increased load with reasonable performance. Availability is making sure your application has reasonable performance for a reasonable amount of time. What is reasonable depends on the economics of your business environment.
Technically they are different problems. Depending on your system load, you could have reasonable availability for the vast majority of the time your customers need the application, and be unresponsive only under very heavy loads. Or you could handle very large loads for short periods of time, and be very unresponsive the remainder of the time.
But for most applications they are closely related. High scalability requires not only that you be able to acquire more computing resources, but also that you be able to detect and handle failures of existing computing resources. High availability requires the same thing.
Designing for failure is cloud computing.
Thursday, May 12, 2011
I have run some simple performance tests on Azure table storage. Thinking about them reinforced in my mind what kinds of applications should run on Azure, and the relationships between Azure storage tables and SQL Azure.
I have found insert rates on the order of 30-200 milliseconds for a single insert into Azure table storage. This is the same order of magnitude found in the more elaborate testing reported in Early Observations on the Performance of Windows Azure (Zach Hill et al.). That paper looks at the performance of Azure tables, blobs, and queues under various circumstances, including different numbers of concurrent clients.
These rates are not blazingly fast. On the other hand, these performance numbers are good enough for a huge percentage of the applications that are written. It also may mean that you do not want to do a large number of small inserts into table storage.
Thinking about these results reinforced in my mind that the point of table storage is not that it is super fast, but that its performance is more consistent than a relational database, where massive scale can lead to performance degradation in the lock manager. I would assume you would see these kinds of numbers from all of the major cloud vendors, such as Amazon and Google.
It also got me thinking about the types of applications that should run on Azure, or any cloud platform for that matter.
The big advantage of cloud computing comes from its elastic scale, and its opportunity to outsource parts of the application stack (virtualization, operating systems, networking, etc.). This comes from the economy of scale of commodity hardware. On the other hand, if you are sophisticated enough, and you have the money, you could buy more sophisticated hardware, and get greater performance.
If you need really high performance (say you are doing online transaction processing) cloud computing is not for you, and probably will not be for another 30 years.
Tuesday, January 25, 2011
A colleague sent an article about cloud computing to me. I found this interesting nugget inside. Apparently precipitation can be a problem inside of a computing cloud.
"This isn't the only problem faced by SHS. According to Hruboska, "We set up in Iceland because of the lower costs associated with cooling our servers in a northern climate [a huge savings and environment friendly] and because of the availability of cheap, geothermal energy. What we didn't expect is that by running our cloud in a cooler environment that the moisture within the cloud would condense, freeze due to the low temperatures, and effectively snowcrash our servers. Our physical maintenance bill is higher as a result but our overall expenses are still lower by hosting our servers here outside of Reykjavik." Other companies are operating server farms in Whitehorse, Helsinki, and Vladivostok for similar reasons and are running into similar condensation problems as SHS."
This was quoted from:
Dr. Dobb's Agile Update 04/09
Tuesday, December 14, 2010
Windows Azure provides two storage mechanisms: SQL Azure and Azure Storage tables. Which one should you use?
Can Relational Databases Scale?
SQL Azure is basically SQL Server in the cloud. To get meaningful results from a query, you need a consistent set of data.
Transactions allow for data to be inserted according to the ACID principle: all related information is changed together.
The longer the database lock manager keeps locks, the higher the likelihood two transactions will modify the same data. As transactions wait for locks to clear, transactions will either be slower to complete, or transactions will time out and must be abandoned or retried. Data availability decreases.
Content distribution networks enable read-only data to be delivered quickly to overcome the speed of light boundary. They are useless for modifiable data. The laws of physics drive a set of diminishing economic returns on bandwidth. You can only move so much data so fast.
Jim Gray pointed out years ago that computational power gets cheaper faster than network bandwidth. It makes more economic sense to compute where the data is rather than moving it to a computing center. Data is often naturally distributed.
Is connectivity to that data always possible? Some people believe that connectivity will be always available. Cell phone connectivity problems, data center outages, equipment upgrades, and last mile problems indicate that is never going to happen.
Computing in multiple places leads to increased latency. Latency means longer lock retention. Increased lock retention means decreased availability.
Most people think of scaling in terms of a large number of users: Amazon, Facebook, or Google. Scalability problems also arise from the geographic distribution of users, the transmission of large quantities of data, or any bottleneck that lengthens the time of a database transaction.
The economics of distributed computing argue in favor of many small machines, rather than one large machine. Google does not handle its search system with one large machine, but many commodity processors. If you have one large database, scaling up to a new machine can cost hours or days.
The CAP Theorem
Eric Brewer’s CAP Theorem summarizes the discussion. Given consistency, availability, and partition tolerance, you can have only two of the three. We are comfortable with the world of a single database or database cluster with minimal latency, where we have consistency and availability.
Partitioning Data
If we are forced to partition our data should we give up on availability or consistency? Let us first look at the best way to partition, and then ask whether we want consistency or availability.
What is the best way to partition?
If economics, the laws of physics, and current technology limits argue in favor of partitioning, what is the best way to partition? Distributed objects, whether DCOM, CORBA, or RMI, failed for many reasons. The RPC model increases latencies that inhibit scalability. You cannot ignore the existence of the network. Distributed transactions fail as well, because once you get beyond a local network the latencies of two-phase commit impede scalability.
Two better alternatives exist: a key value/type store such as Azure Storage Services, or partitioning data across relational databases without distributed transactions.
Storage Services allow multiple partitions of tables with entries. Only CRUD operations exist: no foreign key relations, no joins, no constraints, and no schemas. Consistency must be handled programmatically. This model works well with large numbers of commodity processors, and can achieve massive scalability.
One can partition SQL Azure horizontally or vertically. With horizontal partitioning we divide table rows across databases. With vertical partitioning we divide table columns across databases. Within a database you have transactional consistency, but there are no transactions across databases.
Horizontal partitioning works especially well when the data divides naturally: company subsidiaries that are geographically separate, historical analysis, or of different functional areas such as user feedback and active orders. Vertical partitioning works well when updates and queries use different pieces of data.
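A minimal sketch of what horizontal partitioning can look like in application code: each request is routed to the database that owns its slice of the data, and no transaction spans two shards. The region names and connection strings below are placeholders.

using System.Collections.Generic;

static class ShardMap
{
    // Each geographic shard is its own database; there are no distributed
    // transactions across shards, so consistency holds only within one.
    private static readonly Dictionary<string, string> ConnectionStrings =
        new Dictionary<string, string>
        {
            { "US",     "Server=tcp:us-db.example.net;Database=Orders" },
            { "Europe", "Server=tcp:eu-db.example.net;Database=Orders" }
        };

    public static string ForRegion(string region)
    {
        return ConnectionStrings[region]; // the caller opens a connection to this shard only
    }
}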
In all these cases we have to deal with data that might be stale or inconsistent.
Consistency or Availability?
Ask a simple question: What is the cost of an apology?
The number of available books shown on Amazon is a cached value, not guaranteed to be correct. If Amazon ran a distributed transaction over all the shopping cart orders, the book inventory system, and the shipping system, it could never build a massively scalable front-end user interface. Transactions would depend on user interactions that could range from 5 seconds to hours, assuming the shopping cart is not abandoned. It is impractical to hold database locks that long. Since most of the time you get your book, availability is a better choice than consistency.
Airline reservation systems are similar. A database used for read-only flight shopping is updated periodically. Another database is used for reservations. Occasionally, you cannot get the price or flight you wanted. Using one database to achieve consistency would make searching for fares or making reservations take forever.
Both cases have an ultimate source of truth: the inventory database, or the reservations database.
Businesses have to be prepared to apologize anyway. Checks bounce, the last book in the inventory turns out to be defective,
or the vendor drops the last crystal vase. We often have to make records and reality consistent.
Software State is not the State of the World
We have fostered a myth that the state of the software always has to be identical to the state of the world. This often makes software applications difficult to use, or impossible to write. Deciding what it is worth paying to get it absolutely right is a business decision. As Amazon and the airlines illustrate, the cost of lost business and inconvenience sometimes outweighs the occasional problems of inconsistent data. You must then design for eventual consistency.
Summary
Scalability is based on the constraints of your application, the volume of data transmitted, or the number and geographic distribution of your users.
Need absolute consistency? Use the relational model. Need high availability? Use Azure tables, or the partitioned relational
model. Availability is a subjective measure. You might partition and still get consistency.
If the nature of your world changes, however, it is not easy to shift from the relational model to a partitioned model.
Monday, November 22, 2010
How you divide your Azure table storage into multiple partitions depends on how your data is accessed. Here is an example of how to partition data, assuming that reads predominate over writes.
Consider an application that sells tickets to various events. Typical questions, and the attributes accessed by the queries, are:
How many tickets are left for an event? (date, location, event)
What events occur on which date? (date, artist, location)
When is a particular artist coming to town? (artist, location)
When can I get a ticket for a type of event? (genre)
Which artists are coming to town? (artist, location)
The queries are listed in frequency order. The most common
query is about how many tickets are available for an event.
The most common combination of attributes is artist or date
for a given location. The most common query uses event, date, and location.
With Azure tables you only have two keys: partition and row.
The fastest query is always the one based on the partition key.
This leads us to the suggestion that the partition key
should be location since it is involved with all but one of the queries. The
row key should be date concatenated with event. This gives a quick result for
the most common query. The remaining queries require table scans. All but one are
helped by the partitioning scheme. In reality, that query is probably location
based as well.
The added bonus of this arrangement is that it allows for geographic
distribution to data centers closest to the customers.
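As a hedged illustration of that key choice, here is a sketch of an entity using the classic Azure table client's TableEntity base class; the class and property names are made up for the example.

using Microsoft.WindowsAzure.Storage.Table;

// PartitionKey = location, RowKey = date concatenated with event, per the scheme above.
public class EventTicketsEntity : TableEntity
{
    public EventTicketsEntity() { } // parameterless constructor required by the table client

    public EventTicketsEntity(string location, string eventDate, string eventName)
        : base(location, eventDate + "_" + eventName) { }

    public string Artist { get; set; }
    public string Genre { get; set; }
    public int TicketsLeft { get; set; }
}

The most common query (tickets left for an event) then resolves to a point lookup or a narrow range scan within a single location partition.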
Tuesday, September 28, 2010
Popular consciousness creates popular myths. Here are some myths about cloud computing.
1. Total reliance on the cloud is foolish or scary.
So is total reliance on the Internet or the electric grid, or the transportation network to get us our food. In fact, I imagine someone in 4000 BCE said: dwelling in cities is dangerous, and we should not let people farm to support them. Come to think of it, people are still saying it.
Dependency is a fact of life. It has been a fact of human existence since the first division of labor. Nonetheless, we should have contingency plans. Any organization that hosts any application should understand what the impact of an outage would be. It might be that the cloud application itself is down, or that the Internet connectivity is slow. Your contingency plan depends on the nature of your application. After all, hospitals still have emergency generators for surgery, and we store a couple of days' food in the refrigerator.
On the other hand, maybe you did build your own house, or sew your own clothing. Perhaps a day without email or applications (i.e. the Sabbath) might be a good idea after all.
Dependency, by itself, is not an argument against Cloud Computing. It is the consequences of that dependency that matter. For most applications, even some of those considered the most critical, we could actually do without them for a few hours.
2. Security is better/worse in the cloud
Data in the cloud is insecure. Data in the cloud is more secure. Nothing is quite like security for generating fear and myths.
The first question you always have to ask is: secure compared to what? Fort Knox? Money in your mattress? After all, the most secure computer is disconnected from the Internet. If you are really paranoid you can turn it off. Of course, it is now difficult to get any work done.
Is data in the cloud more or less secure? Is it secure compared to a corporate data center? There certainly have been some well publicized incidents of corporate data breaches. There are probably even more cases that have not been reported. Have there been any incidents in a cloud computing center? None yet, but there will be. If there are, they might be the fault of the application designers or owners. The same people who create insecure applications in their own data centers can certainly create them in the cloud. Cloud computing centers might be able to better focus on security (physical, data, and application) because that is part of their expertise.
On the other hand, with all that computation and storage focused in one place, people fear that cloud computing data centers may be an inviting target for attack. Employees of cloud computing centers may snoop. So can employees of a corporate data center. Will industrial espionage be easier in a cloud computing center? I am just waiting for the movie. Perhaps you are safer with people who specialize in keeping data centers secure, than a lot of smaller data centers. Bank robberies are not as frequent as they used to be.
Cloud computing centers may lack compliance certification, and that is a problem. On the other hand, as Berkeley researchers have
argued, cloud computing may make Denial of Service attacks economically unfeasible.
It is also currently unknown if security breaches in one virtual machine can cause a compromise of the underlying physical hardware.
As with any hosted application, the builders of the application share responsibility with the cloud providers. You might want to investigate how well capitalized, and what the security plans of your provider are.
The best security is to park your bicycle next to a better bicycle with a worse lock.
3. Cloud is reliable / unreliable
My electric utility only gets
99.98% uptime. So much for the vaunted four nines. How much uptime does Facebook really need? You need to understand exactly what your application requirements are, and the consequences of failure.
I do not know of a single cloud computing vendor that offers a service level agreement with real remediation in case of an outage. Don’t forget that as with any hosted application you are still subject to the vagaries of the external network connections. The data center may be fine, but when Michael Jackson died, the response time of the Internet slowed to a crawl: nothing like a self-inflicted denial of service attack.
Given the current fetish over net neutrality, the packets carrying the output of your pacemaker to your cardiologist get the same priority as someone streaming Lady Gaga’s latest hit.
First define the reliability requirements that your application needs, then decide the appropriate course of action.
4. Cloud computing requires no social infrastructure
Suppose your cloud computer provider goes bankrupt, and the machines are seized as collateral for the debt. What happens to your applications and data? We may need an FDIC-like organization to handle cloud computing provider insolvencies. Some regulation would need to be in place to handle continuity of service during takeovers.
The economies of scale may lead us down the same road that the electrical utilities and the water companies went. The small scale providers were eventually taken over by the larger providers and the resulting monopolies were regulated.
Companies, such as financial services, that operate in heavily regulated industries will be reluctant to use cloud computing providers unless there is some clarity to their legal responsibility for data in the cloud. On the other hand, Microsoft is selling its cloud computing fabric so that third parties might set up private clouds for various industries. Whether they are true computing clouds or just hosting services with flexible virtualization would depend on the actual scaling potential of the data center.
Wednesday, September 15, 2010
"Government, without popular information, or the means of acquiring it, is but a Prologue to a Farce or a Tragedy; or, perhaps both. Knowledge will forever govern ignorance."
James Madison
What is it?
Control over information is a societal danger similar to control over economic resources or political power. Representative government will not survive without the information to help us create meaningful policies. Otherwise, advocates will too easily lead us to the conclusion they want us to support.
How does one get access to this data?
Right now, it is not easy to get access to authoritative data. If you have money you search for it, purchase it, or do the research to obtain it. Often, you have to negotiate licensing and payment terms. Why can’t we shop for data the same way we find food, clothing, shelter, or leisure activities? None of these activities requires extensive searches or complex legal negotiations.
Why can’t we have a marketplace for data?
Microsoft Dallas is a marketplace for data. It provides a standard way to purchase, license, and download data. Currently it is a CTP, and no doubt will undergo a name change, but the idea will not.
The data providers could be commercial or private. Right now, they range from government agencies such as NASA or the UN to private concerns such as Info USA and NAVTEQ. You can easily find out their reputations so you know how authoritative they are.
As a CTP there is no charge, but the product offering will have either transaction/query or subscription based pricing. Microsoft has promised “easy to understand licensing”.
What are the opportunities?
There is one billing relationship in the marketplace because Microsoft will handle the payment mechanisms. Content Providers will not have to bill individual users. They will not have to write a licensing agreement for each user. Large provider organizations can deal with businesses or individuals that in other circumstances would not have provided a reasonable economic return. Small data providers can offer their data where it would have previously been economically unfeasible. Content Users would then be able to easily find data that would have been difficult to find or otherwise unavailable. The licensing terms will be very clear, avoiding another potential legal headache. Small businesses can create new business opportunities.
The marketplace itself is scalable because it runs on Microsoft Azure.
For application developers, Dallas is about your imagination. What kind of business combinations can you imagine?
How do you access the data?
Dallas will use the standard OData API. Hence Dallas data can be used from Java, PHP, or on an iPhone. The data itself can be structured or unstructured.
An example of unstructured data is the Mars rover pictures. The Associated Press uses both structured and unstructured data. The news articles are just text, but there are relationships between various story categories.
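As a small, hedged example of how simple the access can be, the sketch below issues a plain OData query over HTTP. The service URL, dataset name, and account-key header are placeholders, not the actual Dallas endpoint.

using System;
using System.Net;

class DallasSketch
{
    static void Main()
    {
        // Hypothetical OData query: the first 10 entries of some dataset, returned as an Atom feed.
        string url = "https://example-dallas-service.net/SomeDataset?$top=10";
        using (var client = new WebClient())
        {
            client.Headers.Add("AccountKey", "your-account-key"); // placeholder authentication header
            string feed = client.DownloadString(url);
            Console.WriteLine(feed.Substring(0, Math.Min(500, feed.Length)));
        }
    }
}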
Dallas can integrate with the Azure AppFabric Access Control Service.
Your imagination is the limit.
The standard API is very simple. The only real limit is your imagining the possibilities for combining data together.
What kind of combinations can you think of?
Wednesday, August 18, 2010
Microsoft has published my five part introduction to the basics of partitioning and layering a software application. While there is a great deal of discussion about it in the literature on intermediate and advanced topics on software development, I have never found a good introduction that discusses the essentials. So I wrote one.
You can find it on Microsoft's Visual C# Developer Center.
Sunday, July 11, 2010
Commodity hardware has gotten very cheap. Hence it often makes more economic sense to spread the load in the cloud over several cheap, commodity servers, rather than one large expensive server.
Microsoft's Azure data pricing makes this very clear. One Gigabyte of SQL Azure costs about $10 per month. Azure table storage costs $0.15 per GB per month.
The data transfer costs are the same for both. With Azure table storage you pay $0.01 for each 10,000 storage transactions.
For the price difference, you can buy about 9,850,000 storage transactions per month before table storage costs as much as SQL Azure. That is a lot of transactions!
Another way to look at the cost is to suppose you need only 2,600,000 storage transactions a month (about one per second, assuming an even distribution over the day). That would cost you only $2.60. For the same $10 monthly budget you could then store almost 50 GB of data in table storage. To store 50 GB of data in SQL Azure would cost about $500 per month.
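The arithmetic behind those numbers, as a small sketch using the prices quoted above (2,592,000 is one transaction per second over a 30-day month):

using System;

class StorageCostMath
{
    static void Main()
    {
        double sqlAzurePerGb  = 10.00;          // $ per GB per month
        double tablePerGb     = 0.15;           // $ per GB per month
        double perTransaction = 0.01 / 10000;   // $ per storage transaction

        // Transactions you could buy with the price difference for one GB.
        Console.WriteLine((sqlAzurePerGb - tablePerGb) / perTransaction);   // 9,850,000

        // One transaction per second over a 30-day month.
        double transactions    = 60 * 60 * 24 * 30;                         // 2,592,000
        double transactionCost = transactions * perTransaction;
        Console.WriteLine(transactionCost);                                 // about $2.60

        // Data you could store with the rest of a $10 budget, and its SQL Azure price.
        double gb = (sqlAzurePerGb - transactionCost) / tablePerGb;
        Console.WriteLine(gb);                                              // just under 50 GB
        Console.WriteLine(gb * sqlAzurePerGb);                              // almost $500 per month
    }
}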
If you don't need the relational model, it is a lot cheaper to use table or blob storage.
Sunday, December 27, 2009
One way to approach the different architectural implications is to look at the various vendor offerings and see how they match the various application types.
You can divide the cloud vendors into four categories, although one vendor might have offerings in more than one category:
Platform as a Service providers
Software as a Service providers
Application as a Service providers
Cloud Appliance Vendors
The Platform as a Service providers attempt to provide a cloud operating system for users to build an application on. An operating system serves two basic functions: it abstracts the underlying hardware and manages the platform resources for each process or user. Google App Engine, Amazon EC2, Microsoft Azure, and Force.com are examples of platform providers.
The most restrictive platform is Google App Engine, because you program to the Google API, which makes it difficult to port to another platform. On the other hand, precisely because you program to a specific API, Google can scale your application and provide recovery for a failed application.
At the other extreme is Amazon. Amazon gives you a virtual machine with which you can program directly against the native OS installed on the virtual machine. This freedom comes with a price. Since the Amazon virtual machine has no knowledge of the application you are running, it cannot provide recovery or scaling for you. You are responsible for doing that. You can use third-party software, but that is just a means of fulfilling your responsibility.
Microsoft tries to achieve a balance between these two approaches. By using .NET you have a greater degree of portability than the Google API. You could move your application to an Amazon VM or even your own servers. By using metadata to describe your application to the cloud fabric, the Azure infrastructure can provide recovery and scalability.
The first architectural dimension (ignoring for a moment the relative economics, which will be explored in another post) is then how much responsibility you want to take for scaling and recovery versus the degrees of programming freedom you want to have. Of course the choice between the Google API and Microsoft Azure might come down to the skill set of your developers, but in my opinion, for any significant application, the architectural implications of the platform choice should be the more important factor.
Thursday, November 05, 2009
One of my clients, ITNAmerica, has become a Microsoft case study for the idea of software + services. The idea behind software + services is that software should run wherever it makes sense: in the cloud, on the desktop, or on a mobile device, not just in a thin client such as a browser.
Latency, bandwidth limits, and the need for software to work if the connection to the cloud disappears make this a logical approach. Anybody who has tried to get a cell phone signal should understand the issues with continual connectivity.
Curt Devlin, a
Microsoft evangelist, demonstrates another reason why this approach makes
sense. It makes the transition to a cloud provider such as Azure much simpler.
If you want some further ideas on how to take a software + services application to a cloud platform, check out my recent ARCast on "Software + Services in the Cloud."
ARCast.TV Special - Michael Stiefel on Software as a Service in the Cloud. The Architecture Innovation Cafe presents my discussion of Software as a Service in the Cloud. I discuss how architecting and building a software as a service application requires solving a series of problems that are independent of a particular software platform, and I focus on three areas of designing and building the application that you can leverage on new platforms such as Microsoft Azure.
Wednesday, October 28, 2009
Cloud computing is utility computing. No up front commitment
required. You buy only what you need, and when you do not need it any more you
do not pay for it.
There are three basic cloud computing scenarios: infrastructure, application delivery, and the need to reach Internet scale. These scenarios are not independent; one or all of them can come into play. Each, however, has different technological implications.
Fundamentally, cloud computing is a software delivery
platform. Are the economics of working with the cloud cheaper than doing it
yourself? Doing it yourself could mean self-hosting, or traditional delivery of
desktop software. Self-hosting could be in your own data center, or in a
hosting facility.
Not needing to build to your peak capacity drives the
infrastructure scenarios. This is not an all or nothing proposition.
Some small and medium sized companies may decide they do not
want to run their own data centers. The savings in terms of not having to buy
machines and pay employees is enormous. This money could be put to use in
building better applications. This might be the entire compute infrastructure,
or just running an email server.
Other companies may have an occasional need for massive computation. Say you have to do a stress analysis of a new airplane wing, route a complicated delivery geographically, decide among alternative financial models, or run a human genome search. Any of the classic grid computations fall into this category. Your existing infrastructure is just fine, but for these out-of-the-ordinary scenarios (they might actually be frequent) it makes sense to rent space in the cloud to do the computations.
A related scenario is cloud-bursting. You can handle your
everyday computing demands, but occasionally you get a burst of orders that
overwhelms your system. Ticket agencies are a classic example when tickets for a popular event first go on sale. So are stores
around the holidays. Here you use the cloud to handle the overflow so that
people wanting to order do not get unresponsive web pages, or busy signals on
the telephone.
Small divisions in large companies may find the cloud appealing
for prototyping, or even developing certain applications. Their central IT may
be unresponsive or slow to respond to their needs. It is well within the
capacity of a departmental budget to rent space in the cloud.
The next post will explore the other two scenarios, and look
at how the various vendor options would meet your needs.
Tuesday, August 11, 2009
Microsoft has yet to release all the details of its Azure SLA, but it has said that you will have a 99.95 per cent up-time for compute and 99.9 per cent up-time for SQL Azure.
How does this compare with my electric utility?
With my latest electric bill, my local utility listed its 2008 average number of service interruptions per customer as 1.051, and the average number of minutes without power for a customer at 78.55 minutes. So my electric utility has an up-time of .9998. I guess they don't get 4 or 5 "9"s either.
I presume these numbers include outages due to winter storms, but I do not know what the utility regulators allow them to exclude. Microsoft, to my knowledge, has not stated whether the SLA percentages include planned downtime for upgrades.
How many outage minutes per year could we expect with Azure under the SLA? That comes to about 262.8 compute minutes per year, or about 4.4 hours. Of course, when those outages occur matters, and whether they are concentrated in one or many interruptions.
For SQL Azure the SLA is on a per-month basis, so you could lose access to your data for 43.8 minutes per month.
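The downtime arithmetic, sketched out (the monthly figure uses an average month of 365/12 days):

using System;

class SlaMath
{
    static void Main()
    {
        double minutesPerYear  = 365 * 24 * 60;       // 525,600
        double minutesPerMonth = minutesPerYear / 12; // 43,800

        Console.WriteLine(minutesPerYear  * 0.0005); // about 262.8 minutes of compute downtime a year
        Console.WriteLine(minutesPerMonth * 0.001);  // about 43.8 minutes of SQL Azure downtime a month
    }
}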
Is 4 hours a long time? Could you live without data access for 45 minutes a month?
For Facebook, probably; for emergency services you would need some sort of fallback, just as hospitals have backup generators now.
I wonder what a cloud computing brownout looks like?
Sunday, July 05, 2009
I just did an interview on .NET rocks about cloud computing.
We covered a whole bunch of topics, including: what cloud computing is; a comparison of the various offerings of Google, Force.com, Amazon, and Microsoft; the social and economic environment required for cloud computing; the implications for transactional computing and the relational model; the importance of price and SLA for Microsoft, whose offering is different from Amazon's and Google's; and the need for rich clients even in the world of cloud computing.
Wednesday, June 24, 2009
One of the big advantages of cloud computing is its utility computing model. Customers can use as much compute power or as little as they want without paying for what they do not need. Normally, most data centers have to be built for peak demand, with the servers unused when they are not needed.
Utility computing is based on the electric utility model. While this comparison has a lot of merit, there is one particular part of the analogy that really does not work.
Data are not electrons.
If someone steals some of your electric power by diverting it, you can get replacement power. If one part of the country's electric demand exceeds its generating ability, it can get power from another part of the grid. One electron is as good as another.
Data has identity, latency, and relationships to other pieces of data.
If someone steals your data, another piece of data cannot take its place. If your data is stolen, or even delayed, it can adversely affect you. Depending on your resolution of the CAP Theorem dilemma, your replication strategy might leave you with a window of vulnerability for data loss.
Curiously, the argument has been made that the utility computing model makes denial of service attacks unfeasible, because the economics of marshaling enough bot-driven computers to assault a huge data center are prohibitive. Sooner or later, somebody is going to try to get the servers of one data center to attack the servers of another data center. Hopefully, the software that monitors the transactions would realize that somebody is exceeding their credit limit.
Tuesday, June 23, 2009
It's time for me to be interviewed on .NET Rocks again!
Carl and Richard will interview me about Cloud Computing. The interview will be published on June 30 at http://www.dotnetrocks.com/.
Based on my previous show (and related DNR TV segments) it will be a lot of fun to do and to listen to.
Wednesday, May 27, 2009
Many people have misconceptions about cloud computing. For example, applications do not have to be built so they are all in the cloud. You can put the application in the cloud (to handle parallel computation), and have the database in your enterprise. I was interviewed at TechEd about some of the misconceptions about computing in the cloud. Other misconceptions discussed include what size business is right for the cloud, the role of the browser, guaranteed connectivity, and cloud security.
Wednesday, May 20, 2009
Small or medium sized companies can act like a big company while maintaining the advantages of being small.
A hosted solution has many advantages.
You no longer need the staff, or have to spend money on installing and upgrading software on your clients' machines. Your customers and clients can use your application anywhere, not just on their office computers. If you provide services as well as an application, third parties can easily use your solution as part of their offering. Sometimes these services can be used in your own applications such as portals, or future applications. Perhaps your customers can extend your application making it more valuable to them. Having your application in the cloud means that your intellectual property (your secret sauce) is better protected because it is not in the hands of your users.
All these arguments also apply to small business units within a large enterprise.
Nonetheless, small businesses very often do not have the financial ability to economically run, or even rent a significant hosted application solution beyond a small scale web application.
Cloud computing offers a way out of the dilemma.
Cloud computing offers businesses a utility model for computation. Host your application on a cloud platform and you pay only for what you use. With minimal initial investment, you can scale up or down as your customers use more or less of your application or services.
With many cloud vendors (Amazon being a major exception) you do not even know what infrastructure your machine runs on. Scaling and failover happen in those environments with minimal work on the client's part.
Clearly the cost and reliability of the cloud provider are crucial. Google's most recent outage shows that this is not an unreasonable fear. Private IT centers also have had their outages, but those are not made public.
Microsoft, Amazon, Google and others are spending huge amounts of money to build cloud data centers. Clearly they see the opportunity.
Right now many large companies already have data centers that can offer cheaper compute power than the current generation of cloud providers. This will eventually change.
But right now, small companies, start-ups, and other similar organizations should think about cloud computing for their hardware infrastructure.
Sunday, January 04, 2009
"Once upon a time, we wrote a book called A Pattern Language and that is how we got our name. Now, a pattern is an old idea. The new idea in the book was to organize implicit knowledge about how people solve recurring problems when they go about building things. "
Christopher Alexander
What is a software pattern?
How do writers about software patterns decide what software artifacts are patterns? How do these writers decide what patterns are worthy of note?
Christopher Alexander is the writer most associated with originating the idea of a pattern as a design concept.1 As the above quotation makes clear, patterns are about making explicit the solutions people have already created to recurring problems.
So a pattern formalizes knowledge that the profession has already arrived at.
Alexander never defined the word pattern. Nothing wrong with that. In fact, the whole idea that we should define all our terms before we use them is misguided. The best definitions emerge from discussion and debate. Relying on people's intuitive notions of what a pattern is beats spending time trying to define it up front. In fact, philosophy has long realized that good definitions arrive over time and debate.2
My dictionary has various definitions for pattern. To use "a model for making things" seems to be the most useful stake in the ground to start the discussion.
In other words, a pattern is not just a solution to a problem. It is an abstraction of a solution that can generate several possible implementations. This of course, corresponds to Alexander's use of the word. His patterns such as "agricultural valleys" or "house for one person" do not have one possible implementation.
So a good software pattern is not a software technology. Hence WS* and REST are not patterns. They are implementations of a standard.3 The standards operate the same way on different platforms. This is no different from a mold for a cup being used to cast a bronze or silver cup.
Given this point of view, a looping construct is not a pattern. A linked list is not a pattern. What about file systems? Anybody who remembers JCL realizes that there is more than one way to work with disk sectors.
But don't looping constructs come in several flavors? Aren't they different ways to solve the problem of control flow in software programs? Way back in the early days of computing, people had to come up with these various ways of handling control flow. They were not divine revelations; they had to be invented. Anybody who remembers the arguments over the use of "goto"s, whether programs should have single or multiple entry and exit points, or whether co-routines were a good idea might think of all of these as control flow patterns.
It is just that we take them so much for granted now that we might not consider them patterns, just technological givens. So patterns do need a context. Whenever somebody discusses patterns, you need to clarify the domain of discourse. There are certainly patterns in certain "application domains", such as "double-entry" bookkeeping in accounting.
In fact, looping constructs, assignment constructs, and the like should perhaps be considered patterns once again. The rise of multi-processors and distributed computing forces us to think once again about what it means to do an assignment. In a distributed environment, where there is latency in updating any value, saying "x=y" is not always simple.
Whenever you discuss patterns, you must state the context in which you are talking. A pattern in one context could be a foundational technology in another context.
1. http://www.patternlanguage.com/leveltwo/caframe.htm?/leveltwo/../bios/douglea.htm
2. http://www.sfu.ca/philosophy/swartz/definitions.htm
3. The OASIS reference model for SOA (http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf) would consider WS* and REST as implementations.
Wednesday, October 29, 2008
At the PDC Microsoft announced its answer to Amazon and Google's cloud computing services.
This answer has two parts: the Azure platform and hosted applications. Unfortunately people confuse these two aspects of cloud computing although they do have some features in common.
The idea behind Azure is to have a hosted operating systems platform. Companies and individuals will be able to build applications that run on infrastructure inside one of Microsoft's data centers. Hosted services are applications that companies and individuals will use instead of running them on their own computers.
For example, a company wants to build a document approval system. It can outsource the infrastructure on which the system runs by building the application on top of a cloud computing platform such as Azure. My web site and blog do not run on my own servers; I use a hosting company. That is an example of using a hosted application.
As people get more sophisticated about cloud computing we will see these two types as endpoints on a continuum. Right now as you start to think about cloud computing and where it makes sense, it is easier to treat these as distinct approaches.
The economics of outsourcing your computing infrastructure and certain applications is compelling as Nicholas Carr has argued.
Companies will be able to vary capacity as needed. They can focus scarce economic resources on building the software the organization needs, as opposed to the specialized skills needed to run computing infrastructure. Many small and mid-sized companies are already using hosting companies to run their applications. The next logical step is hosting on an operating system in the cloud.
Salesforce.com has already proven the viability of hosted CRM applications. If I am a small business and I need Microsoft Exchange, I have several choices. I can hire somebody who knows how to run an Exchange server. I can take one of my already overburdened computer people and hope they can become expert enough on Exchange to run it without problems. Or I can outsource to a company that knows about Exchange, the appropriate patches, security issues, and how to get it to scale. The choice seems pretty clear to most businesses.
We are at the beginning of the cloud computing wave, and there are many legitimate concerns. What about service outages as Amazon and Salesforce.com have had that prevent us from accessing our critical applications and data? What about privacy issues? I have discussed the cloud privacy issue in a podcast. People are concerned about the ownership of information in the cloud.
All these are legitimate concerns. But we have faced these issues before. Think of the electric power industry. We produce and consume all kinds of products and services using electric power. Electric power is reliable enough that nobody produces their own power any more. Even survivalists still get their usual power from the grid.
This did not happen overnight. There were bitter arguments over the AC and DC standards for electric power transmission. Thomas Edison (the champion of DC power) built an alternating current electric chair for executing prisoners to demonstrate the "horrors" of Nikola Tesla's approach. There were bitter financial struggles between competing companies. Read Thomas Parke Hughes' classic work "Networks of Power: Electrification in Western Society, 1880-1930". Yet in the end we have reliable electric power.
Large scale computing utilities could provide computation much more efficiently than individual businesses. Compare the energy and pollution efficiency of large scale electric utilities with individual automobiles.
Large companies with the ability to hire and retain infrastructure professionals might decide to build rather than outsource. Some companies may decide to do their own hosting for their own individual reasons.
You probably already have information in the cloud if you have ever used Amazon.com. You have already given plenty of information to banks, credit card companies, and other companies you have dealt with. This information surely already resides on a computer somewhere. Life is full of trust decisions that you make without realizing it.
Very few people grow their own food, sew their own clothes, build their own houses, or (even in these tenuous financial times) keep their money in their mattresses any more. We have learnt to trust in an economic system to provide these things. This too did not happen overnight.
I personally believe that Internet connectivity will never be 100% reliable, but how much reliability will be needed depends on the mission criticality of an application. That is why there will always be a role for rich clients and synchronization services.
Hosting companies will have to be large to have the financial stability to handle law suits and survive for the long term. We will have to develop the institutional and legal infrastructure to handle what happens to data and applications when a hosting company fails. We learned how to do this with bank failures and we will learn how to do this with hosting companies.
This could easily take 50 years with many false starts. People tend to overestimate what will happen in 5 years, and underestimate what will happen in 10-15 years.
Azure, the color Microsoft picked for the name of its platform, is the color of a bright, cloudless day. Interesting metaphor for a cloud computing platform. Is the future of clouds clear?
Monday, September 22, 2008
"Software + Services" is Microsoft's representation of what a large part of the future of computing is going to be. Microsoft, however, has not done a great job of explaining what "Software + Services" is.
Based on what I have read and heard, let me try to explain it as I see it.
The fundamental question that one has to ask is "Where does computation happen?"
The obvious answer to everyone today is: "Everywhere".
We compute on mobile devices, appliances, desktops and laptops, and remote computers. We communicate with text and voice.
Everybody understands this. The key question is: "Why?"
I think the answer is because "Hardware is cheap, and data is expensive to move."
The late Jim Gray did an analysis1 of the economics of distributed computing. His analysis came to two conclusions:
1. Put the computation near the data. Unless you have something that is very compute intensive, it is much cheaper to not move the data.
2. If you need data from multiple sites, push the processing closer to the data source by filtering the data early.
The assumption here is that telecommunication prices drop slower than Moore's Law. So far this has always been the case.
The natural conclusion is to do the computation where the data naturally resides. In other words: Do what makes sense. Some things will be in the cloud, some things will still be on the desktop. As long as Internet connectivity is not ubiquitous, and not always connected, you may have to cache data somewhere. Depending on the mission criticality of your application, a few seconds could be a long time.
As Ray Ozzie put it in his MIX Keynote, we live in a "World of small pieces loosely joined."
Software + Services means some things will be services in the cloud, others will be software as we know it today. That includes mobile devices and appliances that we are learning to love and hate, just as we have always done with traditional software.
1. MSR-TR-2003-24 "Distributed Computing Economics"
Thursday, April 03, 2008
I have put my VSLive! talk, explaining how to use Windows Communication Foundation and Windows Workflow Foundation together to create distributed applications, in the Presentations section of my web site.
Friday, March 28, 2008
Quick answer: when I don't know about it, and when two experienced co-workers do not know about it either. I was working on a workflow code sample for an upcoming talk when I started getting ridiculous compilation errors. The compiler could not find the rules definition file even though it was clearly available; the workflow designer could find it, because I could associate it with a policy activity. The compiler also falsely complained about an incorrect type association in a data bind that was clearly correct. Once again the designer had no problem doing the data bind. I tried to find an answer on Google with little success. After two hours of experimenting, I tried a different Google query and came up with the following link: https://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=612335&SiteID=1. The essence of the solution is the following: "this is a well-known problem with code files that have designable classes in them - the class that is to be designed has to be the first class in the file. If you do the same thing in Windows Forms you get the following error: the class Form1 can be designed, but is not the first class in the file. Visual Studio requires that designers use the first class in the file. Move the class code so that it is the first class in the file and try loading the designer again." It turns out I had changed a struct that was defined first in my file to a class. I moved that class to the end of the file and, mirabile dictu, everything worked. So if this is a well-known problem, why can't we get an error message just like in the Windows Forms case?
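To make the constraint concrete, here is a sketch of the file layout that avoids the problem; the type names are hypothetical.

```csharp
using System.Workflow.Activities;

// The designable class must be the first class in the file, or the workflow
// designer and compiler misbehave as described above.
public sealed partial class SampleWorkflow : SequentialWorkflowActivity
{
    // PolicyActivity, data binds, etc. live here.
}

// Helper types (this one started life as a struct) belong after the
// designable class, or in their own file.
public class OrderTotals
{
    public decimal Subtotal;
    public decimal Discount;
}
```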
While it was clearly my mistake, Microsoft has a share of the blame here. No doubt this requirement makes it easier to build the workflow designer, but it would have been just as easy to check whether the class was defined first and issue an error message.
|
|
|
|
|
Thursday, March 06, 2008 |
|
|
I did a short podcast for Consortio Services about Software as a Service as part of their weekly techcast. I very briefly cover what SaaS is about and some of the critical issues facing organizations looking at delivering services using the SaaS model. |
|
|
|
|
Tuesday, March 04, 2008 |
|
|
I am going to be giving two talks and a workshop at VS Live! in San Francisco. The first talk is an "Introduction to Windows Workflow Foundation" where I explain both the business reasons why Microsoft developed Workflow Foundation as well as the technical fundamentals. This talk will help you understand not only how to build workflows, but when it makes sense to do so and when to use some other technology. The second is "Workflow Services Using WCF and WWF". WCF allows you to encapsulate business functionality into a service. Windows Workflow Foundation allows you to integrate these services into long-running business processes. The latest version of the .NET Framework (3.5) makes it much easier to use these technologies together to build some very powerful business applications. On Thursday I will give a whole-day tutorial on Workflow Foundation where we will dive into the details of how to use this technology to build business applications. Other speakers will talk about VSTS, ALM, Silverlight, AJAX, .NET Framework 3.0 and 3.5, Sharepoint 2007, Windows WF, Visual Studio 2008, SQL Server 2008, and much more. If you have not already registered for VSLive San Francisco, you can receive a $695 discount on the Gold Passport if you register using priority code SPSTI. More at www.vslive.com/sf. |
|
|
|
|
Tuesday, February 12, 2008 |
|
|
One of the great features in Visual Studio is the ability to start up more than one project at the same time. You do not need to create two solutions, for example, for a client and a server to be able to debug them both. I thought everybody knew how to do this, but when I found out that two members of a project team I am working with did not, I decided to blog how to do this. Select the solution in the Solution Explorer and right-click to bring up the context menu. Select the Set Startup Projects menu item, and a property page will appear that lists all the projects in the solution. You can associate an action with each of the projects: None, Start, or Start without debugging. When you start execution, the projects that you wanted to start up will begin execution. If you allowed debugging, and set breakpoints, the debugger will stop at the appropriate places. |
|
|
|
|
Monday, February 11, 2008 |
|
Thursday, January 17, 2008 |
|
Thursday, November 22, 2007 |
|
|
The Windows Workflow Foundation (WF) ships with a Policy Activity that allows you to execute a set of rules against your workflow. This activity contains a design time rules editor that allows you to create a set of rules. At run time, the Policy Activity runs these rules using the WF Rules engine.
Among other features, the rules engine allows you to prioritize rules and to set a chaining policy to govern rules evaluation. The rules engine uses a set of Code DOM expressions to represent the rules. These rules can be run against any managed object, not just a workflow. Hence, the mechanisms of the rules engine have nothing to do with workflow. You can actually instantiate and use this rules engine without having to embed it inside of a workflow. You can use this rules engine to build rules-driven .NET applications.
I gave a talk at the last Las Vegas VSLive! that demonstrates how to do this. The first sample in the talk uses a workflow to demonstrate the power of the rules engine. The second and third samples use a very simple example to demonstrate how to use the engine outside of a workflow.
Two problems have to be solved. You have to create a set of Code DOM expressions for the rules. You have to host the engine and supply it with the rules and the object to run the rules against.
While the details are in the slides and the examples, here is the gist of the solution.
To use the rules engine at runtime, you pull the workflow rules out of some storage mechanism. The first sample uses a file. A WorkflowMarkupSerializer instance deserializes the stored rules to an instance of the RuleSet class. A RuleValidation instance validates the rules against the type of the business object you will run the rules against. A RuleExecution instance ties the validated rules to that business object; calling Execute then invokes the rules engine and runs the rules.
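Here is a minimal sketch of those steps. The file name and the Order business type are hypothetical, and the member names are from memory of the System.Workflow.Activities.Rules API, so double-check them against the samples.

```csharp
using System;
using System.Workflow.Activities.Rules;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;

public static class RuleRunner
{
    // Loads a serialized RuleSet from a .rules file and runs it against a
    // business object (Order is a hypothetical type).
    public static void ApplyDiscountRules(Order order)
    {
        WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
        RuleSet ruleSet;
        using (XmlTextReader reader = new XmlTextReader("DiscountRules.rules"))
        {
            ruleSet = (RuleSet)serializer.Deserialize(reader);
        }

        // Validate the rules against the type they will run against.
        RuleValidation validation = new RuleValidation(typeof(Order), null);
        if (!ruleSet.Validate(validation))
        {
            throw new InvalidOperationException("RuleSet failed validation.");
        }

        // Tie the rules to the object instance and execute them.
        RuleExecution execution = new RuleExecution(validation, order);
        ruleSet.Execute(execution);
    }
}

public class Order
{
    public decimal Subtotal;
    public decimal Discount;
}
```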
How do you create the rules? Ideally you would use some domain language, or domain-based application, that would generate the rules as Code DOM expressions. If you were masochistic enough, you could create those expressions by hand.
As an alternative, the second sample hosts the Workflow rules editor dialog (RuleSetDialog class) to let you create the rules. Unfortunately, like the workflow designer, this is a programmer's tool, not a business analyst's tool. A WorkflowMarkupSerializer instance is used to serialize the rules to the appropriate storage.
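A sketch of hosting the dialog and saving the result might look like the following; the file name and the Order type carry over from the previous sketch, and the signatures are from memory of the RuleSetDialog API, so verify them against the documentation.

```csharp
using System.Windows.Forms;
using System.Workflow.Activities.Rules;
using System.Workflow.Activities.Rules.Design;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;

public static class RuleAuthoring
{
    // Lets a developer edit a RuleSet for the (hypothetical) Order type and
    // writes the result out as XML for the runtime host to load later.
    public static void EditAndSaveRules(RuleSet existingRules)
    {
        RuleSetDialog dialog = new RuleSetDialog(typeof(Order), null, existingRules);
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
            using (XmlTextWriter writer = new XmlTextWriter("DiscountRules.rules", null))
            {
                serializer.Serialize(writer, dialog.RuleSet);
            }
        }
    }
}
```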
I would be interested in hearing about how people use this engine to build rules-driven applications.
|
|
|
|
|
Wednesday, October 31, 2007 |
|
|
Meditation is supposed to develop awareness, help focus your attention, and help you relax while increasing your focus. At one of my current clients we are developing a Software as a Service (SaaS) application. We have developed the following "meditative principles":
1. It's not done until the tests are done.
2. If it's broke, fix it first.
3. If it's not in a script or code, it doesn't exist.
4. Don't explain, do it (but ask questions if you don't understand).
And finally (with apologies to Bobby McFerrin), "Don't worry, be agile".
Here is a little song I wrote
You might want to sing it note for note
Don't worry, be agile
In every software we have some trouble
When you worry you make it double
Don't worry, be agile
Ain't got no place to lay your head
Somebody came and took your machine
Don't worry, be agile
The manager say your code is late
He may have to litigate
Don't worry, be agile
Look at me I refactor
Don't worry, be agile
Here I give you my url
When you worry call me
I make you agile
Don't worry, be agile
Ain't got no time ain't got no style
Ain't got no money to make you smile
But don't worry self organize
Cause when you worry
Your face will frown
And that will bring everybody down
So don't worry, be agile (now)
There is this little song I wrote
I hope you learn it note for note
Like good little developers
Don't worry, be agile
Listen to what I say
In your software expect some trouble
But when you worry
You make it double
Don't worry, be agile
Don't worry, don't do it, be agile
Put a smile on your face
Don't bring everybody down like this
Don't worry, it will soon pass
Whatever it is
Don't worry, be agile |
|
|
|
|
Monday, August 20, 2007 |
|
|
My series of four digital articles has been published by Addison-Wesley. You can get the links to purchase them and the associated source code from my web site.
I have tried to explain, in practical terms, what you need to know to actually build real world software using Windows Workflow. There is a tiny amount of theory to explain the underpinnings. The vast majority of the explanation uses code examples to illustrate all the key points. The last shortcut in the series has two extended examples that illustrate how to build custom activities. |
|
|
|
|
Sunday, October 29, 2006 |
|
|
Here are good instructions on how to install RC1 for the .NET Framework 3.0: http://blogs.msdn.com/pandrew/archive/2006/09/07/745701.aspx. People, including myself, have been having problems getting the Workflow Extensions for Visual Studio 2005 installed. I moved the installer file (Visual Studio 2005 Extensions for Windows Workflow Foundation RC5(EN).exe) to a different directory from the other installation files. The workflow extensions then installed just fine. |
|
|
|
|
Friday, September 29, 2006 |
|
|
David Chappell (http://www.davidchappell.com/HTML_email/Opinari_No16_8_06.html) argues that SOA may not foster the service reuse that everyone has been hoping for. I think his analysis is correct, but with business services we at least have a reasonable hope of achieving reuse. Here we are at least dealing with the way things actually happen in the world as opposed to programmer abstractions such as objects or components. That, combined with the looser coupling of services, gives me some hope.
The reason why frameworks like .NET are successful is they reflect years and years of experience with programming problems. Many examples of reuse (such as file systems and compilers) are so embedded in our experience that we no longer see them for what they are.
Reuse may fail here as well for all the reasons mentioned in David Chappell's analysis. At least now I feel we are on the right track. |
|
|
|
|
Friday, September 01, 2006 |
|
|
The Reference Model for Service Oriented Architecture defines a vocabulary for building service-oriented systems. Put together by a technical committee operating under the auspices of the OASIS standards organization, it is the result of individuals and organizations representing vendors, users, governments, consulting organizations, and academic institutions.
The Reference Model (RM) sees SOA as a means for organizing and using distributed capabilities that may be under the control of different ownership domains. The RM is not an architecture. It does not attempt to make any architecture normative. It does not try to make any standard or set of standards normative.
It does provide a common set of semantics that can be used across different implementations. This does sound rather fancy. Nonetheless, just like Moliere's bourgeois gentleman who found out he had been speaking prose all his life, many industries have been using reference models all along. They just never had to define them explicitly.
An architect for a residential dwelling knows that if they use the term door or window, the builder will understand what is meant. There are widely varied implementations of doors and windows depending, for example, on whether you are building a space station or an igloo. Nonetheless, everyone knows what the terms mean. Many of these terms are codified in building codes, and by standards bodies, and have evolved over the years. The software architecture community moves too quickly for such evolution; this is where standards organizations can help.
Software architectures, for sure, can have views and viewpoints, but the terms in which they are discussed have to be understood.
The core concepts that the RM discusses are service, visibility, execution context, service description, real world effect, interaction, and contract and policy.
I will discuss these core concepts over the next few posts.
None of this work is going on in isolation, nor is it intended to denigrate other work such as the WS* specifications, or organizations such as the ISO, IEEE, IETF, the Ontolog Forum or other groups. The reference model just supplies standard definitions so that it becomes easier for each group to communicate with the others. |
|
|
|
|
Monday, August 14, 2006 |
|
|
I have updated the workflow examples on my site to the most recent Workflow version. |
|
|
|
|
Monday, July 03, 2006 |
|
|
I would like to thank all those who helped me achieve a Microsoft MVP award for Visual Developer - Solutions Architect. |
|
|
|
|
Wednesday, June 28, 2006 |
|
|
"You are so young; you stand before beginnings. I would like to beg of you,
dear friend, as well as I can, to have patience with everything that remains unsolved in your heart. Try to love the questions themselves, like locked rooms and like books written in foreign languages. Do not now look for the answers. They cannot now be given to you because you could not live them. At present you need to live the question."
- Rainer Maria Rilke, Letters to a Young Poet
I learned programming in high school from a Fortran IV manual, which was like learning how to drive a car from the owner's manual—unexciting. Later, after taking an operating systems course at MIT, I gave up entirely on programming as a profession. I did not want to spend my life doing the same thing over and over again.
What made me change my mind and become a professional programmer? In large part it was Gerald M. Weinberg's The Psychology of Computer Programming, which I first read in 1982. Weinberg not only demonstrated that programming is more than technology, that it is a social activity; he also showed how the social element relates to the technical. In essence, he identified and addressed the types of fundamental questions that Rilke advised the young poet Franz Kappus to study, such as:
- “What does it mean when we say a program is good?” I learned from Weinberg that a good program is as much a matter of cultural fit as technological merit. A designer has to understand that tradeoffs are made not only among technical factors, but among technical, social and economic constraints. Too often, I have seen engineers try to build the “perfect” product while ignoring ease of use or budget constraints.
- “How do you get programmers to work together as a team?” One programmer cannot do it all. Different people have different skills, and different skills are needed at different stages of the project.
- “What is leadership all about?” How do you manage change and performance? Why do many managers manipulate programmers and treat them poorly and then wonder why they get poor results?
- “How do you find good programmers?” And just what does it mean to be a good programmer? Weinberg was one of the first to point out the stupidity of aptitude testing for programmers, and the importance of understanding individual psychology in dealing with programmers.
Weinberg confirmed my own intuition that software could have an enormous impact on society, and his discussion of programming as a social activity helped explain much of the “strange behavior” I saw around me as I began working on my first programs. For example, during one of my first projects, I saw that the inability of certain people to work together had more impact on the project’s outcome than the technological issues being debated.
Weinberg examined what a naïve programmer would consider just technical topics and demonstrated how the elements of human personality and interactions between people had just as much influence, if not more, over the outcome of a computer programming project as the technical issues and debates.
By understanding the importance of questions such as these, even if not every question can be answered in every situation, my value as a programmer and designer transcends whatever today’s technology du jour happens to be. I would have to say that Weinberg’s book took years off my apprenticeship, and saved me much aggravation.
To this day, I view programming primarily as a human activity, with the technical merits secondary. This does not mean you can ignore the technical merits. What makes a programmer really great is not technical genius, but an understanding of the human context of what he or she is doing. Any programmer who creates a truly revolutionary and world-changing program understands this. Others did not, or did not care to, and their contributions are hidden behind or overwhelmed by others’ accomplishments.
It is incredible that a twenty-five year old programming text containing examples illustrated with technologies that many programmers today cannot even conceive of—I recently taught a programming class where not one of the students had any idea what a punched card or paper tape was—is still a great book. |
|
|
|
|
Wednesday, June 21, 2006 |
|
|
I have signed the petition at: http://www.mwdadvisors.com/resources/stop-the-madness.php. SOA does not need another buzzword. I think SOA 2.0 ranks below even ESB on the buzzword list.
This is also an experiment. We have heard of viral marketing. Let us see if we can have viral common sense.
|
|
|
|
|
|
How do workflow and service oriented architecture relate?
The real question is how service oriented architecture (SOA) and business processes relate.
Service orientation is about how to organize and utilize distributed capabilities that could be under the control of different owners.1 Business Process Management (BPM) is about modeling, designing, deploying and managing business processes.2 Business processes are the capabilities, or the users of those capabilities. Workflow is a technology that builds the automated part of a business process. It integrates human decision with synchronous and asynchronous software systems. Of course this is somewhat recursive because a workflow could use other services in its implementation.
For me, SOA and BPM are not in conflict. People talk about layering BPM on top of SOA. Or that SOA is for IT folks, and BPM is for business people. In today's world, business cannot afford to have people who just think IT, or just think business. Given the way the human mind works, multiple models are often needed to think about certain problems.3 SOA and BPM are two different ways to think about the same problem: how organizations can best accomplish their missions. Thinking about business process will transform how you architect your services. Architecting your services will impact how you model your business processes.
1 For more information about service oriented architecture take a look at the Reference Model that the OASIS TC that I am a member of has produced: http://www.oasis-open.org/committees/download.php/18486/pr-2changes.pdf
2 See http://ww6.infoworld.com/products/print_friendly.jsp?link=/article/06/02/20/75095_08FEbpmmap_1.html
3 See "Mental Models" by P.N. Johnson-Laird in Foundations of Cognitive Science edited by Michael I. Posner
|
|
|
|
|
Sunday, June 04, 2006 |
|
|
Here is the first of four talks on Microsoft Windows Workflow Foundation that are appearing on Carl Franklin's dnrTV. This one was broadcast on June 2. Each of the following ones should appear in subsequent weeks.
http://dnrtv.com/default.aspx?showID=21
|
|
|
|
|
Tuesday, February 07, 2006 |
|
|
(Apologies to Christopher Alexander)
Christopher Alexander, the architect who inspired the Design Patterns movement, wrote a two part article that appeared in the April and May 1965 issues of Architectural Forum entitled “The City is Not a Tree.” The tree in the title is not a biological tree, but refers to a hierarchy being used as a way to organize how modern cities are built.
We all try to organize the world into neat categories. It helps us make sense of the world. Unfortunately, those categories and subcategories force us to view the world as a set of hierarchical categories. Alexander argued that architects who think that way produce buildings and cities that are sterile and unlivable. For example, zoning that refuses to mix residential, industrial and commercial use has some very severe drawbacks in transportation, living conditions, and tax policy.
The world has too many interrelationships to be viewed as a hierarchy; it is really a semi-lattice. There are parts of the world that are hierarchies, and a hierarchy is a semi-lattice, but the reverse is not true. The point is that if you view the world as a hierarchy you miss the true picture.
Software often has to model some part of the world. The World Wide Web is a semi-lattice. Imagine what the Web would be like if it could only be structured as a hierarchical directory such as Yahoo. Don’t get me wrong; neat categories are often useful. But Search has become such an important part of the Web because it allows you to capture the relationships in a semi-lattice.
Take the classic example I used to give my software engineering students when teaching them about abstraction and object-oriented systems: How do you define a chair? Of course they start out with a standard definition. A chair has a back, a seat, and four legs. But what about a bean bag? Or even a table? In the end, what emerges is that a chair is about a relationship between a piece of anatomy and a surface that can support it. It is a relationship, not an object with constraints.
This is what led Alexander to focus on patterns and not components. Of course, some patterns could become components. But components (software or otherwise) are packaging artifacts, not fundamental abstractions. This is why the authors of Design Patterns have the principle of "Favor object composition over class inheritance." Class inheritance is a hierarchy. Object composition allows you to build a semi-lattice if that is appropriate.
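Here is a small sketch of that principle in C#; the types are hypothetical and simply echo the chair discussion above.

```csharp
// A small sketch of "favor object composition over class inheritance".
// The Seat does not inherit from a rigid furniture hierarchy; it composes
// whatever surface happens to support the sitter, which models a
// relationship rather than a category.
public interface ISupportingSurface
{
    bool CanSupport(double kilograms);
}

public class BeanBag : ISupportingSurface
{
    public bool CanSupport(double kilograms) { return kilograms < 150; }
}

public class Table : ISupportingSurface
{
    public bool CanSupport(double kilograms) { return kilograms < 200; }
}

public class Seat
{
    private readonly ISupportingSurface surface;  // composition, not inheritance

    public Seat(ISupportingSurface surface) { this.surface = surface; }

    public bool CanSeat(double kilograms) { return surface.CanSupport(kilograms); }
}
```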
Focusing on relationships means you focus on behavior, on what happens in the real world. Systems built on behavior are more flexible and more scalable than those based on constrained objects. Of course not all systems have to be flexible and scalable. Flexible and scalable often conflict with other desired goals such as performance.
Service orientation is based on focusing on the relationships or behaviors between the capabilities of distributed services because ultimately, a service performs some action in the real world. In service oriented systems you do not focus on constrained objects. You try to model the world as the semi-lattice it really is. 1
[1] Look at http://polaris.gseis.ucla.edu/pagre/simon.html for another interesting perspective. |
|
|
|
|
Saturday, November 19, 2005 |
|
|
One of the dogmas of messaging technology is that the "truth is always on the wire." In the context of interoperability that is certainly true. The message, not the platform object model that generated the message, is all that really exists between a service provider and consumer.
Like all principles it has its limits. The statement the "truth is on the wire" only means that using an agreed upon message format is equivalent to using a common syntax for a language such as English. It does not matter how you define the message format. XML Schema, RelaxNG, or just "ask Alice" are all equivalent. Humans are better at handling ambiguity than machines, hence English syntax can be a lot looser than a message format. Nonetheless, the point remains valid.
Syntax tells you nothing about the semantics of the message. For those of you who abhor fancy terminology, semantics means nothing more or less than the real world actions that arise from processing the message.
Just like you can misunderstand an English sentence, you can "misunderstand" a SOAP message. This misunderstanding may be a programming error, or a misunderstood or mismatched policy.
For example, I send to my bank a correctly formatted message that says transfer $1000 from my cash reserve to my checking account. If the bank transfers the money from savings to checking, that is a programming error. The "wire truth" however was not violated.
Now suppose that the bank made the correct transfer, but the bank's policy (which I did not know of at the time) was to report such transfers to a credit bureau. My altered credit score resulted in a higher interest rate on the loan I was applying for. Understanding a service's policy is as important as understanding the message format.
Truth is not on the wire, truth is the real world effect of what happens when a SOAP message is processed. Truth is semantics.
|
|
|
|
|
Monday, October 24, 2005 |
|
|
Agile based software development methodologies often remind me of the story about the person who jumps off a 100 story building, and passing the 45th floor yells out "No problems yet!"
Agile based software methods have many good ideas. Their critique of the waterfall method has great merit. The best documentation is the code itself. Document based solutions do not work. But it does no good to demolish one myth only to have it be replaced by another.
The attempt to completely design everything up front is futile, but it does not follow that you can iterate every few weeks and wind up with an adequate design. That might work for a project that is strongly user interface or end-user driven. I doubt it would work for designing an air traffic control system, or system software such as Microsoft's Windows Communication Foundation. These kinds of projects have strong lifecycle requirements about safety, security, performance, or scalability. Often they require individuals to acquire new areas of knowledge or expertise.
Barry Boehm's spiral model of software development is a much better approach.1 The idea behind the spiral model is that at each choice point in the software development process one assesses the risk that the project could fail to meet its goals. Based on that analysis the next step is to mitigate that risk. It might mean doing a prototype, refining the requirements, or doing more testing. Some of these tasks may be done concurrently. Analyzing the results of these steps might cause the development process to backtrack. In all cases, the views of all the project stakeholders (customers, developers, marketing, etc.) are considered at each analysis point.
Given this approach, the classic view of the process (from Boehm's original paper) looks like a spiral.
Since the spiral model is a risk driven process, some circumstances might dictate an agile methodology. Other cases would require other approaches. By making risk the focus, rather than a manifesto of principles, there is a higher probability of making the correct choices.
Let risk mitigation guide your development process.
1. Boehm's original paper appeared in IEEE Computer 21(5) 61-72 in 1988. In 2000 he updated the model at the "Spiral Development: Experience, Principles, and Refinements Spiral Development Workshop". |
|
|
|
|
Friday, August 05, 2005 |
|
|
One of the benefits of service oriented systems is that they are loosely coupled.
David Orchard analyzes what loose coupling means from the perspective of the Web services stack. A human being can recognize that a field in a form is misplaced; software cannot. So for a particular message invocation, early binding is necessary. This is certainly true for standards. There needs to be a defined place for addresses and security tokens.
Orchard asks us to imagine a Purchase Order system. A particular piece of information in a particular message must be bound to the appropriate programming types. If you need to know the name of the purchaser, you must early bind to the format of that name. Or to use fancy language, the service must understand its semantics. But it is only necessary for those programming types that the service needs to understand. Here is where building service interactions as messages rather than as remote procedure calls (RPC) is important.
If a service interaction is defined in terms of RPC, then if you change the semantics, you must change the service interface. If even one type in the method call changes, the whole interface is broken. If you send messages (concretely, XML messages), then so long as the service can find the information it needs, the service is not bound to a particular message format. Other information can change, but the service does not care.
For example, if a service processing a message does not care about security, it can ignore the WS-Security SOAP headers. Those headers can change and the service can ignore all the security possibilities. The inventory service does not care if the credit information changes.
True, if XPath is used you are dependent on a certain structure to find information, but if you mark your documents with their version, or associated XML Schema, you could use the appropriate location path for the document. Or if you want to bind everything to type you can use the appropriate XML Schema instance to serialize the message to the appropriate programming types.
Loose coupling at the application level is about inserting levels of indirection to handle versioning (so what else is new?). But a message can do this because at the service interface the message is opaque. An RPC is not opaque.
At the application level, loose coupling is about how easy it is to make a change that does not impact other parts of the system. With opaque messaging, a new version can be added without impacting other clients. If a service wants to reject a version it no longer supports, or does not yet support, it can do so without impacting other clients. In this restricted, but vitally important sense, semantic meaning in a Web service can be late bound.
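As a rough sketch of this kind of late binding, the service below extracts only the element it needs from an incoming message; the message shape and the PurchaserName element are hypothetical.

```csharp
using System.Xml;

// Minimal sketch (hypothetical message shape): the service binds only to the
// elements it actually needs, so unrelated additions to the message do not
// break it the way a changed RPC signature would.
public static class OrderIntake
{
    public static string ExtractPurchaserName(string orderXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(orderXml);

        // Early-bind only to the purchaser name; ignore security headers,
        // credit details, or any new elements added in later versions.
        XmlNode name = doc.SelectSingleNode("//*[local-name()='PurchaserName']");
        return name != null ? name.InnerText : null;
    }
}
```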
|
|
|
|
|
Tuesday, March 29, 2005 |
|
|
Sarbanes-Oxley mandates that public companies should be able to produce all materially relevant transactions during an audit.
In the world of service oriented architecture, huge volumes of business documents flow freely as messages between services. These services are orchestrated (or choreographed if you wish) to produce business processes. To give you some idea of the volume, some people fear that the volume of XML is starting to take larger and larger fractions of network bandwidth. This is why some are starting to push the use of Binary XML for SOA messages.
In this world of huge stores of electronic messages and documents, how in the world do you find all the relevant ones? This is where XML Schema comes to the rescue. Your XML documents should be defined with schema, and hence subject to validation. Performance considerations may dictate that you do not validate your documents during message processing. Nonetheless, with schema definitions you should be able to query your messages to search and find the relevant documents.
For example, if you need to find all transactions with a given company worth over a certain threshold, you have the tools to find them.
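A minimal sketch of such a query, assuming a hypothetical archive of schema-validated transaction documents with Counterparty and Amount elements:

```csharp
using System.Globalization;
using System.Xml;

public static class AuditQueries
{
    // Find transactions with a given counterparty above a materiality
    // threshold. The Transaction/Counterparty/Amount names are hypothetical;
    // the point is that schema-defined documents can be queried reliably.
    public static XmlNodeList FindMaterialTransactions(XmlDocument archive,
                                                       string company,
                                                       decimal threshold)
    {
        string xpath = string.Format(CultureInfo.InvariantCulture,
            "//Transaction[Counterparty='{0}' and number(Amount) > {1}]",
            company, threshold);
        return archive.SelectNodes(xpath);
    }
}
```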
|
|
|
|
|
Tuesday, March 08, 2005 |
|
|
Microsoft's Indigo platform will unify all the divergent transport technologies (ASMX, WSE, COM+, MSMQ, Remoting) that are in use today. For building a service on the .NET platform this is the technology you will use.
What technology should you use today?
The ASMX platform's programming model is the same as Indigo's. Attributes, indicating what technologies (security, reliability, etc.) you want the infrastructure to use, are applied to methods. Hence, a converter will be provided to convert ASMX code to Indigo code.
Does this mean ASMX should be the technology of choice? I would argue that WSE is the better technology to use. WSE's programming model is not that of Indigo. Classes and inheritance are used to interact with the WSE infrastructure. WSE will interoperate with Indigo. Nonetheless, the conceptual model of WSE is identical to that of Indigo.
ASMX is tied to the HTTP transport and its request / response protocol. It encourages programmers to think of a service call as a remote procedure call with programming types, not as an interoperable, versioned XML document message validated by XML Schema.
Service developers need to think of request / response as one of several possible message exchange patterns (MEP). The most fundamental MEP, the one all MEPs are built from, as the WS-Addressing spec makes clear, is the one-way asynchronous message. Business services tend to be asynchronous; you apply for a loan and you do not hear back for days.
Service messages can go through intermediaries before reaching the ultimate recipient. Each message segment may go over transports other than HTTP.
WSE's transport classes allow you to build services that use different MEPs over various transports. The SOAP envelope classes make it easy to build the SOAP message body as XML, or serialized XML objects. You learn to think in terms of XML documents and messages, not execution environment dependent types.
Using this conceptual model your services will last longer, and be easier to evolve in a business environment. That will be of more use to your business than using a technology that has a better upgrade path, but whose services will have to be rewritten sooner because they are poorly designed and implemented.
|
|
|
|
|
Friday, December 24, 2004 |
|
|
Grady Booch has fired another attacking missile in the great debate over software factories, and the idea's defenders have replied. Microsoft's view of the world is outlined in a series of articles by Jack Greenfield on the MSDN site:
http://msdn.microsoft.com/architecture/overview/softwarefactories/default.aspx?pull=/library/en-us/dnmaj/html/aj3softfac.asp
http://msdn.microsoft.com/architecture/overview/softwarefactories/default.aspx?pull=/library/en-us/dnbda/html/softwarefactwo.asp
http://msdn.microsoft.com/architecture/overview/softwarefactories/default.aspx?pull=/library/en-us/dnbda/html/softfact3.asp
The basic idea behind software factories is to move the production of software from a craft to an industry resembling manufacturing. This is not a new idea. In the second article, after reviewing why these efforts have failed in the past, Greenfield says:
"We are unable to achieve commercially significant levels of reuse beyond platform technology. The primary cause of this problem is that we develop most software products as individuals in isolation from other software products. We treat every software product as unique, although most are more similar to others than they are different from them. Return on investment in software development would be far higher if multiple versions or multiple products were taken into account during software product planning. Consequently, we rarely make commercially significant investments in identifying, harvesting, packaging and distributing reusable assets. The reuse that does occur is ad hoc, rather than systematic. Given the low probably that a component can be reused in a context other than the one for which it was designed, ad hoc reuse is almost an oxymoron. Reuse rarely occurs unless it is explicitly planned in advance."
I agree as far as he goes, but I do not think he fully comes to grips with why the idea of software factories has a long way to go. No doubt part of the reason is that the idea of a software factory resembles the idea of artificial intelligence. Every success redefines the goal. Nonetheless, I think there is a more fundamental reason.
Programming has always been a labor intensive activity. As a result, from the very start people have tried to figure out how to automate as much of the process as possible. Compilers were one of the very first attempts to automate software development. Today we take them for granted, but well into the 1970s people were still arguing over whether a good human assembly language programmer could code better than a compiler. Debuggers, linkers, loaders, file systems, operating systems, and distributed transaction coordinators were all invented to automate parts of software development. How long did it take for the idea of a virtual machine (as in Java or .NET) to become practical for most software development?
You can, as Greenfield does, view these artifacts as improved abstractions. Abstractions are very critical to software development. I view these developments differently. I see them as automating what we understand how to automate. Code libraries such as for .NET and Java are in the same category. After years of experience we now understand enough of some of the critical elements of certain parts of software development to encapsulate them in libraries.
But software is not like other engineering pursuits such as bridge building. Most bridges, although they look different, are really one of a few basic kinds. Because you cannot copy a bridge like you can copy a program, you need to build a new bridge at every place you need to cross a river. Hence you can much more easily replicate what you did before, or learn from experience.
This really became clear to me when I was a graduate student in nuclear engineering. In the reactor design course final exam we were asked to design a cooling system for a nuclear reactor. We were not, however, to use our fundamental understanding of physics and engineering to do this. We were to apply the American Society of Mechanical Engineers (ASME) standards for cooling systems to do the design.
Why has this been so difficult to do with software? Since software is easy to copy, you only need to create a new piece of software when you need to do something new. Automation is about understanding what you have done in the past. You cannot automate what you do not understand. So much of software development is done for things we have not done before. We do not know, or do not fully understand, the domain models that Greenfield relies on for the idea of software factories. This is why the CASE tools and code generators of the past have not provided any reduction in cost and time. This is why I think UML based code generators will not be wildly successful either.
Of course you need to understand the domain models. It is just that in a dynamic economic environment, you do not arrive at the knowledge in time to automate the process until it becomes yesterday's understanding. As yesterday's understanding it will take a while to see if it is fundamental enough to be worth automating.
So long as software is about innovation, or doing what we do not yet understand how to do, it will always have a large craft component. Maybe automation will decrease the need for programmers, and thus reduce the labor cost. So far this has not happened in 60 years. People may use cheaper programmers, but that is another story for another time.
|
|
|
|
|
Thursday, December 16, 2004 |
|
|
Adam Bosworth has given a talk (discussed in his blog entry) that has received a lot of attention and comment. He argues that software programs and their tools are way too complex and should be simple.
The problem I have with his argument (and arguments similar to that) is that it posits a false binary choice: either be complex or simple. Complexity is a continuum. Bosworth argues against sophisticated abstractions. But it is sophisticated abstractions that make simplicity possible.
After all, the computer is just atomic particles. Does any programmer worry about that? Or the gotos/branches that are all over the microcode? What about the instruction pipeline? That is all abstracted away in the "hardware". How many programmers worry about exactly how the operating system scheduler works? The whole idea behind class libraries that come with Java and .NET is to allow the programmer to concentrate on the business logic and not worry about the "plumbing code".
Occasionally we have to break through that abstraction and worry about exactly how things work. I discovered that when I wrote my first test code to test the performance of the first MIPS machines back in the 1980s. I found that if I did not return a value from my test routine, the loops would be optimized out. Most of the time we can remain blissfully ignorant of the abstractions. Performance, scalability, and most important of all security, are problems that are classic examples of where we often have to worry about complexity and look at the abstractions. The solutions to those problems are sometimes simple, but more often than not messy.
You cannot divorce simplicity from abstraction. People dealing with complicated things need complicated abstractions. Engineers often make products and technologies that are too geeky, but sometimes things are too simple. After all, the Swiss Army knife comes in several sizes. You can match the level of simplicity that you need.
The Swiss Army knife analogy strikes at the heart of the issue for me. You need to keep it simple enough. Saint-Exupery's famous saying applies here: perfection is achieved in design when there is nothing more to take away, not when you have nothing more to add. In other words, you have to keep it simple, but it still has to accomplish the task. The issue is to make it simple enough for your user, whether they be a writer of a blog, or a user of a class library. But even the simple user, to be effective, has to understand the limits of the tool or, to be more sophisticated, the abstractions and assumptions used. This applies to all sophisticated problems whether they be the accuracy of a medical test, the stability of Social Security, or the usefulness of Atom or RSS.
Bosworth speaks about the virtue of "keeping it simple and sloppy and its effect on computing on the internet." Well, if you have to be HIPAA compliant you cannot be sloppy and forgiving of human foibles and weaknesses. Human weaknesses and foibles are precisely the problem, and they cannot be abstracted or assumed away to achieve simplicity. If you do so, you will have a system so rigid, so bureaucratic, it would be unusable.
Bosworth concludes by talking about achieving simplicity in the information search space to avoid information overload. He talks about data mining and machine learning as the potential solutions. But they all rely on abstractions about what is important, and what is not. Users had better understand how they work. I cannot wait for the day when the social scientists start deconstructing data mining and machine learning for their social assumptions. At that point both humans and machines will prove once again what Hobbes argued so many years ago. Knowledge and the assumptions that go with it are the product of human actions. Knowledge is partly determined by our social relationships and what we assume. Simplicity results from assumptions and abstractions. But we cannot hide from the mess in the name of simplicity.
|
|
|
|
|
Monday, November 01, 2004 |
|
|
David Chappell, in his latest newsletter, argues that Service Oriented Architecture (SOA) promotes software reuse far better than objects do. For him, object reuse usually fails for two reasons. First, in an evolving business environment it is difficult to come up with a good definition of a business object such as a customer. Second, software developers seem to catch the not-invented-here plague fairly easily.
I certainly agree with this.
One of my favorite questions when discussing object oriented design with students is to ask for the definition of a chair. Invariably, the answers will include legs. Then I ask them if an ottoman or a bean bag chair fits their definition. Coming up with good class definitions is hard. I have been involved in software development long enough to have frequently seen the not-invented-here syndrome.
He also argues that the best examples of reuse occur with applications such as PeopleSoft or SAP. SOA reuse resembles application reuse. I certainly agree with this as well.
But, in my opinion, the fundamental reason that reuse with SOA will be more frequent than with objects is that SOA reuse is loosely coupled black-box reuse, while most object reuse is tightly coupled white-box reuse. SOA is a design pattern that has benefited from our struggles with object-oriented approaches.
When you treat a component as a black box you interact with it through its external properties, or as we say in the programming world, its interfaces. You do not have to understand how a fork is built or what it is composed of. If you do not like your two-pronged dessert fork, you get a three-pronged fork to eat your peas. You do not try to modify the two-pronged fork. The real world is loosely coupled.
With inheritance you must understand the object's implementation. You must always be aware of the fragile base class problem. You can change the behavior of existing programs if you are not careful. This only reinforces the not-invented-here syndrome because it is perceived to be easier to write classes from scratch, than to try to understand another programmer's code. Objects are tightly coupled because they are connected through a stack and a linker/loader.
Modern object oriented practitioners have realized this. The design patterns community emphasizes composition/delegation over inheritance, and interfaces over implementation. You use the component through its interface treating it as a black box. Interfaces produce looser coupling.
SOA is a design pattern for building business process in the real world. It does not tell you how to architect your application. It does not provide an implementation. It does tell you to construct your services completely independent of each other. Services are independently deployed. Code or database tables are not shared between services. Services in a SOA interact through a loosely coupled interface that is defined as a series of messages.
SOA is a continuation in the large of what good object oriented designs have started to become: less emphasis on the classes, and more emphasis on the loose coupling provided by interfaces. Obviously objects provide the actual implementation, but that is not how the users of those implementations view them. They view them as black box interfaces.
Class frameworks such as J2EE and .NET have been successful. Both those frameworks encapsulate stable, well-understood problems that the software world has been working on for over 50 years. Not only are the problems of business not well understood, the environment is quite dynamic. Services in a SOA are loosely coupled black boxes because they reflect the loosely coupled real world.
|
|
|
|
|
Wednesday, August 11, 2004 |
|
|
Everybody talks about how the New England Patriots Super Bowl win last year was a team effort. Whether it was the backup quarterback imitating Peyton Manning on the scout team, the statisticians, the players, the coach, or the personnel guy, everybody contributed.
Teamwork, of course, is one of the perennial topics du jour in the software world. Demarco and Lister’s classic Peopleware talks about it, introducing the concept of a “jelled” team. All the variants of Extreme Programming rave about it.
But what makes a team good? It is difficult to have a good discussion about software teams because there are not enough public concrete case studies. Could sports teams provide a basis for such a discussion?
Peter Drucker, the management expert, thought so. In a Wall Street Journal essay written several years ago, he discussed three paradigmatic teams: baseball, football and tennis.
What type of team does your organization have? As Drucker makes clear in his essay these teams are distinct alternatives. They have unique strengths and weaknesses, but attempts to combine parts of each are a recipe for disaster.
I bet a lot of traditional software organizations have baseball-style teams. You do not play as a baseball team; you are a member of the team. The third baseman never pitches, the tester does not do development. Designers do not have much interaction with developers. This is the old-style Detroit assembly line. The big advantage to this team is that it makes it easy to train and evaluate personnel. Everybody can be a "star" no matter how difficult they are to get along with. On many plays, certain players are not important. The left fielder does not do much on a routine ground ball to the second baseman. Symphony orchestras are like this as well.
This approach works well when the task is well understood and can be reduced to "routine". Here one can understand the drive to outsource and offshore software tasks. If things are well defined, and competence exists elsewhere, then price drives all.
The problem comes when you need to innovate quickly.
Football teams are more flexible. There is no equivalent to the halfback option pass in baseball. On almost every play, every player is necessary, if for no other reason than to prevent some other player from getting to the ball carrier, or the quarterback. Everybody works in parallel. Unlike baseball, everybody has to follow the coach's orders or else the team will not win. You have to train together to be effective.
How do you evaluate and train people? An individual's value is often related to how they complement the rest of the team, not only to their individual strengths and weaknesses. Play without a linebacker or a safety, and you will fail just as if you played without a quarterback. But why do quarterbacks get more money? Why do they both get more money than teachers? This is the old diamond-water paradox in economics. Here you have to reward people based on their marginal value to your team, not on some absolute scale. It is much harder to outsource, much less offshore, if you live in this world.
You still need, as Drucker points out, a score to evaluate how well the team is doing. Though as any football fan will tell you, the score does not relate well to how an individual player is doing. That is why you have to "watch film" to evaluate players, something baseball sabermetricians ( http://www-math.bgsu.edu/~albert/papers/saber.html ) do not have to do. Software managers in this world really have to understand what is going on in order to evaluate and train their people.
Finally there are tennis doubles teams (or jazz combos). There are no fixed positions, only roles to be filled by different team members at different times. This is the ultimate in flexibility and adaptability to changing circumstances. But there really has to be a fit here. How do you train and compensate in such a world, where you can succeed or fail but there is no score? It is hard to relate the end result to the individual. You certainly cannot outsource or offshore here.
As Drucker says, teams are tools, and you have to understand your environment and pick the appropriate approach. This is what management is all about.
|
|
|
|
|
Wednesday, June 30, 2004 |
|
|
One of the recently discovered Internet Explorer bugs allows malicious sites to install key stroke recording code on your system. This certainly has got a lot of press and deservedly so because of the widespread presence of IE as a browser.
Every time this happens I wonder, does open source produce more secure code? Do “more eyeballs” reviewing the code produce better code? Looking over the list of vulnerabilities in the US-CERT advisories makes me doubt that this is true.
Based on my experience, too many reviewers often make for poorer reviews. Remember the last time you had to sign off on a document with a long list of reviewers? The early reviewers glance at the document knowing more reviewers will look at it later. The later reviewers assume the early reviewers did most of the work already. The result is a lackadaisical, poor job of reviewing. You cannot tell me that the open source community is immune from the natural tendencies of human nature.
Approval by committee is no different than design by committee. Just because the committee is larger does not automatically make the review better.
|
|
|
|
|
Monday, June 14, 2004 |
|
|
Driving on the highway around Boston I was wondering about its virtual counterpart, the Information Superhighway. Massachusetts’s accident rate is the highest in the country. People mutter in frustration, “You can’t get there from here” as they navigate streets that look as if cow meanderings determined their path. Yet people and commerce move with an ease and openness that can only be imagined on the Information Superhighway.
What makes one so open and the other not? Some might proclaim “Heed the Three Opens of Modern Enterprise Architecture: Open Source, Open Standards, and Open Data.” In my mind I compared the concrete and virtual parallels.
The most obvious analogy is Open Standards. Traffic laws allow for vehicles to travel. Vehicles must be able to signal turns. Vehicles have to stay in lane. They have stop lights and backup lights. In fact any vehicle that follows these standards is allowed on the road. Much to the chagrin of many drivers, following these standards allows bicycles on the road. Standards can even allow for varying defaults. Everywhere but New York City has “right turn on red” as the default. When vehicles arrive at their destination then the work begins.
Vehicles are similar to messages. Open standards define the contents of a message and allow them to get to their proper destination. When the messages arrive at their destination, the actual work begins. Here Web Services standards (SOAP, WSDL, WS-Security, etc.) seem to have reached critical mass. While much more work needs to be done, the industry seems to understand what must be done (routing, federated security, etc.), although in some areas, such as transactions, it is not clear what the right approach is.
Applications create these messages. Viewed in this light, the dispute over Open Source does not seem as important as Open Standards. How the vehicles are built is not as important as their ability to interoperate on the open road. Yes, both the real and virtual counterparts have to be reliable and economic. You have to be able to upgrade and maintain them. But how that is accomplished is not critical to either superhighway. Some drive a BMW, others a Ford Escort. Different cars perform differently, they just have to perform. The success of the Information Superhighway does not depend on the success or failure of Open Source.
What does matter is what happens when the message or vehicle arrives at its destination. This is where commerce, recreation, or whatever occurs. In the real world, human beings can interpret the ambiguity of their interaction. To sign into a building, a security guard can judge whether the picture on your driver’s license (your federated security id) matches the person in front of them. A human can interpret the way you write out your address, or whether you put dashes or dots in your phone number. Data need not be strongly typed in the real world.
The data that moves on the Information Superhighway is different. If two applications have a different way of encoding an address, or a list of drug interactions in a data structure, these applications cannot interoperate even if they can exchange messages. Without Open Data information cannot easily move.
There is much sound and fury over Open Source, much love and singing kumbaya with Open Standards, and confusion over Open Data. Open Source and Open Standards people understand. But what is this “Open Data” concept? Look at one of the great intellectual popularity contests of our generation, Google (6/4/2004), by searching on the terms “open source”, “open standards” and “open data” and see the quality of what comes back: the first two are understood terms, the last is not.
XML by itself does not help here. A customer record, or an address, or a list of drug interactions can be encoded in any one of several possible sets of XML elements. Open Data requires XML Schema so that XML can be typed. If organizations can agree on the appropriate schemas, they will be able to transform the content of their messages into their applications' data structures.
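As a small sketch of what that agreement buys you, assume a hypothetical shared Address schema; once both parties target it, each side can deserialize the same XML straight into its own types.

```csharp
using System.IO;
using System.Xml.Serialization;

// Minimal sketch, assuming a hypothetical agreed-upon Address schema.
// With a shared schema, each organization can map the same message into
// its own application data structures.
public class Address
{
    public string Street;
    public string City;
    public string PostalCode;
}

public static class OpenDataDemo
{
    public static Address ReadAddress(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Address));
        using (StringReader reader = new StringReader(xml))
        {
            return (Address)serializer.Deserialize(reader);
        }
    }
}
```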
Open Data is the missing link to make the Information Superhighway a reality. How can you integrate business services unless you have Open Data? You can talk about Service Oriented Architecture (SOA) until you are blue in the face, but without Open Data it will all be pointless. SOA is a way to build flexible, evolvable applications, but it is the moving of data that makes the building of services a useful endeavor.
It will take a while before enterprises learn how to achieve Open Data. On the other hand, do not be overly discouraged. Our automotive superhighway was like that once. Imagine what driving across the country in the late 19th or early 20th century was like. There were only 150 miles of paved road in the US in 1903. It was an adventure. Read books such as “Horatio’s Drive: America’s First Road Trip” by Dayton Duncan and Ken Burns, or “Coast to Coast by Automobile” by Curt McConnell and compare those experiences with ours today. We tend to forget how far we came, and how long it took us.
|
|
|
|
|
Sunday, March 14, 2004 |
|
|
Over lunch the other day, a programmer mentioned that Gamma et al.'s book Design Patterns revolutionized his thinking about software development. He asked me what programming books revolutionized my thinking. I agreed that Design Patterns changed the way I think about software, but I did not consider it revolutionary.
Thinking about this later, I realized that the most important books that I had read were not about the mechanics of programming itself, or about designing, or architecting software. They were about the cultural anthropology of programmers.
Cultural anthropology studies the patterns of human behavior in areas such as language, communication, socialization, relationships, and politics. Cultural anthropologists try to relate the organization of a person’s mind to their behavior. Books that helped me understand the programmer’s mind, and how it relates to behavior, have revolutionized the way I think about software development.
The success or failure of the programming projects that I have been part of usually has less to do with the technology than with the patterns of human relationships. One of my favorite examples is the software project whose architecture followed the corporate structure: the application groups designed the applications, the graphics group the graphics, the database group the database, and the user interface group the user interface. The resulting program was like feudal Europe; it did not work well together.
Here, in no particular order, are the books that have really influenced me.
The Mythical Man-Month by Frederick P. Brooks - pure wisdom about why software projects succeed or fail. See my review of it here.
The Psychology of Computer Programming by Gerald M. Weinberg - the first book that got me to see programming as a human activity, and why understanding human behavior is important to understanding how to build better software.
Both these books have been republished with new material. Get the latest editions.
Peopleware by Tom DeMarco and Timothy Lister - a great book on the workplace and software teams. This is a book your supervisor should read as well. There is a second edition of the book, but I have not read it.
Donald E. Knuth wrote two essays that strongly influenced me. While they are dated, and I do not recommend them as strongly as the others, I feel obliged to mention them. The first, "Structured Programming with goto statements", written in 1974, made me realize the importance of thinking about software structure rather than language constructs. The point of the article is not that gotos are great things, but that the correct level of abstraction is critical to writing good programs. The second, “Literate Programming”, written in 1984, got me to realize that software programs could be written clearly using literary concepts. Programs that are clear to human readers are better programs because they are clear about what they want to accomplish. Both essays have been reprinted in the book Literate Programming.
|
|
|
|
|
Sunday, February 29, 2004 |
|
|
When the speakers on the .NET track of the Syscon Edge 2004 conference got together, Carl Franklin and I were talking about why people think that C# is the "official language" for .NET. I told him that even though most of my consulting is in C#, I think that attitude is wrong. I believe it is important to elaborate why I feel this way.
People who feel that VB.NET is an inferior language to C#, or that somehow C# is a "better language" or the "official language" for accessing the .NET Framework Class Library, are just plain wrong. My personal opinion is that I prefer C# to VB.NET because, among other things, I like its compact syntax; but that is a personal judgement.
People who talk that way about VB.NET are confusing three issues.
First: suitability for accessing the Framework Class Library (FCL). Every example in my book "Application Development Using C# and .NET" has been translated into VB.NET and works exactly the same way. I have used the same courseware for both C# training and VB.NET training, the only difference being that the examples were in different languages. From the point of view of the FCL, everything C# can do, VB.NET can do as well.
Second: suitability to a given task. Equality before the FCL, or the Common Language Runtime, is not everything. Perl.NET can do things that C# cannot. Does that make Perl.NET a better language than C#? No. It just makes it a better choice in some cases. If you need to use unsafe mode, you need C#. You cannot overload operators in VB.NET. You might find VB.NET's late binding feature more convenient than using the reflection API in C# (a small sketch of the C# side appears below). You might like background compilation in VB.NET. It is possible that for certain features the IL that C# generates is more efficient than the IL that VB.NET generates. I do not know if this is true, but even if it is, it probably does not matter for most applications. After all, in some performance situations managed C++ is better than C#. For people interested in the differences between the languages, look at O'Reilly's C# and VB.NET Conversion pocket reference.
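To illustrate the late-binding trade-off, here is a minimal C# sketch; the Greeter class and its Greet method are hypothetical, invented only for this example. In VB.NET with Option Strict Off you can simply declare the variable As Object and call target.Greet("world"), and the runtime resolves the call for you; in C# you spell the same late-bound call out through the reflection API.

    using System;
    using System.Reflection;

    public class Greeter
    {
        public string Greet(string name)
        {
            return "Hello, " + name;
        }
    }

    public class LateBindingSketch
    {
        public static void Main()
        {
            // Pretend we only discover at run time that this object has a Greet method.
            object target = new Greeter();

            // The late-bound call, spelled out with the reflection API.
            MethodInfo method = target.GetType().GetMethod("Greet");
            object result = method.Invoke(target, new object[] { "world" });

            Console.WriteLine(result);  // Hello, world
        }
    }

Neither approach is better in the abstract; the two languages simply make different things easy by default.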
Finally: de gustibus non disputandum est; there are matters of personal preference. I like C#'s compactness. I think it has certain advantages, but that is a matter of taste. Taste is important even in technical matters, but do not confuse taste with other factors, or mistake taste for intuition.
I wish VB.NET programmers a long and productive life. VB.NET programmers should not feel inferior.
|
|
|
|
|
|
|
|