I just got back from Denver and the Rocky Mountain Tech Trifecta v3.  I will post more about the trip soon, but for now you can download my slides and examples in the links below.

Even though this is more than just a tech blog, I don't often post content of a political nature, but my train of thought has been so surprisingly close to this quote that I could hardly believe it when I saw it.

I just read this in a comment on an article about parts of the health care law being found unconstitutional. I think it's awesome, and it sums up what I try to tell people when they ask why I am a Republican:

"You cannot legislate the poor into freedom by legislating the wealthy out of freedom. What one person receives without working for, another person must work for without receiving. The government cannot give to anybody anything that the government does not first take from somebody else. When half of the people get the idea that they do not have to work because the other half is going to take care of them, and when the other half gets the idea that it does no good to work because somebody else is going to get what they work for, that my dear friend, is about the end of any nation. You cannot multiply wealth by dividing it."

-- Dr. Adrian Rogers 

I believe myself to be a very middle-of-the-road person when it comes to welfare policy.  I believe that some welfare is absolutely necessary. After all, if you see government as a community organization that has an interest in making investments for the future benefit of the community, then keeping people from falling all the way to rock bottom is in our best interest.  At some point in our lives almost anyone could be found in a free-fall situation, and keeping us from falling all of the way to the bottom before we can begin to rebound would be a very smart investment. The problem is that the governing philosophy does not reflect the proper values. Let me explain.

Every person needs a safety net.  A safety net is defined as the people and resources that can come to your rescue.  The first and built-in safety net is our family.  If you needed to move back home because of unemployment, for example, it is very likely that your parents would be willing to share their resources and assets to help you out in this situation.  What if you don't have any family, or your family is in the same situation?  The community should, in my opinion, be your next safety net.  Depending on where you live you may know your neighbors really well or you may not know them at all.  I think we should know our neighbors very well and should be the kind of community that can help neighbors out when they go through a tough time.  After all, having a neighbor check in on you when you're sick is the next best thing to a family member. We should also seek to become this kind of neighbor.

Lastly, if a person has no family safety net and no community safety net, then there should be a government safety net.  The trick is to not think of the government as an independently wealthy entity that owes you goods and services, but rather as the collective resources of your community pooled together for things that are essential and needful. It should also be your last line of support. I think it can be easy to forget that what you demand of your government you are really demanding of your neighbors. With this philosophy in mind, perhaps the welfare system is not quite in line with what really is best for all.

For example, if someone relies on the welfare system to pay unemployment benefits, then perhaps this philosophy could help.  First, the benefits that are paid should probably be closer to what that person was making when they were working.  For the short term that will save the person from huge financial problems and help them transition easily into a new job. Once at the new job they can begin paying back the system they relied on, using fair no-gain payment arrangements that could be adjusted so they are repaid easily over a long period of time. This would also remedy the stigma of hiring the unemployed -- that somehow they have baggage that management prefers not to get involved in.

Obviously this strategy only works well for those who are unemployed short-term.  For the long-term unemployed we would probably need to take a much different strategy.  First we would need to assess whether the person has an aptitude for something better than what they were doing.  If they are long-term unemployed then it's quite likely that there will not be any new jobs that want to hire that person with their existing qualifications.  Offer them a grant or loan to attend training to receive qualifications that should not only help them find a job but help them find a better job. At the same time get them working -- at least part time.  This part could be tricky, but it is important.  Finding work for someone who can't find work may seem like an impossible idea, but consider that there are some jobs that almost anyone can do, even if they have to work for the government filing papers. There is always this kind of work that needs to be done.

Perhaps you were a programmer and can't find work programming.  No problem! Take some tech support calls for this company while you go to school to upgrade your Fortran 77 skills at the local community college.  No, you probably can't make a living doing the tech support calls, but if your unemployment benefits helped out a little during this transitional phase then you would be fine! When you're done you would be in a much better place to be able to contribute back to the community. 

In some cases a person will need to draw much more from the support network than they could possibly repay.  In these cases they should not be subject to crushing financial obligations -- although some should be repaid when possible.  Just like anything else, not every investment will yield a return, and steps will have to be taken to minimize the number of losses, but I think that a thoughtful implementation of a program along these lines could go a long way.  It would allow people to reach their full potential and not reward those who are content to live off of hand-outs.

Then, once we have figured out that the government is in the business of investing in its people, and people realize that the government is really a group of their neighbors and community, then I think we can figure out better solutions to a whole host of problems.  The term "entitlement programs" would evaporate and be replaced with "community support programs" or "perpetual support programs" -- perpetual in the sense that once you have been a beneficiary you can replenish the resources you used so another can also benefit.

Imagine a world where our payroll taxes were a hundredth of what they are now. You may be paying more taxes from time to time, but overall there would be very few instances of people using the government like they would their rich uncle. If you look at where the money is going right now you would realize that this is absolutely possible, and in order for our country to survive as a democratic republic long-term, absolutely necessary!

Not to put too fine a point on it, but pretty much everyone who lives in the U.S. will admit that it's the greatest country in the world and there isn't anywhere else quite like it. At the same time they argue for more and more social programs (the health care legislation being one of those) at the expense of "the rich". They probably don't realize that even if we taxed the rich on 100% of everything they earn it would be woefully short of what would be needed to sustain these programs. The way we lose democracy starts out like this: a well-meaning program to help the "have-nots" takes from the "haves" and starts a class war. Since more of us are poor than rich, we vote into government those who are willing to wage this war.  When the money is gone, and it will go fast, society will start to collapse.  I may not have all of the specifics of how exactly this will happen, but the eventuality is that we'll either have the government stop promising people things it just can't deliver or we will slowly move from democracy to socialism and then to communism.

What's so bad about socialism and communism? It seems to work for billions of people, right? WRONG! There is nothing to envy! While a certain country may or may not become the new economic power, it does not help the majority of its citizens. It doesn't suppress freedoms because it's so powerful it needs something to do; it does so because it must.  People are not a potential investment, they are a liability.

I learned long ago while studying Computer Science that if the underlying architecture of something is flawed, then anything built on that architecture will be brittle, burdensome, and end up costing a lot more in the long run than switching to the appropriate architecture.  Just like in Computer Science, if we build social policy on a bad underlying philosophy then we can expect the same kind of results.

 


HTC-HD3-mockup

Microsoft announced recently that Windows Phone 7 has RTM'd to manufacturers.  The developer tools will be released September 16th, 2010, and the new marketplace is expected to open the first week of October.

I plan to be standing outside of the T-Mobile store, iPhone style, to get my hands on this new hardware.  I may not be getting the first model to be released because I really want the rumored HTC HD3 or HD7 or whatever they plan to call it. 

While my friend has recently acquired an Android phone and I currently have an iPhone, I am really excited about this new offering.  Below is an explanation of why.

 

Reasons I want a Windows Phone 7 device:

  • The number 1 reason is the development experience.  Windows Phone 7 allows tools that we use to build everyday applications to be used to build apps for the phone with very few (if any) modifications.  In fact, there are over 300,000 downloads of the beta developer tools so far! I am used to writing quick little apps on the desktop when I need something.  I like that I’ll be able to do the same thing with my WP7!
  • I have a *lot* of code already written in C# that I can use!  In fact, I am using a library I wrote years ago for one of my apps. That is a lot of stuff I don't have to redo just because I am targeting a mobile device.
  • Did I mention that I got really far into the development of my first WP7 app in just a couple of hours?  The productivity is amazing!  Even sending the app to the emulator is lightning fast!  The developer story for this device couldn't be better!
  • Microsoft has made it very easy for software houses to port games using XNA to WP7.  I expect the gaming experience on WP7 to be fantastic!  I mean, it supports DirectX 9!  Another thing to consider is that people tend to downplay the Xbox Live part of the phone, but it really could take off.  Challenging people to games, for example, is all the rage on Facebook, which has a huge gaming presence.  This same kind of interaction takes place on Xbox Live and could catch fire, and that would make the devices hugely popular.
  • FM Tuner – This goes a little bit to the “iPhone is not enough” feeling I’ve been having, but it sure would be nice to have a HD FM Tuner on my phone.
  • The hardware on the Rumored HD3 is freakin’ Amazing!
  • My iPhone is good.  My iPhone is nice.  My iPhone has lots of apps, but I still don't have a good user experience working with Word, Excel, etc. I'm really not impressed with its capability as a "business device".  I can't save files to disk, I can't insert an external flash card, I can't share my WiFi via hotspot or tethering, etc.  There are just a lot of "can'ts" with the iPhone.
  • My last dig on the iPhone is AT&T.  I currently have a jailbroken 3G, but I'm reluctant to update the OS beyond the 3.0 OS I currently have.  I'm not even sure it will let me go to 4, but I can't risk losing my jailbreak.  More and more applications will not run on my phone and Apple is really trying to kick us out! I can either fight it or take the hint.  I'd rather get an Android phone than switch to AT&T!
  • Android phones are sure capable and functional, but sexy they ain't! Their app store is starting to rival Apple's and the development story isn't too bad.  My criticism is that their touch isn't nearly as good as the iPhone or Zune (which WP7 is based on) and the user interface seems slow.  Mind you, I've only played with 3 real devices and 1 emulator, but everything I see coming from WP7 looks super fast!
  • Another good / bad for Android is the OS.  The Android OS is open source and rooting the device is straightforward and commonplace.  The bad is that it seems like Google leaves a lot of their "old generation" phones out in the cold when it comes to OS enhancements.  If you buy a brand new phone today, there is every reason to believe that the next version of the OS won't work on your phone.  There is absolutely no guarantee that Microsoft will behave any differently, but it probably couldn't be any worse than Google for the upgrade story.
  • I believe that the missing features of the OS (Copy & Paste / Multitasking) will be delivered very soon.
  • I kind of like the idea of a fresh-faced marketplace with nothing on there yet.  As a developer / software engineer, there is more opportunity there than in, say, the iPhone marketplace, which has been going for years.

So as you can tell I am hugely excited!  My only problem now is choosing which application to finish first and how much time to spend on them in any given weekend! :) I WANT TO GET MY NEW WINDOWS PHONE 7 TODAY!  I WANT IT! I WANT IT! I WANT IT!  Err, um, I guess developing applications on the emulator will have to distract me until then!

Cd_caddies_JPG

I bought a digital camera for our family back in 2005.  At the time I bought it everyone else I knew already had one and had been using it for many years.  It is now 5 years later and we have decided that after some close calls we need a better system to store our invaluable data!  After all it is impossible to re-create those photos.  While the Grand Canyon and Yellowstone National Park are not going anywhere in our lifetime, our family changes – fast!  If you don’t have lots of photos you can forget a bunch of the fun times you have together.

If you ask anyone two simple questions, "Have you ever lost digital photos?" and "What do you do to archive your digital photography?", you will almost always get a "Yes" and at least one of the following:

  • Printing the Photos
  • Burning a CD/DVD with the Photos
  • External Hard Disk
  • Online Backup
  • Scrapbook / Photo books

The truth is that only a combination of these techniques is going to work, nothing is 100% guaranteed, and it is surprising how many people have lost so many photos! The phenomenon of losing data, or of the format the data is stored in becoming obsolete, is called the Digital Dark Age. Storing large amounts of data for long periods of time is tricky business. We'll discuss each approach separately before discussing a combined approach.

Printing Your Photos

Paper lasts forever, right?  Not exactly!  Did you know that the Declaration of Independence has to be kept in a case made of titanium and aluminum filled with argon gas to help preserve the document? Artwork has a similar problem.  White turns to yellow and the colors either fade or darken. Photos from your childhood are yellowing at an alarming rate!  Go back and look at your chemical prints and see how much they have yellowed.  You may be surprised!  Generally, no matter what the physical media, the culprits are acids/chemicals, air/oxygen, and heat/light.  You can get acid-free paper and store it in an acid-free box, but it's likely you still have acid and chemical exposure from external sources such as the oil in your fingerprints.  Even this will not keep chemical photos from aging, because the very process used to develop these pictures adds acids and chemicals that will eventually destroy the photos.  Photos that are printed using ink suffer from these same problems.  There are special inks that resist aging, but in general most still either fade or are damaged by water and humidity.

Another noteworthy item is that this archival method cannot save the video snippets that most point-and-shoot digital cameras capture nowadays.

Burning a CD/DVD with the Photos

I burned my first CD back in the summer of 1995!  It was a CD of MP3s (pretty new at the time) and was one of my favorite possessions!  It was burned on a CD writer that had an internal HDD to help with buffer underruns (a very common problem; I lost more CDs to buffer underruns than I successfully wrote back then).  It was burned at 1x speed and took an additional hour to verify the data was written correctly.  I had to drive miles and miles to find a Circuit City that carried writable CD media! That CD lasted until about 2008, at which time it became unreadable – despite optimal storage conditions. That is a lifespan of about 13 years.  This was a gold CD and was very high quality.  Low-quality CDs will not last as long.  Many of them have a lifespan of only 1-3 years! Word to the wise – DO NOT CHEAP OUT ON ARCHIVAL OPTICAL MEDIA!  Just last month we had a scare where the CD for the August 2009 photos wouldn't read.  Luckily we had a backup and did not lose those photos and videos! That disk was only 1 year old!  If you think that writing your photos to CD/DVD is going to preserve your memories forever, I have news for you – that ain't going to work, Alice!

Keeping in mind that typical higher-quality CD media only has an average lifespan of 5-10 years in optimal conditions, knowing which CD-R/DVD-R is the highest quality is not simply a matter of picking the best brand. There are some CD/DVD media that are made for archival.  One company claims that their CD media can last 300 years and that their DVD media can last 100 years. This is obviously a theoretical average using the ISO 18927-2002 guidelines. Personally I'll replace these disks every 10 years, but at least I know I can go 10 years without a lot of worry about losing a few disks. It is also recommended that data stored on optical disks not be kept in a compressed format (like zip), because if there are disk errors the contents of a compressed archive are completely gone, whereas uncompressed files could still be recovered individually.

External Hard Disk Drive

External hard disks are the best way to back up modern computers with terabytes of data. They are fast and the disk failure rate has gone down significantly. Simply having an extra copy of your data is a valid data backup strategy, but it's not really an archival strategy.  Eventually that disk will fail AND your computer disk will also fail! They are mechanical devices and simply cannot last forever.  When these disks fail, they usually fail in a spectacular way, often leading to large expenses to recover data and the possibility that not all of the data can be recovered.

An alternative to traditional disk drives is the SSD.  SSDs (Solid State Drives) have no moving parts and will theoretically last a lot longer.  They have the added advantage of failing on write rather than read; that means when these disks do fail, it's usually because they cannot write. SSDs are a newer technology and do have some problems.  For instance, a cell can only be written so many times before it cannot be re-written.  This is exacerbated by write amplification, where the SSD's internal block is larger than the file system's block; each time any part of that block needs to change, the entire block must be re-written. There is some new technology to help overcome these limitations, but all things need to be considered in a data archival scenario.  For my money though, I'll probably buy a 128GB SSD for our "secondary" backup device. It may be pricey now, but when we fill it up the next one we buy will probably be much larger for the same price.

If you choose to use an SSD for archival there are some things you need to be aware of.  First, while the data on the drive can last a very long time, it will eventually lose its charge if the drive is not plugged in from time to time. The amount of time a drive can sit unplugged and retain its data is said to be about 10 years, which is a fairly long time.  Still, if you fill a drive up and are no longer writing to it every month, you need to devise a scheme to plug it in every once in a while.  It's not clear to me yet if the data actually needs to be re-written in order to prolong its longevity, but I hope to figure that out soon.  Another consideration is that each time you write to a cell the shelf life of that cell decreases some small amount.  This means that it is optimal to use this device for archival only and resist the temptation to use it to double the speed of your computer. :)  Last, there is a difference between SLC memory and MLC memory.  SLC has more write cycles but is quickly losing favor to MLC because MLC is less expensive to manufacture. Again, I'm not sure which one makes a better archival drive, but I'll post here when I find that out.

Online Backup

With online backup services like Mozy and Carbonite combined with inexpensive broadband Internet, it’s easy to see why this will be a good alternative for some.  Broadband internet would be a must in this case, but if you had this setup you would also be protected in case of a fire. The only downside to the online backup is the question about what happens if the company goes out of business and/or if you were unable to keep paying the annual premium.  A lesser issue might be that if you did need to restore your backup it could be difficult to retrieve all of that data. All things to consider when you devise your archival strategy.

Scrapbooks / Photobooks

My wife really likes to do digital scrapbooking.  She has been working on creating a photo book each year of our favorite pictures from that year.  We would typically upload the pre-finished scrapbook images to Shutterfly or some other online photobook website. We really love these books and while they are time consuming and expensive we’ll continue to produce them because this is how we enjoy our images.  Our favorite part is that once our kids grow up we can make a special book for them with all of their photos and/or order another family photo book for them. 

I have zero photos of myself when I was a kid!  It is our goal to scan all of our childhood photos before they fade too much.  We have already done our wedding photos using our fancy HP Scanjet G4050 Photo Scanner (with transparency attachment for scanning negatives) and will eventually spend a lot of time at our parents' and siblings' houses scanning photographs.

These photobooks are not really a backup of our images.  If you were to scan one of these images back in you would not get a good enough resolution to print it again or order a larger print. And again, they do nothing to preserve the home videos to which we have equal attachment.

Conclusion

We have decided that we are going to back up our images monthly on both an SSD device and special archival CD/DVDs.  Some months we only need a CD to back up our data and other months we need a DVD – mostly because of movies. We also hope to be able to get a new DSLR camera, and the space required to store those images will be at least double the output of our current 6MP camera. After 10 years we will make a backup for the current month and also a fresh copy of the backup from 10 years earlier.  We haven't figured out 100% how we're going to deal with the scenario where there is a fire; it will likely be something like storing copies at a relative's house or in a safe deposit box.

The important thing is to make a plan for your data and get it implemented!
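To make the "get it implemented" part concrete, here is a tiny, illustrative sketch of the copy-and-verify step of such a plan in C#: it copies one month's photos to an archive folder and records an MD5 manifest so a future read-back can be checked. The paths and file names are made up for the example.

// Illustrative sketch only -- paths and names are hypothetical.
using System;
using System.IO;
using System.Security.Cryptography;

class PhotoBackup
{
    static void Main()
    {
        string source = @"C:\Photos\2010-08";            // this month's photos
        string target = @"E:\PhotoArchive\2010-08";      // backup destination
        Directory.CreateDirectory(target);

        using (StreamWriter manifest = new StreamWriter(Path.Combine(target, "manifest.md5")))
        using (MD5 md5 = MD5.Create())
        {
            foreach (string file in Directory.GetFiles(source))
            {
                string dest = Path.Combine(target, Path.GetFileName(file));
                File.Copy(file, dest, false);             // copy, never overwrite

                // Record a hash so the archive can be verified years from now.
                using (FileStream fs = File.OpenRead(dest))
                    manifest.WriteLine("{0}  {1}",
                        BitConverter.ToString(md5.ComputeHash(fs)).Replace("-", ""),
                        Path.GetFileName(dest));
            }
        }
    }
}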

programmingJoke

I was having a bad day at the office a couple of weeks ago.  One of the things that makes me feel a little better is to look up some programming jokes.  I really am a programmer at heart and a nerdy one at that. 

So before I was about to endure the drudgery of the commute home I searched for programming jokes and read a few.  There are some great programming jokes out there!  I especially like the ones about C++ and UNIX.  For example, "Unix is user friendly, it's just particular about who its friends are."

Anyway, I was looking at this image and kind of chuckled.

Happy1

After reading this screenshot I briefly thought to myself, "There is no way a DOS prompt nowadays would say that, but I wonder what it does say." I decided to give it a try, so I hit <WinKey>+R, typed CMD, and pressed <Enter>.  I was having such a bad day that even typing the word "Happy" was difficult, but the result made me laugh so hard I could hardly stay on my chair:

Happy2

I suppose it was a little perfect!  "Happy" was unexpected at this time! By the way, 100 points to anyone who can convincingly explain why it does that other than "Some programmer at Microsoft thought it would be really funny!".  If you type "If" or "If Your" into a command prompt it responds "The syntax of the command is incorrect." which is kind of what I expected.  If you type just "your" or "happy" you get the all too familiar error "'your' is not recognized as an internal or external command, operable program or batch file."

Just for fun here is another programming joke – The color of 6 letter words.  I rather like the color of “Cashed”.

rgbwords

Communication

The Model-View-ViewModel (M-V-VM or MVVM) pattern has been widely adopted by those who build WPF and Silverlight applications.  WPF and Silverlight have such strength in data binding that there is no real need to have any code-behind in a view.  It is sometimes difficult to get to this point, but once you get there the application becomes much more maintainable and testable.

This is because of a clear separation of concerns, where the View's responsibility (concern) is to display UI controls that are bound to real data.  The ViewModel's responsibility is to represent the data in a way that the View can bind to and to produce a model object as a result.  The Model is responsible for validating data and is used to persist back to the database.

While this greatly simplifies a single view, in practice many views/view models need to work together.  The controller is supposed to help provide some of this cross-view-model communication.  It’s actually a myth that MVVM architecture does not include (or need) a controller.  At some point there has to be an object that creates a view-model and opens up a view and by definition that object is the controller.

MVC Pattern

MVP Pattern

MVVM Pattern

One simple answer to the problem of communication between different parts of the application is to use the Mediator Pattern.  I am by no means the first to suggest the mediator pattern as a possible solution to this problem.  Josh Smith, among others, suggested it in a past blog post. The implementation I built is a little unique in that it does not make use of a string or an enum to determine message type. 

This implementation allows:

  1. Views to communicate with the controller in a static-typed way.
  2. Views to communicate with other views in a static-typed way.
  3. Messages to be sent to all ICommunication classes that are listening for a specific message type.
  4. Messages to be sent to all ICommunication classes based on the host classes type.
  5. Messages to be sent to a specific object (callback).

CS Mediator

But we'll get to that; let's start at the beginning. First, each class that wishes to participate in communication (ViewModels and the Controller) must implement the ICommunication interface.

public interface ICommunication
{
    void ReceiveMessage(ICommunication sender, Message message);
}

The reason all participants must implement this interface is that we want to make it possible for a ViewModelBase to implement the plumbing that is shared between all view models.  ReceiveMessage also acts as a "catch-all" for messages that are not handled by a more specific subscription.
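As a rough illustration (this is my sketch, not the downloadable source), such a base class might look like this:

// Hypothetical sketch: a base class gives every view model a default handler so
// derived classes only override ReceiveMessage when they care about a message.
public abstract class ViewModelBase : ICommunication
{
    // Catch-all for messages not handled by a more specific subscription.
    public virtual void ReceiveMessage(ICommunication sender, Message message)
    {
        // Default: ignore. Derived view models override as needed.
    }
}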

This implementation of the Mediator pattern is based on types.  This means that the type of message we send determines how the message is routed.  For example, consider the following Message types.

public abstract class Message { }

public abstract class DataMessage : Message { }

public abstract class ActionMessage : Message { }

public abstract class StatusMessage : Message { }

public class StatusUpdateMessage : StatusMessage
{
    public string ProgressAction { get; set; }
    public string ProgressText { get; set; }

    /// <summary>
    /// The value to set the progress bar.
    /// A negative value hides the progress bar.
    /// </summary>
    public sbyte ProgressBarValue { get; set; }
}

public class CloseWindowActionMessage : ActionMessage
{
    public WorkspaceViewModelBase Window { get; set; }

    public CloseWindowActionMessage(WorkspaceViewModelBase value)
    {
        Window = value;
    }
}

public class OpenWindowMessage : ActionMessage
{
    public string ViewName { get; set; }
}

public class OpenWindowMessage<T> : OpenWindowMessage
{
    public T WindowParameter { get; set; }
}

You can see that they all derive from the abstract base class Message, but there are different taxonomies for different kinds of messages.  For example, if I want to open a new window and I want to pass a parameter to that window to open it, the type I would use is OpenWindowMessage<Fruit> which derives from OpenWindowMessage.  OpenWindowMessage contains the name of the window we want to open.  OpenWindowMessage is also an ActionMessage which tells the controller it wants it to do something. ActionMessage simply derives from Message.

Therefore to send a message to open a FruitWindowView with a Banana instance we would call:

// Send a message to open up a new view
Controller.SendMessage<OpenWindowMessage<Banana>>(this,
    new OpenWindowMessage<Banana>() { ViewName = "Edit Fruit", WindowParameter = SelectedBanana });

The Controller will receive an OpenWindowMessage and act accordingly.  In fact, at this point you may be curious how a subscriber requests these messages.  Here is an excerpt from the Controller:

// Setup Messaging
Controller.Subscribe<ActionMessage>(this, (MessageDelegate<ActionMessage>)((sender, message) =>
{
    if (message is CloseWindowActionMessage)
        Workspaces.Remove(((CloseWindowActionMessage)message).Window);

    if (message is OpenWindowMessage)
    {
        OpenWindowMessage openMessage = message as OpenWindowMessage;
        if (openMessage.ViewName == "Edit Fruit")
        {
            AddJobRun(((OpenWindowMessage<Fruit>)openMessage).WindowParameter);
        }
    }
}));

You can see in this example that the controller is opting to subscribe to any messages that are ActionMessage.  This includes messages that are ActionMessage, OpenWindowMessage, or CloseWindowActionMessage, as they all inherit from ActionMessage.  You can subscribe to messages at any level.  If we wanted a specific implementation to handle OpenWindowMessage<Peanuts>, we can register a message handler to execute specific code when that message is received.  The great part is that, because this is all static-typed, our message handler gets full use of code completion features and we do not have to cast our object.
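For instance, a more specific subscription might look something like the following sketch. It is hypothetical, but it mirrors the Subscribe call shown above; AddJobRun is the controller method from the excerpt.

// Hypothetical sketch: subscribing at a more specific level of the hierarchy.
// Because the subscription is typed, no casting is needed inside the handler.
Controller.Subscribe<OpenWindowMessage<Fruit>>(this,
    (MessageDelegate<OpenWindowMessage<Fruit>>)((sender, message) =>
    {
        // message is already an OpenWindowMessage<Fruit>
        AddJobRun(message.WindowParameter);
    }));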

You can start to see that it is going to be very important to keep the types of messages clear, clean, and concise.  That’s why I didn’t create an OpenFruitWindowMessage, because an OpenWindowMessage<T> should work for any window that requires a parameter.

Here is the interface for IMediator, which is generally implemented by a Mediator class and is used by the controller.

/// <summary>
/// Subscribe to listen for specific types of messages
/// </summary>
/// <typeparam name="T">The message type to listen for</typeparam>
/// <param name="self">The class setting up the event delegate callback</param>
/// <param name="eventDelegate">The event delegate to receive the messages when they arrive</param>
void Subscribe<T>(ICommunication self, MessageDelegate<T> eventDelegate) where T : Message;

/// <summary>
/// Subscribe to listen for specific types of messages
/// </summary>
/// <param name="self">The class setting up the event delegate callback</param>
/// <param name="eventDelegate">The event delegate to receive the messages when they arrive</param>
void Subscribe(ICommunication self, MessageDelegate<Message> eventDelegate);

/// <summary>
/// Remove a ICommunication object from all communication activity
/// </summary>
/// <param name="self">The ICommunication object you wish to unsubscribe</param>
void UnSubscribeAll(ICommunication self);

// Send Message
/// <summary>
/// Sends a message to any other ICommunication object listening for this message type
/// </summary>
/// <typeparam name="T">The type of message you are sending</typeparam>
/// <param name="self">The sender's ICommunication object</param>
/// <param name="value">The message to send</param>
void SendMessage<T>(ICommunication self, T value) where T : Message;

/// <summary>
/// Sends a message to a specific ICommunication object listening for this message type
/// </summary>
/// <typeparam name="T">The type of message you are sending</typeparam>
/// <param name="self">The sender's ICommunication object</param>
/// <param name="recipient">The direct recipient to send the message to</param>
/// <param name="value">The message to send</param>
void SendMessage<T>(ICommunication self, ICommunication recipient, T value) where T : Message;

/// <summary>
/// Sends a message of a particular type to any other ICommunication object that is of a particular type.
/// </summary>
/// <typeparam name="T1">The type of message to send</typeparam>
/// <typeparam name="T2">The type of object to send the message to</typeparam>
/// <param name="self">The sender's ICommunication object</param>
/// <param name="value">The message to send</param>
void SendMessage<T1, T2>(ICommunication self, T1 value) where T1 : Message where T2 : ICommunication;

Final Notes

You will want to manage the lifecycle of your ViewModels: Subscribe to messages when the ViewModel is created and call UnSubscribeAll when it's about to go away.  Because the mediator holds a weak reference it won't prevent garbage collection, but if the class has not yet been garbage collected and it receives a message it may throw an exception.
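A minimal sketch of that lifecycle, assuming a view model that holds a mediator reference and has some cleanup hook of its own (all names here are illustrative):

// Hypothetical lifecycle sketch -- adapt to however your view models are created and closed.
public class StatusBarViewModel : ICommunication
{
    private readonly IMediator _mediator;

    public StatusBarViewModel(IMediator mediator)
    {
        _mediator = mediator;
        // Subscribe as soon as the view model exists.
        _mediator.Subscribe<StatusMessage>(this,
            (MessageDelegate<StatusMessage>)((sender, message) => { /* update progress UI */ }));
    }

    // Catch-all for anything without a more specific handler.
    public void ReceiveMessage(ICommunication sender, Message message) { }

    // Call this when the view model is about to go away.
    public void Cleanup()
    {
        _mediator.UnSubscribeAll(this);
    }
}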

You can download the source files here; they consist of 5 CS files. Note: this solution enlists the help of C# 4 dynamic. As such, the implementation in CSMediator.cs will only work under the .NET 4 framework. It may be impossible to back-port to an earlier version, but if you manage it, please send me the code! It's also great to hear from people who find it useful or have suggestions.

  • ICommunication.cs – Required to implement for all objects that wish to participate in communication
  • IMediator.cs – The interface for the mediator pattern
  • Message.cs – The various types of messages that are pre-defined. You will want to change and configure this file.
  • MessageRoute.cs – A helper class that keeps track of actions, owners, and registration information.
  • CSMediator.cs – This is the main implementation of the Mediator pattern and the IMediator interface.  Ideally, your Controller will create an instance of this class as part of its lifecycle and provide each ViewModel with a reference during their lifecycles.
References:

Silverlight Logo Small

This past weekend I got to fly to Denver and present at the Rocky Mountain Tech Trifecta v2.  It was a lot of fun!  I can see why people get addicted to code camp!

My presentation was on Silverlight 4 and some of the new features in this release.  There were so many new features that I couldn't possibly cover them all, but I did hit quite a few.

Here is the screen cast.

I got to Denver on Friday afternoon and rented a car.  The car rental place was a mistake…I could see that almost from the get-go.  I saw a shuttle for every other rental car place before I saw one for E-Z Rent-a-car.  I waited only about 20 minutes and I was on my way there.  When I got there, I passed big shiny car lots, and then at the end of the big long line was a run-down looking building where my car was waiting!  Yay!  Good thing I found a place that was $5 cheaper than the rest.  Undeterred – I'll do almost anything for a good deal – I went in and got my car.  It felt more like a used car dealer than a rental car place.  The guy at the counter said that I should note any damage on a piece of paper, then he ripped it off and took the half they keep, so I didn't really get a chance to inspect the car. There also seemed to be a lot of pressure to buy the rental insurance (even though your insurance carrier covers you for liability).

The car had other problems too.  It was, by my reckoning, a couple of years old and was in pretty good condition for that age, but the dashboard said that there was a tire pressure gauge fault, and I couldn't see my mileage or trip meter, which is really useful when you're following directions that include mileage.  It also made some pretty ugly noises at freeway speeds.  This was not the last of the car issues, but more on that later.

I stayed at the Comfort Inn about 10 miles from downtown.  It was pretty nice.  The room was a bit on the small side and the desk chair was uncomfortable, but the bed was VERY comfortable and the bedding was very clean and fresh.

IMG_0194[1] IMG_0197[1] IMG_0196[1] IMG_0206[1]

One funny thing was the flagpole outside my window.  Check this out!

IMG_0199[1]

Notice a flag missing?  Notice that thing in the tree?  That's right.  The flag for the great state of Colorado is stuck in a tree!  I also had bunnies outside of my window so it wasn't all bad.

There was a party the night before the event for the presenters.  It was lots of fun!

IMG_0200[1]

The Tech Trifecta is huge!  Almost 4 times as large as the Utah Code Camp.  It was a bit annoying that there wasn't enough room for the Silverlight birds-of-a-feather session and there wasn't enough room in the keynote.  The only other complaint I had was that it seemed like Session 4 had a bunch of the Microsoft technologies bunched together, so in the earlier sessions you had to pick between talks you only kind of wanted to see, and in the later ones between talks you really wanted to see.

IMG_0207[1] IMG_0208[1]

IMG_0209[1] IMG_0210[1]

IMG_0211[1] IMG_0212[1]

Right after I finished my presentation I had to head to the airport.  The rental car place will charge you $5 per gallon of gas, so you need to bring it back full!  I figured I would take an exit or two before the airport and fill it up there.  I took the exit before the airport and drove over 10 miles before I saw a gas station!  No kidding!  I was starting to think that maybe cars in Denver were powered by batteries or something!  Needless to say it was an unplanned detour that added 20 minutes to my trip to the airport.  It was only then that I saw there was a gas station right next to the airport.  Live & learn!

Here is where things get really fun!  I returned the car to the rental place, where they inspected the car post haste.  The lady handed me the inspection slip and I went inside.  That's where I was told that somehow I must have scratched the rear bumper!  I was VERY ANGRY!!  There was no way that I damaged that car even a little bit!  I guess that's what you get when you don't buy the all-but-mandatory insurance.  It took me another 20 minutes to put up enough of a fight that they waived the damage charges.  I now had less than 1 hour to get to the airport.  The shuttle driver sure took his sweet time and then dropped me off in the wrong wing.

I went to check in with Delta and it wouldn't let me check in (because my flight was within 45 minutes), so I had to wait at the ticket counter.  The lady was going to reschedule my flight, as it was leaving in just 30 minutes and I was departing from the C-gate.  She eventually agreed to give me the boarding pass and told me to RUN to the gate.  The rest of the passengers had already boarded.  I ran to get through security and picked the shortest of the 6 screening lines.  It wasn't really one of those things where you could jump to a different line.  The person in front of me took EVERYTHING as carry-on, including a cheesecake!  Apparently it's impossible to tell the difference between cheesecake and explosives?  Yum, C4 with graham cracker crust and cream cheese frosting.  After getting through security I only had 10 minutes to make it to the C-gate.  I hopped on the tram, got to the C-gate, and the plane took off almost immediately after I boarded!

I hope to be able to go next year, but perhaps I'll drive up instead.

Silverlight Logo Small

This is a pretty quick post with the notes and screencast from a session I did as an internal company training.  I also planned to give this presentation at the NUNUG meeting in February, but my new baby girl decided that she wanted to come that day.

Remember, this is beta software and things can change before the actual release.  Also, I do not work for Microsoft, so if I did something incorrectly, I apologize! :)

// Entity Client & Entity SQL
using (EntityConnection conn = new EntityConnection("Name=UtahCodeCampEntities"))
{
    conn.Open();
    EntityCommand cmd = conn.CreateCommand();
    cmd.CommandText = @"SELECT VALUE p
        FROM UtahCodeCampEntities.Presentations AS p 
        WHERE P.EventID = @EventID";
    cmd.Parameters.AddWithValue("EventID", 1);

    DbDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
    while (rdr.Read())
        Console.WriteLine(rdr["Title"]);
}

// Object Services & Entity SQL
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    ObjectQuery<Presentation> presentations = data.CreateQuery<Presentation>(
        "SELECT VALUE p FROM Presentations AS p WHERE p.User.UserID = @UserId",
        new ObjectParameter("UserId", 5));

    foreach (Presentation presentation in presentations)
        Console.WriteLine(presentation.Title);
}

// Lazy Loading with Object Services
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    ObjectQuery<Presentation> presentations = data.CreateQuery<Presentation>(
        "SELECT VALUE p FROM Presentations AS p");

    foreach (Presentation presentation in presentations)
    {
        if (!presentation.UserReference.IsLoaded)
            presentation.UserReference.Load();

        Console.WriteLine(string.Format("{0} Suggested the topic: {1}",
            presentation.User.UserName,
            presentation.Title));
    }
}

// Eager Loading with Object Services
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    ObjectQuery<Presentation> presentations = data.CreateQuery<Presentation>(
        "SELECT VALUE p FROM Presentations AS p").Include("User");

    foreach (Presentation presentation in presentations)
        Console.WriteLine(string.Format("{0} Suggested the topic: {1}",
            presentation.User.UserName,
            presentation.Title));
}
// LINQ to Entities with Lazy Loading
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    var presentations = from p in data.Presentations
                        where p.Title.Length > 30
                        select p;
    foreach (Presentation presentation in presentations)
        Console.WriteLine(string.Format("{0} Suggested the topic: {1}",
            presentation.User.UserName,
            presentation.Title));
}

// LINQ to Entities with Eager Loading
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    var presentations = from p in data.Presentations.Include("User")
                        where p.Title.Length > 30
                        select p;
    
    foreach (Presentation presentation in presentations)
        Console.WriteLine(string.Format("{0} Suggested the topic: {1}",
            presentation.User.UserName,
            presentation.Title));
}

// POCO Insert New Object & Transaction Support
User u = new User();
u.UserName = "JohnDoe999";
u.PassHash = "APasswordHere";
u.FirstName = "John";
u.LastName = "Doe";
u.EmailAddress = "JDoe@compserv.com";
u.ShirtSize = "XL";
u.CreatedDate = DateTime.Today;

using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    data.Connection.Open();  // an explicit transaction requires an open connection
    using (DbTransaction trans = data.Connection.BeginTransaction())
    {
        data.Users.AddObject(u);
        data.SaveChanges();
        trans.Commit();
    }
}

// POCO Update Object
User usr = null;
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    usr = data.Users.Where(n => n.UserName == "JohnDoe999").Single();
}

// Send user over WCF -- Simulated Disconnected Scenario
usr.ShirtSize = "M";

// Return back to server
using (UtahCodeCampEntities data = new UtahCodeCampEntities())
{
    data.Users.ApplyChanges(usr);
    data.SaveChanges();
}

I hope this helps someone who is looking for some answers on how to do stuff.


I am in the middle of a great book right now called The Mother Tongue.  It's about the origin of the English language.  In the early pages it pokes a little fun at the attempts of some companies that have a false mastery of the language.

I seem to buy a lot of these products and they are hilarious!  I’ve been meaning to post this image for a while!  It’s the back of the box of my network cable tester.

Cable Tester Back

My favorite is the “Attention”, “Do not change it on your mind.”  I can’t even fathom what it’s trying to warn me against.  I’ve got a manual for my XO brand radio that is just as bad!  Most of the time you have no clue what it’s trying to tell you.  The Radio, by the way, is not much better.  The English on it is pretty simple, but there are large sections of the user interface that are not translated from Chinese! It’s also a pretty poor MP3 player which sucks because that is the feature I purchased the radio for.


Cryptography

There are three main tools used in cryptography.  They are hashing, symmetric encryption, and asymmetric encryption.  A very superficial definition of each could be:

Hashing – Creating a unique value based on blocks of input. Even a minor change in the input, say a value changes by 1 bit, will cause a good hash to change drastically.  Ideally, every bit in the right place is the only way to create this number. Hashing is a one-way function so it is not useful for data hiding, but it is very useful for validating the authenticity of data such as a download or a certificate.  It is used to ensure that a message has not been tampered with.  If even a little change is introduced the unique value becomes very different.  If two different blocks of data are hashed using an algorithm that produces the same resultant value it is called a collision, and the algorithm is considered compromised.  This is because a malicious hacker could change the value of the data and still have the resulting hashed value not change.  When this happens the hash algorithm is useless.  This actually happens a lot!

Symmetric Encryption – This means changing an input block, using a key, into something unrecognizable by anyone not in possession of that key. This type of encryption is what people most commonly think of when you talk about encryption and has been used since about 1900 BC! Depending on the size of the key and the technique used, it is also the most difficult to break.

Asymmetric Encryption – Two keys are generated that are mathematically linked.  One key is used to encrypt (the public key) and another to decrypt (the private key).  The idea is that the public key can be given to whoever wants to communicate with us.  It doesn't matter if a hacker captures this key because it doesn't allow them to read the message that the client sends back.  In the case of SSL/TLS, the message that the client sends back is a key to use for symmetric encryption.

Only the recipient who has the private key will be able to unlock this message and be able to further communicate with the key that was sent.  The only way to break this type of encryption is to guess the key, but if you have sufficient key lengths (1024 bits is now standard in the US, for example), then a brute-force attack would take longer than the Universe has to live to guess the key.  Furthermore, because we passed a random set of data (a new symmetric encryption key), a hacker wouldn't know if they successfully cracked the message or not.
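To make that exchange concrete, here is a rough C# sketch of the idea using the .NET crypto classes. It is just the concept, not how any real TLS implementation works.

// Conceptual sketch only -- not how a real TLS library is implemented.
using System;
using System.Security.Cryptography;

class KeyExchangeSketch
{
    static void Main()
    {
        // "Server" generates a key pair and publishes only the public half.
        using (var server = new RSACryptoServiceProvider(2048))
        {
            RSAParameters publicKey = server.ExportParameters(false);

            // "Client" generates a random symmetric (AES) session key...
            byte[] sessionKey;
            using (var aes = Aes.Create())
                sessionKey = aes.Key;

            // ...and encrypts it with the server's public key. Only the holder
            // of the private key can recover it, even if it is intercepted.
            byte[] encryptedKey;
            using (var client = new RSACryptoServiceProvider())
            {
                client.ImportParameters(publicKey);
                encryptedKey = client.Encrypt(sessionKey, true); // OAEP padding
            }

            // Server decrypts with its private key; both sides now share the key.
            byte[] recovered = server.Decrypt(encryptedKey, true);
            Console.WriteLine(Convert.ToBase64String(recovered) ==
                              Convert.ToBase64String(sessionKey)
                ? "Shared symmetric key established"
                : "Key mismatch");
        }
    }
}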

This may all sound well and good, but SSL/TLS breaks down if the authenticity of the RP (Remote Party / Server) cannot be verified.  That is why we have CAs, or Certificate Authorities.  The job of a CA is to vouch for the authenticity of the remote party.  This means that IF YOU GET A CERTIFICATE WARNING, THAT WARNING IS THE ONLY THING PROTECTING YOU FROM A MAN-IN-THE-MIDDLE ATTACK.  If someone relayed communication between you and the remote party, the certificate warning is the only thing that will protect you.

Actually, all three of these cryptographic technologies are used in SSL/TLS.  If you have ever gone to an https website and clicked on the "lock" icon, this is what you are likely to see.

Certificate-1 Certificate-2 Certificate-3

The first tab states that the certificate's purpose is to "Ensure the identity of a remote computer" and has some information about the CA and expiration date.  In the second tab you can see that the hash algorithm is SHA1 and that the public key is an RSA 1024-bit key. Symmetric communication takes over when we pick a key and a symmetric encryption algorithm. The server may reject the algorithm we selected and you would be forced to pick again until you both agree on which algorithm to use.  Nowadays it's likely to be AES (Rijndael) encryption that is chosen for the rest of your session.

Back to Hashes

Hashes are usually considered the weakest link in this chain of cryptography.  To date the following hash algorithms have been compromised: HAVAL, MD2, MD4, MD5, PANAMA, RadioGatun, RIPEMD, SHA-0, SHA-1, and Tiger.  Many of these algorithms are not completely broken, but rather theoretically compromised: we know that we can break them, but doing so is still largely infeasible.  That is to say, generating a collision for a given message is likely to be more difficult than it's worth.  The most common algorithms in use today are MD5 and SHA1, both of which have recently been compromised.

In 1996 a flaw was found in the design of MD5, and in 2007 it was broken completely: it was demonstrated that a pair of files could be created which have the same hash value, and researchers were able to fake SSL certificate validity.  This caused the DHS (Department of Homeland Security) to issue a statement indicating that the MD5 function should be considered cryptographically broken and is unsuitable for further use.  This is no small thing, as we have an insatiable addiction to MD5!  We use it everywhere!  Everywhere else we use SHA-1, which has also been cracked.  The DHS suggests SHA-2, but since it is algorithmically similar to SHA-1 there is a good chance it won't be effective for very long.

So how do we make hashes stronger?

I don't believe that we are all of a sudden going to invent a new generation of hashing algorithms that are significantly stronger.  In fact, if anything we're getting better and better at cracking these hash functions, so they have shorter and shorter lifespans.  I think what we will have to do is become more inventive in our use of them.

Hash functions work a lot like block ciphers (symmetric encryption); that is, they work on blocks of data at a time.  If you are going to fool a hash function then you either need to make the tampered block result in the same hashed value or you need to "balance" out the message in a later block.  Blocks are typically 512 bits of data.  If there is a remainder at the end of the message it will be hashed as a partially-empty block.

Blocks

One obvious thing you can do to protect yourself is to supply more than one type of hash for any given set of data.  So, for example, if you download a file from the Internet you sometimes see a few different hashes.  You may fool one hash, but you’re not going to fool two hashes with the same collision, especially if the two hashing algorithms are unrelated.
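As a quick sketch of that idea, computing two unrelated digests of the same file in C# is cheap; a collision crafted for one algorithm is vanishingly unlikely to also collide under the other:

// Sketch: publish two unrelated hashes of the same file.
using System;
using System.IO;
using System.Security.Cryptography;

class DoubleHash
{
    static void Main(string[] args)
    {
        string path = args[0];

        using (MD5 md5 = MD5.Create())
        using (SHA256 sha256 = SHA256.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            byte[] md5Hash = md5.ComputeHash(stream);
            stream.Position = 0;                       // rewind for the second pass
            byte[] sha256Hash = sha256.ComputeHash(stream);

            Console.WriteLine("MD5:     " + BitConverter.ToString(md5Hash).Replace("-", ""));
            Console.WriteLine("SHA-256: " + BitConverter.ToString(sha256Hash).Replace("-", ""));
        }
    }
}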

While using two hashes seems a reasonable approach, there are some downsides.  First, it's twice the work for your computer, hashing the same bits using two different hash functions. Second, it is not possible in every scenario; sometimes a single hash must be used.  One example of this is the use of HMAC authentication.  Such authentication works kind of like this:

  1. Client --> Server: User "nzaugg" wishes to authenticate.
  2. Server --> Client: Okay, here is a very large and unique phrase.
  3. Client --> Server: Hash(Password + Phrase)
  4. Server: Yep, that's what I get when I Hash(Password + Phrase).

We can’t really send more than one hash as a response and even if we did, it wouldn't help any.
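A tiny sketch of the client's side of that exchange, assuming the simple Hash(Password + Phrase) scheme sketched above (names are illustrative; a production system would use a real HMAC and a proper password store):

// Illustrative only: the client's answer to the server's challenge phrase.
using System;
using System.Security.Cryptography;
using System.Text;

static class ChallengeResponse
{
    public static string Respond(string password, string serverPhrase)
    {
        using (SHA1 sha1 = SHA1.Create())
        {
            byte[] data = Encoding.UTF8.GetBytes(password + serverPhrase);
            return BitConverter.ToString(sha1.ComputeHash(data)).Replace("-", "");
        }
    }
}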

While playing with a pair of hacked files I had an idea which could still use MD5 but stagger the hashes so the blocks align differently, then hash those hashes together.  It sounds pretty home-brew, but let me explain.

Offset Hashing

Let's say that I have that same hash as before, but I have compromised data in block #1 (red line). If we hash that data again with different data before it, and at a different place in the block, it will yield a different result. In order for this to work, though, we need actual data in the offset blocks (in blue).  We can simply take the last 256 bits from the end of the file for the beginning and the first 256 bits for the end. We could also fill it with A..Z, etc. It doesn't matter too much, but it should be a convention.  This is important because if we left the blocks at the beginning and the end empty then we could not detect tampering on those two blocks using MD5, as it's somewhat position independent.  At least it didn't work when I tried it.  It has more to do with what values preceded it, and it therefore produced the same value when the previous values were all zero.

Offset Blocks

And using no new hashing algorithms, we can now detect the hash-tampered data. It's still about twice as expensive as a 1-pass hash, but overall hashing is fairly inexpensive and modern computers more than compensate.  It may be possible to calculate a collision once, but it is impossible to calculate a collision for one hash pass and have it also work for the other.

It was a little troubling to me that unless the blue squares were filled with some data I was unable to detect the changed data. So I thought why not offset hash with two different hash algorithms and then use a 3rd to hash together the results of the other two hashes.  I could use MD5 for pass 1, SHA-1 for pass 2, and SHA-2 to combine the two passes into a single hash.

Offset Blocks Different Hash Functions

The result is an unbeatable hash!  It doesn't matter that both SHA-1 and MD5 have been broken; no one has ever broken the pair used together, and offset hashing makes that task harder still, especially considering that the two algorithms chosen are very different.  The third algorithm doesn't need to be different from the other two; it essentially just helps make subtle differences in the hashes stand out.
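Here is a short C# sketch of my reading of the scheme: MD5 over the data as-is, SHA-1 over the data shifted by half a block (prefixed with the file's last 32 bytes and suffixed with its first 32 bytes), and SHA-256 to combine the two digests. Treat it as a sketch, not production code.

// My reading of the idea above -- a sketch, not production code.
using System;
using System.Linq;
using System.Security.Cryptography;

static class OffsetHash
{
    public static byte[] Compute(byte[] data)
    {
        const int offsetBytes = 32; // 256 bits: half of a 512-bit hash block

        // Borrow the file's own tail/head to fill the blue offset blocks.
        byte[] prefix = data.Skip(Math.Max(0, data.Length - offsetBytes)).ToArray();
        byte[] suffix = data.Take(offsetBytes).ToArray();
        byte[] shifted = prefix.Concat(data).Concat(suffix).ToArray();

        using (MD5 md5 = MD5.Create())
        using (SHA1 sha1 = SHA1.Create())
        using (SHA256 sha256 = SHA256.Create())
        {
            byte[] pass1 = md5.ComputeHash(data);      // pass 1: straight MD5
            byte[] pass2 = sha1.ComputeHash(shifted);  // pass 2: offset SHA-1
            return sha256.ComputeHash(pass1.Concat(pass2).ToArray());
        }
    }
}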

By the way, BitTorrent downloads rely heavily on hash functions both for data verification (did I hear you correctly?) and for authenticity.  There are programs out there (links below) that can change a file and get it to generate the same hash.  When you download stuff off of BitTorrent you can never be sure that what you are downloading does not contain a virus.  Even if they were to use my staggered diff hash idea, the virus could have been there before the torrent file was even created.

Links

SearchIcon

A short time ago I was working on a project using SQL Server Full Text Search.  This was my first real deep exposure to the engine and it worked pretty well.  However, for the project I was working on there were some very serious problems that I was never able to overcome.  Try as I might, I was never able to tweak SQL Server Full Text Search as much as I needed.

Here are some of the problems that I faced while playing with the Full Text Engine:

  1. SQL Server Full Text Engine 2005 (FTE) had some scalability issues.  I didn't so much need it to index hundreds of millions of rows (although it needed to be able to handle a lot of records); mostly I needed a lot of queries per second.  It seemed to only be able to run on a single processor at a time.  If someone did a ridiculously huge query, everyone else on that server would come to a halt!  On a box with 8 logical processors this was not acceptable!
  2. FTE has no capability for compound words (i.e. the terms 'Riverboat' and 'River Boat').  Sure, you could put such pairs in the Thesaurus as expansions, right?  Well, I will get to that.
  3. FTE black-boxes the word breaker.  You might think that this is not a big deal, but you would be wrong!  FTE considered the ampersand ‘&’ a word breaker when I needed it not to be.  For example, if you searched for ‘R&B’, the ampersand would break that into the words ‘R’ and ‘B’, both of which are in the noise-words list by default.  Therefore terms like ‘R&B’ and ‘AT&T’ were optimized out of the query by design.  Creating your own word breaker is possible but very difficult and not recommended.  Also, I needed an underscore ‘_’ to break words, and it did not.
  4. The ranking information that came back from FTE was not very good.  The rank could not be used to compare results across two different queries, and the values were not evenly distributed: I might get the top 30 rows with a rank of “34” and the rest with a rank of “20”.  The numbers are also arbitrary and essentially meaningless.
  5. The ‘FREETEXTTABLE’ predicate is useless!  It will only UNION sets rather than INTERSECT them.  This means a search for ‘Sax’ could return 1,254 rows while ‘Tenor Sax’ returns 2,259 rows; every term you add increases the size of the result rather than narrowing it. We had to use ‘CONTAINSTABLE’ instead, but that led to problems with any compound word in the thesaurus and looked awful! Something like:
    (“Sax” OR “Sax*” OR FORMSOF(“Sax”) OR THESAURUS(“Sax”))
    That is for a single-term search; each word needs its own set of criteria.
  6. FTE was kind of slow on larger queries.  Returning a set of over 1,000 rows seemed to be quite a chore!
  7. In order to add things to the thesaurus you had to change the XML file and restart the service.  You may even have to rebuild the index – I don’t remember.
  8. Every few days or so the searches would slow down a lot!  To get the speed back up we had to shut down the FTE service and the SQL Server service and rebuild the index. We hoped to write a script to do this chore periodically, but for some reason the script that Management Studio generated didn't seem to work the same way.

That's all well and good, but what did I gain by writing my own version?

  1. Most of the code is reentrant, and the code that is not uses efficient reader/writer locking to make sure the index doesn’t change out from underneath you!  This means I can fully utilize all logical processors on the machine, and large queries will not interfere with smaller ones.
  2. It also means the index can be rebuilt while the old index is still in use; once the build is complete, the two can be quickly and easily swapped (see the locking sketch after this list).
  3. I was able to create a ranking that yields a very nice, even distribution across every result in the set. (more on this below)
  4. I was able to pull the word-breaking rules from a file, with one file per language. The same files are used both when breaking the search terms and when breaking the text used to create the index. Because the exact same word breaker runs on both the search terms and the index data, we got much better search matching!  Terms like ‘R&B’ were indexed and searched for in the same way, noise words were dropped the same way, and working with compound words became possible.
  5. This search is much faster than SQL Server FTE.  All my queries returned in less than 100ms, usually much less!
  6. Ranking could be customized. (more on this below)
  7. I was able to create a ‘FREETEXTTABLE’ style query that reduced the number of results as the search term became more specific.
  8. I could change the thesaurus on the fly.
  9. I was able to manage compound words much more effectively.
  10. I got to implement my own custom Binary Search which was pretty fun!
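To make items 1 and 2 concrete, here is a minimal sketch, with my own type and member names rather than anything from the actual CSharpQuery source, of how a reader/writer lock lets many queries read the index concurrently while a freshly rebuilt index is swapped in atomically.

using System;
using System.Threading;

// Sketch only: many readers query the current index at once; the writer lock
// is held just long enough to swap in a rebuilt index.
class SearchIndexHost<TIndex> where TIndex : class
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private TIndex _index;

    public TResult Query<TResult>(Func<TIndex, TResult> query)
    {
        _lock.EnterReadLock();            // many readers may hold this at once
        try { return query(_index); }
        finally { _lock.ExitReadLock(); }
    }

    public void Swap(TIndex rebuiltIndex)
    {
        _lock.EnterWriteLock();           // blocks only for the instant of the swap
        try { _index = rebuiltIndex; }
        finally { _lock.ExitWriteLock(); }
    }
}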

Before you go tearing off to download my version of search and rip out SQL Server FTE, there are some tradeoffs to using my system.  Most notably, I keep my index in memory, which I don't believe FTE does.  For this project that only amounted to ~16MiB of memory usage per language; we had 32GiB of memory on that system, so even with a fairly deep set of data indexed it wasn't a big deal.  But if you had 100 million rows or more, this may not be a good solution for you. 

CSharpQuery is also slower at creating the index.  Index creation in FTE is almost instantaneous, whereas mine takes about 10 seconds per language to build.  Again, not a big deal in most cases, but it could be problematic for very large data sets. Lastly, FTE is able to hand the data right to you in T-SQL and is easy to set up and consume; CSharpQuery will merely give you a list of keys and ranking info, and you are expected to join that back to your data in SQL Server yourself.  On the other hand, this means it can be used outside of SQL Server, which is a plus!

The real genius in the CSharpQuery code is the ranking.  As mentioned above, the ranking in FTE has two fatal flaws: the rank can't be compared outside of its own result set, and the ranking values are pretty scant.  In a typical result set of 100 rows there might be only 5-10 distinct ranking values, which means large sections of the results appear unsorted or unranked! It is a difficult problem to come up with a ranking algorithm that is both universal in its application and precise enough that a rank value is almost as distinctive as a hash. 

This is accomplished by using these four filters:

  • Word Proximity – How close are the search terms to each other?
  • Multiple Occurrence – How many times does the search term appear in the indexed phrase?
  • Low Phrase Index – Are the search terms found near the beginning of the phrase, perhaps in a title?
  • Word Matching – Did we find the exact words or did we use the thesaurus or front word matching to find this row?

The ranking of results is the most complex part of the code. It is also the most processor-intensive. Each row is ranked independently against the four filters, and each filter scores the row between 0 and 1. To produce the final rank, each filter's result is weighted, since some filters are better at finding what you are looking for than others; the less discriminating filters mostly serve to break ties. The result is a very nice, even distribution across the entire result set, and the final rank is also a number between 0 and 1.
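As a rough illustration of the weighting idea (the method name and the weights below are made up for this example; the real code chooses its own), combining the four 0..1 filter scores might look something like this:

// Illustrative only: blend four per-row scores in the 0..1 range into one rank.
static double CombineRank(
    double wordProximity,      // how close the search terms are to each other
    double multipleOccurrence, // how often the terms appear in the phrase
    double lowPhraseIndex,     // how near the start of the phrase the terms are
    double wordMatching)       // exact match vs. thesaurus / front matching
{
    const double wProximity  = 0.40;  // the more discriminating filters get
    const double wMatching   = 0.30;  // the heavier weights...
    const double wOccurrence = 0.20;
    const double wLowIndex   = 0.10;  // ...the weaker ones mostly break ties

    // Weighted average keeps the combined rank between 0 and 1.
    return wordProximity * wProximity
         + wordMatching * wMatching
         + multipleOccurrence * wOccurrence
         + lowPhraseIndex * wLowIndex;
}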

Another great feature of CSharpQuery is the tweaked thesaurus for compound words.  Let's say you are looking for a song titled “Riverboat Shuffle”. In FTE, if we simply do a thesaurus lookup on “Riverboat” we also get “River Boat”, which means it will return all results containing “river” and “boat” but not necessarily together.  In my version of the thesaurus, the exact same results are returned for the queries “Riverboat Shuffle” and “River Boat Shuffle”.
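To illustrate the behaviour (this is a sketch of the idea, not the actual CSharpQuery code), treating a compound word and its split form as the same token at query time is enough to make both spellings produce an identical query:

using System;
using System.Collections.Generic;

// Sketch: collapse the split form of a known compound back to the compound,
// so "Riverboat Shuffle" and "River Boat Shuffle" normalize identically.
static class CompoundWords
{
    // compound form -> split form (the split form is matched as one phrase,
    // never as two independent words)
    static readonly Dictionary<string, string> Compounds =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "riverboat", "river boat" }
        };

    public static string Normalize(string query)
    {
        string normalized = query.ToLowerInvariant();
        foreach (var pair in Compounds)
        {
            // Both spellings of the query end up identical before the
            // index is searched.
            normalized = normalized.Replace(pair.Value, pair.Key);
        }
        return normalized;
    }
}

// CompoundWords.Normalize("Riverboat Shuffle")  -> "riverboat shuffle"
// CompoundWords.Normalize("River Boat Shuffle") -> "riverboat shuffle"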

NOTE: The thesaurus that comes with this download is not necessarily a great one.  Basically, I used a dictionary to find compound words, but discovered that a lot of words look like compounds to a computer when they really aren't: “monkey”, for example, was built from “mon” and “key”, but “mon key” is not the same as “monkey”.  It is your responsibility to clean up the thesaurus.  If you like, you can also send it back to me when you're done so everyone else can benefit.

Building an Index

In yet another instance of “it's free for a reason”, CSharpQuery doesn't come with any kind of tool to build your index for you; you have to write a small program to build the index.  This can also be advantageous.  For example, in my C# code I combine data from several track-related tables, so even though the composer lives in a different table, typing a composer's name will show all of the tracks for that composer.

public static void UpdateCSharpFullTextSearchIndex(
    IDataStore con, int langId, CultureInfo cultureInfo, 
    string cSharpQueryIndexDirectory) {
    
    // Quicksearch SQL
    string quicksearchSql = @"
        SELECT 
            t.TrackID,
            p1.Text + ' ' + -- TrackName
            p2.Text + ' ' + -- TrackDescription
            p3.Text + ' ' + -- AlbumName
            p4.Text + ' ' + -- LibraryName
            ar.ArtistName + ' ' +
            t.Publisher as IndexText
        FROM Track t 
        INNER JOIN Album a 
            ON t.AlbumID = a.AlbumID
        INNER JOIN RecordLabel r 
            ON a.RecordLabelID = r.RecordLabelID
        INNER JOIN Artist ar 
            ON t.ArtistID = ar.ArtistID

        INNER JOIN Phrase p1 
            ON t.Title_Dict = p1.DictionaryID 
            AND p1.LanguageID=@LangID
        INNER JOIN Phrase p2 
            ON t.Description_Dict = p2.DictionaryID 
            AND p2.LanguageID=@LangID
        INNER JOIN Phrase p3 
            ON a.AlbumName_Dict = p3.DictionaryID 
            AND p3.LanguageID=@LangID
        INNER JOIN Phrase p4 
            ON r.RecordLabelName_Dict = p4.DictionaryID 
            AND p4.LanguageID=@LangID";

    SqlConnection conn = Con(con).Connection as SqlConnection;
    try {
        conn.Open();
        SqlCommand cmd = new SqlCommand(quicksearchSql, conn);
        cmd.CommandType = System.Data.CommandType.Text;
        cmd.Parameters.AddWithValue("@LangID", langId);
        SqlDataReader rdr = cmd.ExecuteReader();

        SQLServerIndexCreator creater = new SQLServerIndexCreator();
        
        // Quicksearch Index
        creater.CreateIndex(
            "QuickSearch", // The name of the index
            cSharpQueryIndexDirectory, // Index Dir
            rdr, // An open Data Reader
            cultureInfo, // The culture info for this index
            "TrackID", // The [Key] (int) column of the index
            "IndexText"); // The [Value] column of the index
        rdr.Close();
    } finally {
        if (conn != null && 
            conn.State == System.Data.ConnectionState.Open)
            conn.Close();
    }
}

As you can see in this index creation example, we concatenate the track name, track description, album name, and library name for the quick search index.  This will then create the .index file like “Index_QuickSearch.en-US.index”.  You will notice the file is in the format of “Index_[index name].[culture code].index”.  It is important to have a few things in place before you try this.  The following files should exist by default:

  • Invalid Chars.txt – Contains invalid chars like [tab]~{}[CR][LF], etc.
  • Noisewords.global.txt – Contains words that are not useful to index like [a-z], “and”, “the”, etc.
  • Substitutions.global.txt – Contains a list of substitutions you wish to make, this is usually used to indicate what symbols break words and which ones do not.  For example: “:= “ means that we’re going to substitute the “:” sign for a blank space.
  • Thesaurus.global.xml – The thesaurus contains synonyms and compound words.  NOTE: if you use the compound word functionality, the compound term must come second.
  • WhiteSpace.global.txt – This file tells the word breaker which characters are whitespace so they can be safely used to split terms.

These files all work together to help create a better index.  You will also notice the “.global.txt” convention in these file names.  That is because you will want to specialize these files per culture, so you can also have “WhiteSpace.en.txt”, “WhiteSpace.en-us.txt”, and so on. The global lists are merged with the lists for a specific language, so they apply to all languages.

Congratulations, you now have an index!  Now for the easy part – Using the index.  A typical full text query will look something like this:

public static List<Track> QuickSearch(
    IDataStore con, 
    string csharpQueryIndexLocation, 
    string searchCriteria, 
    int recordLabelID, 
    int pageSize, 
    int pageNumber, 
    SortCriteriaType sort, 
    Language uiCulture, 
    User user, 
    out int rowCount) {
    
    rowCount = 0;
    int? labelID = 
        (recordLabelID > 0) ? recordLabelID : (int?)null;

    FreeTextQuery.DatabasePath = csharpQueryIndexLocation;
    CultureInfo ci = 
        uiCulture.LanguageID == 1 ? 
        CultureInfo.InvariantCulture : 
        new CultureInfo(uiCulture.CultureCode);

    List<QueryResult> trackResults = 
        FreeTextQuery.SearchFreeTextQuery(
        "QuickSearch",    // The Index Name 
        ci,               // The culture info
        searchCriteria);  // The search term

    string trackIDxml = FormatResultsAsXml(trackResults);


    // Do the search
    int? nRowCount = 0;
    var searchResults = Con(con).QuickSearch2(
        trackIDxml, 
        uiCulture.LanguageID, 
        pageSize, 
        pageNumber, 
        labelID, 
        user.UserID, 
        (int)sort, 
        ref nRowCount);
    
    rowCount = nRowCount ?? 0;

    return searchResults;
}

The code above first sets FreeTextQuery.DatabasePath to the location of the text index.  You only need to set this path once since it is static, but I set it on every call just in case.  Next I call FreeTextQuery.SearchFreeTextQuery to perform the search.  This returns a list of QueryResult objects, each giving you the [Key] you specified when creating the index, the rank (presorted), and the locations of the matched words in the original [Value].  This is very handy if you want to do something like highlight search terms, especially if you also want to highlight words that matched via the thesaurus.
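For example, a highlighting helper could walk those match locations and wrap each one in <b> tags. The WordLocation type below is a stand-in of my own; check the actual QueryResult members for the real name and shape of the match data.

using System.Collections.Generic;
using System.Linq;
using System.Text;

// Hypothetical stand-in for the match positions returned with each result.
public class WordLocation
{
    public int Start { get; set; }   // character offset into the indexed text
    public int Length { get; set; }  // length of the matched word
}

public static class Highlighter
{
    public static string HighlightMatches(string indexedText, IList<WordLocation> matches)
    {
        StringBuilder sb = new StringBuilder();
        int pos = 0;
        foreach (WordLocation m in matches.OrderBy(x => x.Start))
        {
            sb.Append(indexedText, pos, m.Start - pos);       // text before the match
            sb.Append("<b>");
            sb.Append(indexedText, m.Start, m.Length);         // the matched word itself
            sb.Append("</b>");
            pos = m.Start + m.Length;
        }
        sb.Append(indexedText, pos, indexedText.Length - pos); // trailing text
        return sb.ToString();
    }
}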

After I get my results I call a function named FormatResultsAsXml to build an XML string I can send to SQL Server and join these keys to the actual track information.  My implementation of FormatResultsAsXml uses a StringBuilder because I found it a little quicker at creating the XML string.

private static string FormatResultsAsXml(
    List<QueryResult> phraseResults) {
    if (phraseResults == null || phraseResults.Count == 0)
        return null;

    StringBuilder sb = new StringBuilder();
    sb.AppendLine("<root>");
    foreach (QueryResult r in phraseResults) {
        sb.AppendLine("<result>");
        sb.AppendLine("<key>" + r.Key + "</key>");
        sb.AppendLine("<rank>" + r.Rank + "</rank>");
        sb.AppendLine("</result>");
    }
    sb.AppendLine("</root>");
    return sb.ToString();
}

All we have to do now is pass this into our stored proc.  At about this point I'm sure you are thinking something like, “I'm using SQL Server 2005, so why not use the new xml data type rather than pass this value in as text?”  The answer is simple: for some very strange reason the new XML capability in SQL Server 2005 is VERY slow, unusably slow for sets of data larger than about 500 nodes!

Here is a snippet of my T-SQL code:

ALTER PROCEDURE [dbo].[QuickSearch]
(
    @TracksKeysXml text
    ,@LanguageID int
    ,@PageSize int
    ,@PageNumber int
    ,@RecordLabelID int = NULL
    ,@UserID int
    ,@SortID int = NULL 
    ,@RowCount int OUTPUT
) AS
BEGIN

    DECLARE @Tracks TABLE 
    ( 
        TrackID int primary key, 
        [Rank] real 
    )

    IF ( @TracksKeysXml IS NOT NULL )
    BEGIN
        DECLARE @xmlDocTracks int
        EXEC sp_xml_preparedocument 
            @xmlDocTracks OUTPUT, 
            @TracksKeysXml

        /* xml input: 
        <root>
            <result>
                <key>123456</key>
                <rank>0.75245</rank>
            </result>
        </root>*/
                
        INSERT INTO @Tracks
            SELECT * FROM 
            OPENXML(@xmlDocTracks, '/root/result', 2) 
            WITH ( [key] int 'key[1]', [rank] real 'rank[1]')
            
        EXEC sp_xml_removedocument @xmlDocTracks
    END

...

It’s as easy as that!  Feel free to leave a comment if you have any questions.

This software is distributed under the terms of the Microsoft Public License (Ms-PL). Under the terms of this license, and in addition to it:

  • Use this code royalty free in either open source or for profit software.
  • You may not remove the copyright notices from any of the files.
  • You may not charge any 3rd party for the use of this code.
  • You may alter the code, but must not distribute the source once it has been altered.
  • You should give the author, Nathan Zaugg, credit for code whenever possible.
  • The code is provided to you as-is without any warranty, implicit or explicit.

I would also appreciate it if you left a comment if you found the code to be useful.

You may download the source from here: http://interactiveasp.net/media/p/1124.aspx

UPDATE: This project is now being maintained on CodePlex.  See http://csharpquery.codeplex.com/ to get the latest code.



We have all seen the TV commercials with Mac and PC standing there while Mac makes some funny remark about how unreliable PC is or how much more fun Mac is.  We all got a good laugh, and I even started recording them before realizing they were available online.  I think Microsoft had a good laugh too, at first, or at least until it became clear that this one-sided ad campaign was really starting to hurt their reputation. 

Microsoft, perhaps wanting to fire back, started a series of TV commercials that were horribly unfunny!  Those were followed by the vague “I AM PC” commercials that weren't meant to be funny but didn't have much substance either.  Now Microsoft has started the “buy anything you want for $1,000 or $1,500” commercials, and BINGO!  These, in my opinion, are very effective at dispelling the myth that Macs are better than PCs.  The premium you pay for a Mac has been dubbed the “Mac Tax”.

They aren't convincing everyone, though.  It was a comment in the Redmond Report newsletter that got me fired up!

MAILBAG: THE MAC TAX THAT ISN'T, MORE

Microsoft has been talking up the so-called "Mac tax" to dissuade
people from moving to Apple. Marc thinks it's a little disingenuous to
call it that:

"For what it is worth, the 'Mac Tax' is not real! If you want, you
can configure a Dell with specifications virtually identical to any
Macintosh in the Apple product line and come up with very nearly
identical pricing. The catch, of course, is that an Apple Macintosh
is severely overpowered to meet the needs of most folks. Most folks
can meet their computing needs with a $500 to $800 Dell, or they can
go overboard and spend $1,000 and get a 'fully loaded' Dell that will
last them a good five years. Or, they can buy a 'bottom-of-the-line'
MacBook.

The truth is that if Apple could sell as many computers as Dell or
HP, they could afford to sell low-end $500 computers, but because
they don't sell a large enough number of computers to tolerate the
extremely narrow profit margins Dell and HP get on those $500
systems, Apple simply cannot afford to do so. Dell and HP 'take a
loss' on those entry-level systems but they make it up on very high
volumes and the occasional sale of $1,000-plus systems. All of
Apple's systems must be $1,000-plus systems for them to stay
in business."
-Marc

Well Marc, that sounds like a challenge!  I completely disagree with the argument that you can configure a Dell with specifications virtually identical to any Mac and come up with nearly identical pricing.

Mac vs. PC Challenge

So what does Apple have to offer?

White MacBook: $999.00 (2.0GHz, 120GB)

  • Display: 13.3-inch (viewable) glossy widescreen, 1280 x 800 pixels
  • Processor: Intel Core 2 Duo, 1066MHz frontside bus, 3MB shared L2 cache
  • Memory: 2GB (two 1GB) of 667MHz DDR2 SDRAM; option: up to 4GB DDR2
  • Hard drive: 120GB Serial ATA, 5400 rpm; option: up to 320GB hard drive
  • Battery: up to 4.5 hours of wireless productivity
  • Graphics: NVIDIA GeForce 9400M with 256MB of shared DDR2 SDRAM
  • Enclosure: polycarbonate
  • Size (H x W x D): 1.08 x 12.78 x 8.92 inches (2.75 x 32.5 x 22.7 cm)
  • Weight: 5.0 pounds (2.27 kg)

MacBook: $1,299.00 (2.0GHz, 160GB) or $1,599.00 (2.4GHz, 250GB)

  • Display: 13.3-inch (viewable) LED-backlit glossy widescreen, 1280 x 800 pixels
  • Processor: Intel Core 2 Duo, 1066MHz frontside bus, 3MB shared L2 cache
  • Memory: 2GB (two 1GB) of 1066MHz DDR3 SDRAM; option: up to 4GB DDR3
  • Hard drive: 160GB or 250GB Serial ATA, 5400 rpm; option: up to 320GB hard drive or 128GB solid-state drive
  • Battery: up to 5 hours of wireless productivity
  • Graphics: NVIDIA GeForce 9400M with 256MB of shared DDR3 SDRAM
  • Enclosure: precision aluminum unibody
  • Size (H x W x D): 0.95 x 12.78 x 8.94 inches (2.41 x 32.5 x 22.7 cm)
  • Weight: 4.5 pounds (2.04 kg)

MacBook Air: $1,799.00 (1.6GHz, 120GB) or $2,499.00 (1.86GHz, 128GB SSD)

  • Display: 13.3-inch (viewable) LED-backlit glossy widescreen, 1280 x 800 pixels
  • Processor: Intel Core 2 Duo, 1066MHz frontside bus, 6MB shared L2 cache
  • Memory: 2GB of 1066MHz DDR3 SDRAM (onboard)
  • Storage: 120GB Serial ATA, 4200 rpm, or 128GB solid-state drive
  • Battery: up to 4.5 hours of wireless productivity
  • Graphics: NVIDIA GeForce 9400M with 256MB of shared DDR3 SDRAM
  • Enclosure: precision aluminum unibody
  • Size (H x W x D): 0.16 to 0.76 x 12.8 x 8.94 inches (0.4 to 1.94 x 32.5 x 22.7 cm)
  • Weight: 3.0 pounds (1.36 kg)

MacBook Pro (15.4-inch): $1,999.00 (2.4GHz, 250GB) or $2,499.00 (2.66GHz, 320GB)

  • Display: 15.4-inch (viewable) LED-backlit glossy widescreen, 1440 x 900 pixels
  • Processor: Intel Core 2 Duo, 1066MHz frontside bus, 3MB or 6MB shared L2 cache; option: 2.93GHz
  • Memory: 2GB (two 1GB) or 4GB (two 2GB) of 1066MHz DDR3 SDRAM; option: up to 4GB DDR3
  • Hard drive: 250GB or 320GB Serial ATA, 5400 rpm; option: up to 320GB hard drive at 7200 rpm or 128GB solid-state drive
  • Battery: up to 5 hours of wireless productivity
  • Graphics: NVIDIA GeForce 9400M and 9600M GT with 256MB or 512MB of GDDR3 memory
  • Enclosure: precision aluminum unibody
  • Size (H x W x D): 0.95 x 14.35 x 9.82 inches (2.41 x 36.4 x 24.9 cm)
  • Weight: 5.5 pounds (2.49 kg)

MacBook Pro (17-inch): $2,799.00 (2.66GHz, 320GB)

  • Display: 17-inch (viewable) high-resolution LED-backlit glossy widescreen, 1920 x 1200 pixels; option: antiglare display
  • Processor: Intel Core 2 Duo, 1066MHz frontside bus, 6MB shared L2 cache; option: 2.93GHz
  • Memory: 4GB (two 2GB) of 1066MHz DDR3 SDRAM; option: up to 8GB DDR3
  • Hard drive: 320GB Serial ATA, 5400 rpm; option: 320GB hard drive at 7200 rpm, 128GB or 256GB solid-state drive
  • Battery: up to 8 hours of wireless productivity
  • Graphics: NVIDIA GeForce 9400M and 9600M GT with 512MB of GDDR3 memory
  • Enclosure: precision aluminum unibody
  • Size (H x W x D): 0.98 x 15.47 x 10.51 inches (2.50 x 39.3 x 26.7 cm)
  • Weight: 6.6 pounds (2.99 kg)

What does HP have to offer?

First, the computer that compares with the entry-level 13” white MacBook:

Well, this comparison isn't exactly apples to apples; I couldn't find an HP with a 13” screen, only smaller or bigger ones.  Also, I took the upgrades that I think most people would take.  Here is what I came up with:

  • HP G60t Laptop running Windows Vista Home Premium x64
  • Intel(R) Core(TM)2 Duo Processor T6400 (2.0GHz) (=)
  • 3GB DDR2 (+)
  • 250GB 5400RPM SATA (+)
  • 256MB NVIDIA GeForce 9200M GE (=)
  • 16.0" diagonal High Definition HP Brightview Display (1366x768) (+)
  • Free HP DESKJET D4360 PRINTER (with mail-in rebate) (+)
  • Cost: $708.99 with $100 instant rebate.  Savings: $290.01.  I did get some other free upgrades, but because they always run this kind of promotion I kept them.  Here is the link (for as long as it lasts).

The (–) indicates that the chosen component is inferior to its Apple counterpart, (=) indicates identical hardware, and (+) indicates superior hardware. 

DECISION: I got a VASTLY superior PC from HP for almost $300 less!  The HP notebook even looks nicer than the white MacBook.  Hands-down winner here.

Because my previous configuration already beat the next MacBook up the line, I'll move on to the MacBook Air, this time using Dell's website:

  • Dell XPS M1330 running Windows Vista Home Premium x64
  • Intel® Core™ 2 Duo T9300 (2.5GHz/800Mhz FSB/6MB cache) (+ & –)
  • 3GB Shared Dual Channel DDR2 SDRAM at 667MHz (+ & –)
  • Ultra Performance: 128GB Solid State Drive (=)
  • 128MB NVIDIA® GeForce™ 8400M GS (–)
  • 13.3" UltraSharp™ WXGA (1280 x 800) display with TrueLife™ (available with 2.0 MP camera) (=)
  • Weight 3.97 lbs; Size 31.8 x 2.31x 23.8 cm (–)
  • Cost: $1,444.  Savings: $355 / $1,055. 

This wasn't quite as good a comparison as I had hoped.  I needed a Dell laptop that offered a solid-state drive (the whole reason for the MacBook Air).  The processor is faster than in either MacBook Air configuration, but with a slower bus speed; in my mind that makes them a wash, though depending on what you are doing it could make a difference one way or the other, although it's not likely to. 

The other difference was the RAM.  I couldn't configure this computer with anything less than 3GB, which is 50% more RAM, but again it's slower RAM, so there could be some performance trade-offs.  Most of the time, though, the quantity of RAM beats the speed of the RAM; it's still many times quicker than virtual memory residing on a disk drive. The graphics card available for the PC was not as good as the one that comes with the MacBook Air, and the XPS system came in a bit chunkier in both weight and size.

DECISION: If you are in the market for a laptop that is powerful, small, and light, and cost is not much of a consideration, then the MacBook Air is a real contender.  I might have done better had I gone with Acer, which is known for small machines, but I still rather doubt I'd have found something as small and powerful.  The MacBook Air is the winner for performance and size, that is, if you can overlook the price tag.

The next comparison is against the MacBook Pro series laptops.  Here is what I configured at HP and how it compares:

  • HP HDX 18t with Windows Vista Ultimate with Service Pack 1 (64-bit)
  • Intel(R) Core(TM)2 Duo Processor T9550 (2.66 GHz) (+ & =)
  • 4GB DDR2 System Memory (2 Dimm) (-)
  • 320GB 7200RPM SATA Hard Drive (=)
  • 512MB NVIDIA GeForce 9600M GT (=)
  • 18.4" diagonal High Definition HP Ultra BrightView Infinity Display (1920x1080p) (+)
  • Blu-Ray ROM with SuperMulti DVD+/-R/RW Double Layer
  • HP Integrated HDTV Hybrid Tuner (+)
  • Cost: $1,606.99.  Savings: $392.01 / $892.01 / $1,192.01.  Again, here is the link so you can see for yourself.

I didn't bother comparing all three MacBook Pros separately; the system I built beat even the high-end MacBook Pro.  This HP system is every bit as good as the MacBook Pro, or even better!  What are the differences?  Again, I had a hard time getting comparable memory from the manufacturer, but if I wanted the high-end stuff I could still get it cheaply from Newegg.  I even got some freebies I didn't expect, like a bigger screen, a Blu-ray/DVD-RW combo drive, and an HDTV tuner integrated into the system. 

DECISION: The HP notebook is a clear winner!  If I had the $1,606.99 I would buy the PC right now! PCs are a great deal!

CAVEATS: I wasn't able to weigh in on every aspect of these laptops, such as battery life, for which I had no data from HP or Dell (mostly because it varies greatly between configurations).  I also assume that the overall build quality is equivalent, which is probably fair since all of the PCs had very high customer ratings.  I also didn't investigate warranty options very carefully, so that is left to the buyer to evaluate. 

Summary

I wouldn't recommend to my parents, grandparents, co-workers, friends, or peers that they choose a Mac over a PC.  A Mac is not a better computer simply by virtue of being made by Apple; you are paying more for a brand name, just like anything else.  If the software you need to run is Windows-based, buy a PC; if it's Mac-based, buy a Mac.  Simply running Windows in some kind of VM is not going to give you a very good end-user experience.  The “Mac Tax” is real!


I have been playing with Windows 7 in a VM for a few weeks now and have found it to be amazingly fast and lean!  So what will keep people from making the leap from XP to Windows 7? Application and hardware compatibility, of course!  Windows has such a large following nowadays in large part because of its commitment to compatibility between versions of Windows, and compatibility problems are one of the major reasons Windows Vista has been so marginalized. 

Since Microsoft is not providing a direct upgrade path from XP to Windows 7, and because Windows 7 is based on Vista technology, it can be quite a hard sell to get people to move from Windows XP to Windows 7.  For those in this camp there is some good news: last week the Windows Blog team posted a very provocative solution called Windows XP Mode for Virtual PC.

Based on the information provided, it looks as though they have developed “Unity”-like features for Microsoft Virtual PC.  This allows a program running inside a virtual machine to be moved onto the host's desktop: it looks like it is running on the host machine when it is really running inside the virtual machine. It can be so transparent that, in the example in the blog post, the only way you can tell is the tooltip on the shortcut.  VMware has had this technology for a little while, but my experience with it is mixed; it is really cool to see your app running on, say, a Mac, but the user experience is not even as good as running the application in the virtual machine directly.  Still, this technology has real promise.  Citrix, for example, bases its business on this type of technology, and Microsoft's Terminal Services even supports application virtualization.

For users who have only one or two incompatible applications keeping them from upgrading, this should be a big help!  Between that and the great product Windows 7 is shaping up to be, I can see people migrating en masse.

(Screenshot taken from the original blog post.)

Links:


Silverlight 2 is great! Silverlight 3 is AWESOME! My first experiences with the beta framework and tools have been overwhelmingly positive.  In this post I’ll go over my experience creating an application using some of the new features and I’ll show how easy it is to make your Silverlight application available offline.  To get started you’ll need the tools. Download the Silverlight Tools for Visual Studio 2008 SP1 from the official Silverlight website.  It is also recommended that you download and install the Silverlight Toolkit from the same site.

Getting Started

After installing the Silverlight tools open Visual Studio and click File –> New –> Project. Click Silverlight in the project types section and select Silverlight Application from the Templates group.

New Silverlight 3 Project

When prompted make sure you check the Host the Silverlight application in a new Web site checkbox and press the OK button.

New Silverlight Application Settings

Once your project comes up we can get started in code.  Open the MainPage.xaml file, open the toolbox, and double-click TwilightBlueTheme.  This will add some references to our project and update the background of our window.  That is how you use a built-in theme: any toolbox item that ends in “Theme” will change the look of your application.

Another great feature of the Silverlight 3 runtime is that we finally have full binding support!  The most useful binding feature is binding the value of one control to another. 

<Grid x:Name="LayoutRoot" Background="White">
    <twilightBlue:TwilightBlueTheme />
    <Button Height="30" Width="150" Content="This is my button Fool" 
            HorizontalAlignment="Left" VerticalAlignment="Top" 
            Margin="5" Click="Button_Click"  />
    <TextBlock Name="tbValue" Margin="50" 
               Text="{Binding ElementName=slider1, Path=Value}" />
    <Slider Name="slider1" Height="80" Width="300" 
            Margin="0 0 0 0" Value="3" />
</Grid>
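The XAML wires the button to a Button_Click handler that the walkthrough doesn't show; a minimal, purely hypothetical code-behind handler could simply echo the slider's current value:

private void Button_Click(object sender, RoutedEventArgs e)
{
    // tbValue is already bound to the slider via ElementName binding, so this
    // just proves the click event is reaching the code-behind.
    MessageBox.Show("Slider is at " + slider1.Value);
}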

Now, let's run it.

First Run

You can see that when you move the slider, its position value shows up in the TextBlock on our form.  Now that we have our ultra-simple app, we want to make it available offline.  We do that by opening the Properties\AppManifest.xml file and adding the following:

<Deployment.ApplicationIdentity>
    <ApplicationIdentity 
        ShortName="Nate Test Application" 
        Title="Nate Test App">
        <ApplicationIdentity.Blurb>This is a test silverlight 
            3 application out of the browser.</ApplicationIdentity.Blurb>
    </ApplicationIdentity>
</Deployment.ApplicationIdentity>

Now run the application again.  This time, right-click the Silverlight app and you will see a new option to install the application.

Install Silverlight 3 App

You will then get a very simple “install” dialog.  The install process is extremely fast!

Silverlight 3 Application Install 

And now we can launch our application from the start menu.  Here is what it looks like when it is run outside of the browser.

Silverlight 3 Application Running Outside of the browser

To remove the application from your system, just right click to uninstall.

Silverlight 3 Application Uninstall

And that is it!  Silverlight 3 works very well outside of the browser!

 

Links: