I know the title of this post is a bit of a misnomer; business objects will never map 1-to-1 with relational database objects.  However, when you're just starting out and you get to control the schema of the database, it sure makes a good starting point, and this trick makes it really easy and avoids a lot of typing.  It supports SQL 2005 and SQL 2008.

 

The first step is to download the script. The script I wrote to do this can be found at http://interactiveasp.net/media/p/87.aspx

Once the script has been downloaded, open it in SQL Server Management Studio and fill in the fields for Database name and Table Name.

Set Settings for Property Generation

After the settings are set, run the script.  You should get something that looks a lot like this:

 

Result Properties

Next, select all of the rows that are part of the table you are creating the class for.  Notice that my selection excludes the first column.  After the rows are selected, copy the rows out of the results grid.  Next, open up your project in Visual Studio.  Create a new class with the appropriate name and paste the contents of the clipboard as shown below.

Paste copied Code

The act of pasting the code into a class will cause the code to be formatted.  This should create nice looking properties inside of your class.

Newly Created Class

And that is all there is to it.  You will notice that by default the properties are sorted first by data type (descending), then by column name.  You can change this sort by editing the SQL script if you wish.  I tend to like to group like types together and then alphabetize; I think it makes it a little more readable.  Also, we are using auto-implemented properties (aka Automatic Properties).
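Since the screenshots don't carry over here, this is roughly the kind of class you end up with after pasting.  The Customer table and its columns below are made up for illustration; your script output will use your own table's columns:

```csharp
using System;

// Hypothetical example of the pasted output: one auto-implemented
// property per column, grouped by data type and then alphabetized.
public class Customer
{
    public DateTime CreatedOn { get; set; }
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```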

 

If you use the script or have comments / suggestions, please leave a comment and let me know!


This is just a quick post.  I had a bug where any time I tried to compile a WPF project on my laptop I got the following pair of errors:

Error Message 1:
The "SplashScreen" parameter is not supported by the "MarkupCompilePass1" task. Verify the parameter exists on the task, and it is a settable public instance property.


Error Message 2:
The "MarkupCompilePass1" task could not be initialized with its input parameters.

I completely uninstalled Visual Studio 2008 & SP1 and reinstalled everything, and the errors still did not go away.  Anyway, here is how you fix it:

  1. Using a text editor, open the file: C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.WinFx.targets
  2. Search for "MarkupCompilePass1"; this was line 294 for me
  3. Remove the following attribute from that XML tag: SplashScreen="@(SplashScreen)"
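To make the edit concrete, the attribute you're deleting sits inside the MarkupCompilePass1 task invocation.  The surrounding attributes below are placeholders for illustration, not the file's actual contents; only the SplashScreen line comes out, and the rest of the tag stays intact:

```xml
<MarkupCompilePass1
    Language="$(Language)"
    SplashScreen="@(SplashScreen)"
    OutputPath="$(IntermediateOutputPath)" />
```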

 

I have also posted my work around on the Microsoft Connect site (link below).

 


Recently I have been doing a lot with threading.  This is a concept that used to be very difficult for me and now is only just difficult! :)  Threading is becoming increasingly important because modern processors are not getting faster, they are getting more cores.  For an application to utilize any of the power of a modern CPU, it must use threading! So I thought I would take a second and go through the classes in the System.Threading namespace.  We'll start with the simple stuff and move on to more advanced stuff!

The most familiar and basic structure for locking in .net is the lock statement shown below:

lock (this) {
    ObjectCount = value;
}

The lock is a simple synchronization structure which will only allow a single thread into the "critical" locked section at a time.  Many people don't realize this, but the lock statement is really just shorthand; the compiler expands it into Monitor calls roughly like this:

Monitor.Enter(this);
try {
    ObjectCount = value;
}
finally {
    Monitor.Exit(this);
}

In fact, the Monitor class can do a lot more than enter a critical section, though I've not found a lot of use for the other functions.  Some operations of note: Monitor.TryEnter allows you to specify a time span to wait for the lock.  Monitor.Pulse and Monitor.PulseAll notify the next waiting thread, or all waiting threads respectively, that they will soon be able to re-acquire the lock.  Monitor.Wait releases the lock to let another thread enter and then, once pulsed, re-acquires the lock.
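To make Wait and Pulse concrete, here is a minimal sketch of my own (not from any particular library) of a blocking queue where a consumer waits until a producer hands it an item:

```csharp
using System.Collections.Generic;
using System.Threading;

// Minimal blocking queue built on Monitor.Wait / Monitor.Pulse.
public class BlockingQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();
    private readonly object _sync = new object();

    public void Enqueue(T item)
    {
        lock (_sync)
        {
            _items.Enqueue(item);
            Monitor.Pulse(_sync); // wake one thread blocked in Dequeue
        }
    }

    public T Dequeue()
    {
        lock (_sync)
        {
            // Wait releases the lock so Enqueue can run, then re-acquires
            // it before returning; the loop guards against waking up when
            // another thread has already taken the item.
            while (_items.Count == 0)
                Monitor.Wait(_sync);
            return _items.Dequeue();
        }
    }
}
```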

Another noteworthy threading trick is the keyword volatile.  This designation is useful when we are waiting for a variable to be changed in a while loop.  Without it, the compiler and JIT are free to cache the value of a variable in a register; volatile disallows that caching, so every read pulls the value from its memory location.  If that were not the case, the cached copy could go stale while another thread changes the real value, and you might never break out of the while loop.  Here is a quick example:

private volatile bool IsCanceled;

private void RunThread(object state) {
    while (!IsCanceled) {
        // Do work here!
    }
}

In the code block above, if some other thread changes the state of IsCanceled then the while loop will stop and the thread will exit.  While this is the behavior you would expect anyway, the compiler might not agree with you (especially when the value is modified outside the scope of the class).  The keyword only works on fields, not properties, and should only be used where it must be, as it can adversely affect performance.  Still, it's just good practice to use it when reading a value that can be mutated by another thread.

One of my favorite patterns is the singleton pattern.  This pattern is the most widely recognized but the least understood.  Let's take a second to examine a standard implementation of the singleton pattern.

public class SingletonExampleClass {

    private static volatile SingletonExampleClass _instance = null;

    public static SingletonExampleClass Instance {
        get {
            if (_instance == null) {
                lock (typeof(SingletonExampleClass)) {
                    if (_instance == null) {
                        _instance = new SingletonExampleClass();
                    }
                }
            }
            return _instance;
        }
    }
}

As you can see from the code above, we have to implement double-checking.  The reason is that after the first null check another thread could be slightly ahead of you and have created the object first.  You could take the lock before doing the check in the first place, but locking is expensive and completely unnecessary most of the time.  Only the first call requires the lock; after that, all other calls simply want a reference to the instance.  The singleton is one of the most common patterns, and that's why Microsoft gave us a break here with the readonly keyword.  The same class above can be written as:

public class SingletonExampleClass {

    public static readonly SingletonExampleClass Instance = 
        new SingletonExampleClass();

}

Much simpler! It literally eliminated 12 lines of code!  This is more or less the same as its brother; the only difference is the lazy instantiation in the first example, but in almost all cases this version is simpler and more intuitive.  The readonly keyword is also a little more flexible in that you can use it on non-static fields as well.  You may assign the value in the constructor rather than the initializer, but you must do one or the other.  Also, once the value is set it may not be changed!
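If you want the readonly simplicity but still need the lazy instantiation from the first example, one common sketch (my own addition, not from the original code above) is to push the readonly field into a nested class; the runtime won't initialize the nested type until Instance is first touched:

```csharp
public sealed class SingletonExampleClass
{
    private SingletonExampleClass() { }

    public static SingletonExampleClass Instance
    {
        get { return Holder.Value; }
    }

    // Holder's static field is initialized on first use of Instance,
    // and the CLR guarantees that initialization runs exactly once
    // even when multiple threads race here.
    private static class Holder
    {
        internal static readonly SingletonExampleClass Value =
            new SingletonExampleClass();
    }
}
```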

Another favorite of mine is the Interlocked class.  Interlocked.Increment and Interlocked.Decrement allow you to increment or decrement a numeric value using a thread-safe mechanism.

private int ItemsProcessed = 0;

private void RunThread(object state) {
    List<Order> orders = state as List<Order>;
    
    if ( orders == null )
        return;

    foreach (Order ord in orders) {
        // Process Order
        // ...
        Interlocked.Increment(ref ItemsProcessed);
    }
}

As you can see, it's pretty simple to use.  You may be wondering about the use of ref in the function signature.  As you know, int and long are value types (structs) and would normally be passed to a function by value (copying the contents into the function).  The ref keyword tells the compiler that we don't want to pass a copy of the value to the function, we want to pass the actual variable; i.e., we pass a reference to the variable rather than its value.  This means that any mutations made to the variable inside the function are visible outside of the function.
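The same ref mechanics show up in Interlocked.CompareExchange, which atomically writes a new value only if the variable still holds the value you expected.  Here is a quick sketch of my own of a "run once" guard built on it:

```csharp
using System.Threading;

public static class RunOnce
{
    private static int _started = 0;

    // CompareExchange writes 1 into _started only if it is still 0, and
    // returns the value that was there before the swap -- all in one
    // atomic step.  Exactly one caller ever sees 0 come back, no matter
    // how many threads race into this method.
    public static bool TryStart()
    {
        return Interlocked.CompareExchange(ref _started, 1, 0) == 0;
    }
}
```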

The next basic structure I'd like to discuss is the Mutex.  A Mutex is a lot like the Monitor shown above and a ManualResetEvent, but it can work between different processes, not just different threads.  The specifics of a Mutex are best described in this snippet from the MSDN documentation:

Mutexes are of two types: local mutexes, which are unnamed, and named system mutexes. A local mutex exists only within your process. It can be used by any thread in your process that has a reference to the Mutex object that represents the mutex. Each unnamed Mutex object represents a separate local mutex.

Named system mutexes are visible throughout the operating system, and can be used to synchronize the activities of processes. You can create a Mutex object that represents a named system mutex by using a constructor that accepts a name. The operating-system object can be created at the same time, or it can exist before the creation of the Mutex object. You can create multiple Mutex objects that represent the same named system mutex, and you can use the OpenExisting method to open an existing named system mutex.

Note:

On a server that is running Terminal Services, a named system mutex can have two levels of visibility. If its name begins with the prefix "Global\", the mutex is visible in all terminal server sessions. If its name begins with the prefix "Local\", the mutex is visible only in the terminal server session where it was created. In that case, a separate mutex with the same name can exist in each of the other terminal server sessions on the server. If you do not specify a prefix when you create a named mutex, it takes the prefix "Local\". Within a terminal server session, two mutexes whose names differ only by their prefixes are separate mutexes, and both are visible to all processes in the terminal server session. That is, the prefix names "Global\" and "Local\" describe the scope of the mutex name relative to terminal server sessions, not relative to processes.

Here is a short sample that checks for an existing Mutex and exits if one is found.  This is useful for single-instance applications.

static class Program {

    static Mutex m;

    /// <summary>
    /// The main entry point for the application.
    /// </summary>
    [STAThread]
    static void Main() {
        // Check for single instance of our application
        bool createdNew;
        m = new Mutex(true, "TestThreadingApplication", out createdNew);
        if (!createdNew) {
            // Another instance owns the mutex; we don't, so we must
            // not call ReleaseMutex here -- just exit.
            return;
        }
        
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new Form1());
        m.ReleaseMutex();
    }
}

The code works by trying to create a Mutex named "TestThreadingApplication" with initial ownership.  If no mutex with that name exists in the operating system it will be created, your thread will be assigned ownership, and createdNew will come back true.  If you were the first instance you may resume execution; otherwise your application exits.

The last thing we will discuss in this post is the Semaphore.  A Semaphore works much the same way as a Mutex, but works best as a way to manage a pool of objects.  The Semaphore starts with an initial count.  Each time WaitOne is called the count is decremented; when the count reaches zero, threads will be blocked until another thread calls Release.  Unlike the Mutex, the Semaphore does not track which threads have incremented or decremented the internal count, so the programmer must be careful to call Release exactly as many times as WaitOne.  It's best to think of a Semaphore as a synchronization mechanism that can let more than one thread into the critical section.  Because of its similarity to the other synchronization objects I didn't create a sample code block.

 


Here is some great code for moving a window without a title bar.

Step 1: Make your window transparent

<Window x:Class="TestWpfApp.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    AllowsTransparency="True"
    WindowStyle="None"
    Background="Transparent"
    Title="My WPF Application" Height="300" Width="300">

Step 2: Make a new title bar

<Grid>
    <Rectangle HorizontalAlignment="Stretch" VerticalAlignment="Top"
               Height="40" MouseDown="move_window" />
</Grid>

Step 3: Add some code

using System.Runtime.InteropServices;
using System.Windows.Interop;

...
public const int WM_NCLBUTTONDOWN = 0xA1;
public const int HT_CAPTION = 0x2;

[DllImport("user32.dll")]
public static extern int SendMessage(IntPtr hWnd, int Msg,
    int wParam, int lParam);

[DllImport("user32.dll")]
public static extern bool ReleaseCapture();

Step 4: Write the event

public void move_window(object sender, MouseButtonEventArgs e) {
    ReleaseCapture();
    SendMessage(new WindowInteropHelper(this).Handle,
        WM_NCLBUTTONDOWN, HT_CAPTION, 0);
}

 


Google has done it again!  In a completely unexpected (at least by myself) move they have released a browser into the mainstream.  Frankly, I am quite surprised!  Google was such a fan of Firefox that I never thought I would see them in competition.  It also surprises me because building web browsers is not part of Google's core business.  This is also an obvious swipe at Microsoft, which doesn't excite me.  A friend of mine, upon hearing the news, said: "I don't like the idea of any major web presence building a major web browser and driving a supposedly 'open source' technology to do what suits them best."

Install

I have no idea how large the install really is.  The download is a tool that downloads the rest of the browser.  It downloaded for a few minutes and began the install.  My estimate is that it was probably about 10-30MB, though I really don't know.  The installation was very uneventful.

What Google did right

There are a few things I really like.  Here they are:

  1. The interface is nice, sleek, and small.  I hate browsers (Firefox, you know who you are) that take up TONS of vertical space on my screen!  The space is wasted with things I don't even need or want!  Google's chrome did a nice job here!
  2. I like the idea of a multi-threaded browser (even if it is a multi-process browser).
  3. JavaScript is now a 1st class language in Chrome.  V8, the JavaScript execution engine, runs JavaScript much the same way as managed code, JIT'ed directly to the CPU.  As a result, it is LIGHTNING FAST!
  4. I really like the 'URL Bar'.  It has a suggest/append that actually gets what I want!  I also like the landing page.
  5. I like what they did to get the "sandbox" mode.  I especially like the way it's transparent to the user what is crashing your browser (usually a plug-in).
  6. I like the incognito feature which allows zero trace browsing.  I don't know why we don't do this all of the time! 
  7. The browsing experience is very fast.  The browser UI is also fast.
  8. I like the tab reordering.  It's very smooth!
  9. I think the "move tab to another Google browser window" is cool but ultimately not very useful.
  10. I like the "task manager"

What Google didn't do right

  1. The multi-process approach is a turn off.  It means they have to have a JavaScript engine in memory for each process, and that things like settings exist in each process.  Most of all, processes are EXPENSIVE!  Very much overkill!  Yes, I know that this is how IE does it, and I've never liked it!
  2. The browser is too simple and has very little customization out of the box.
  3. It uses a *lot* of system resources! Especially Memory!
  4. I do not like that it uses a lot of the same code as Safari!  SAFARI NEEDS TO DIE!  IT SUCKS!!!  HOW MANY YEARS DO WE HAVE TO WAIT FOR HTTP1.1 PROGRESSIVE DOWNLOADING?  This is a big deal in my opinion.
  5. It has a bookmarking system that claims to be different but sure looks exactly like IE7 bookmarks to me!
  6. It was using a lot of my CPU!  Something like 60% of my CPU was going toward chrome!  I may have had a malfunctioning plug-in but even when I closed the tabs that looked like the culprit it was still using 11-17% of my CPU!
  7. It didn't seem to start up any faster than IE8 or FF3.
  8. Google didn't release anything until now.  So far as I know there wasn't even a beta version for users to try.  I looked for some plug-ins but couldn't find any.  A little heads up to the development community could have had many plug-ins available by the time the browser launched.
  9. I know this is a small thing, but I hate when the browser "styles" text boxes for me!  What if I wanted to implement my own highlighting?  What if I didn't want it to highlight?
  10. It would lie to me about processor usage!  I had a process that was using 45% of my CPU, yet none of Chrome's processes claimed to be taking that much.  This was a Gmail tab, ironically!
  11. Silverlight didn't work.  It tried, but never really loaded.

I've never been comfortable with Google's privacy policy.  The fact that it reads my G-Mail and suggests products much like the topics inside rubs me the wrong way.  The fact that they would like to track everything you do including every web page you visit SCARES ME!

Anyway, here are some JavaScript comparisons.  I only have IE8 (which isn't going to be ready for such tests) so those tests aren't very relevant.

Google vs IE 8 (in IE 7 Emulation Mode)

TEST                   COMPARISON            FROM                 TO             DETAILS

=============================================================================

** TOTAL **: 4.93x as fast 18324.4ms +/- 1.9% 3716.2ms +/- 3.5% significant

=============================================================================

3d: 6.99x as fast 1816.8ms +/- 4.1% 260.0ms +/- 11.7% significant
cube: 10.5x as fast 645.2ms +/- 5.8% 61.6ms +/- 8.8% significant
morph: 3.96x as fast 481.8ms +/- 11.0% 121.6ms +/- 21.7% significant
raytrace: 8.98x as fast 689.8ms +/- 3.4% 76.8ms +/- 18.2% significant

access: 19.5x as fast 3398.4ms +/- 1.3% 174.6ms +/- 11.1% significant
binary-trees: 47.3x as fast 482.4ms +/- 5.7% 10.2ms +/- 5.5% significant
fannkuch: 18.2x as fast 1199.0ms +/- 1.4% 65.8ms +/- 29.0% significant
nbody: 21.6x as fast 1322.4ms +/- 1.5% 61.2ms +/- 14.0% significant
nsieve: 10.6x as fast 394.6ms +/- 8.9% 37.4ms +/- 17.0% significant

bitops: 18.4x as fast 2197.0ms +/- 3.1% 119.2ms +/- 11.7% significant
3bit-bits-in-byte: 38.1x as fast 327.6ms +/- 7.1% 8.6ms +/- 7.9% significant
bits-in-byte: 24.5x as fast 485.2ms +/- 3.2% 19.8ms +/- 19.6% significant
bitwise-and: 25.4x as fast 858.2ms +/- 2.8% 33.8ms +/- 9.5% significant
nsieve-bits: 9.23x as fast 526.0ms +/- 7.3% 57.0ms +/- 19.9% significant

controlflow: 77.7x as fast 435.0ms +/- 1.8% 5.6ms +/- 12.2% significant
recursive: 77.7x as fast 435.0ms +/- 1.8% 5.6ms +/- 12.2% significant

crypto: 10.8x as fast 1176.4ms +/- 3.6% 108.6ms +/- 12.5% significant
aes: 12.2x as fast 518.8ms +/- 9.1% 42.6ms +/- 11.4% significant
md5: 8.04x as fast 318.4ms +/- 6.2% 39.6ms +/- 33.3% significant
sha1: 12.8x as fast 339.2ms +/- 7.3% 26.4ms +/- 9.8% significant

date: 1.87x as fast 1739.4ms +/- 5.1% 928.4ms +/- 6.6% significant
format-tofte: 1.57x as fast 733.8ms +/- 6.9% 468.6ms +/- 6.0% significant
format-xparb: 2.19x as fast 1005.6ms +/- 8.6% 459.8ms +/- 9.9% significant

math: 6.73x as fast 1607.6ms +/- 4.9% 239.0ms +/- 15.2% significant
cordic: 4.79x as fast 676.4ms +/- 5.9% 141.2ms +/- 27.4% significant
partial-sums: 5.98x as fast 443.4ms +/- 4.3% 74.2ms +/- 18.6% significant
spectral-norm: 20.7x as fast 487.8ms +/- 12.2% 23.6ms +/- 19.9% significant

regexp: *1.07x as slow* 719.6ms +/- 3.6% 767.2ms +/- 3.3% significant
dna: *1.07x as slow* 719.6ms +/- 3.6% 767.2ms +/- 3.3% significant

string: 4.70x as fast 5234.2ms +/- 2.4% 1113.6ms +/- 5.4% significant
base64: 15.8x as fast 2212.6ms +/- 4.4% 139.6ms +/- 20.7% significant
fasta: 9.57x as fast 1117.2ms +/- 2.8% 116.8ms +/- 7.5% significant
tagcloud: 1.86x as fast 597.0ms +/- 3.4% 321.4ms +/- 8.5% significant
unpack-code: 1.82x as fast 710.8ms +/- 3.8% 391.2ms +/- 3.3% significant
validate-input: 4.13x as fast 596.6ms +/- 1.8% 144.6ms +/- 10.4% significant

Google vs Firefox 3

TEST                   COMPARISON            FROM                 TO             DETAILS

=============================================================================

** TOTAL **: 1.65x as fast 6138.2ms +/- 27.7% 3716.2ms +/- 3.5% significant

=============================================================================

3d: 3.31x as fast 860.8ms +/- 66.9% 260.0ms +/- 11.7% significant
cube: - 422.8ms +/- 114.5% 61.6ms +/- 8.8%
morph: 1.68x as fast 204.6ms +/- 1.6% 121.6ms +/- 21.7% significant
raytrace: 3.04x as fast 233.4ms +/- 40.0% 76.8ms +/- 18.2% significant

access: 6.01x as fast 1049.8ms +/- 31.9% 174.6ms +/- 11.1% significant
binary-trees: - 185.2ms +/- 168.8% 10.2ms +/- 5.5%
fannkuch: 7.25x as fast 477.2ms +/- 8.9% 65.8ms +/- 29.0% significant
nbody: 3.98x as fast 243.4ms +/- 11.2% 61.2ms +/- 14.0% significant
nsieve: 3.85x as fast 144.0ms +/- 10.4% 37.4ms +/- 17.0% significant

bitops: 5.52x as fast 657.8ms +/- 2.2% 119.2ms +/- 11.7% significant
3bit-bits-in-byte: 13.2x as fast 113.4ms +/- 7.2% 8.6ms +/- 7.9% significant
bits-in-byte: 8.76x as fast 173.4ms +/- 6.4% 19.8ms +/- 19.6% significant
bitwise-and: 4.44x as fast 150.0ms +/- 12.0% 33.8ms +/- 9.5% significant
nsieve-bits: 3.88x as fast 221.0ms +/- 2.5% 57.0ms +/- 19.9% significant

controlflow: 12.8x as fast 71.8ms +/- 5.1% 5.6ms +/- 12.2% significant
recursive: 12.8x as fast 71.8ms +/- 5.1% 5.6ms +/- 12.2% significant

crypto: 3.29x as fast 357.0ms +/- 6.3% 108.6ms +/- 12.5% significant
aes: 3.37x as fast 143.6ms +/- 11.4% 42.6ms +/- 11.4% significant
md5: 2.73x as fast 108.2ms +/- 9.9% 39.6ms +/- 33.3% significant
sha1: 3.98x as fast 105.2ms +/- 3.1% 26.4ms +/- 9.8% significant

date: *1.73x as slow* 537.8ms +/- 43.8% 928.4ms +/- 6.6% significant
format-tofte: ?? 359.8ms +/- 56.1% 468.6ms +/- 6.0% not conclusive: might be *1.30x as slow*
format-xparb: *2.58x as slow* 178.0ms +/- 19.3% 459.8ms +/- 9.9% significant

math: 2.59x as fast 619.0ms +/- 12.5% 239.0ms +/- 15.2% significant
cordic: 1.93x as fast 272.6ms +/- 6.1% 141.2ms +/- 27.4% significant
partial-sums: 3.14x as fast 233.0ms +/- 35.7% 74.2ms +/- 18.6% significant
spectral-norm: 4.81x as fast 113.4ms +/- 9.1% 23.6ms +/- 19.9% significant

regexp: *1.68x as slow* 457.0ms +/- 7.7% 767.2ms +/- 3.3% significant
dna: *1.68x as slow* 457.0ms +/- 7.7% 767.2ms +/- 3.3% significant

string: - 1527.2ms +/- 37.5% 1113.6ms +/- 5.4%
base64: - 145.0ms +/- 12.2% 139.6ms +/- 20.7%
fasta: 3.01x as fast 351.2ms +/- 49.9% 116.8ms +/- 7.5% significant
tagcloud: ?? 311.8ms +/- 71.6% 321.4ms +/- 8.5% not conclusive: might be *1.03x as slow*
unpack-code: 1.28x as fast 501.0ms +/- 14.4% 391.2ms +/- 3.3% significant
validate-input: - 218.2ms +/- 54.4% 144.6ms +/- 10.4%

 

Google vs IE 8

TEST                   COMPARISON            FROM                 TO             DETAILS

=============================================================================

** TOTAL **: 4.70x as fast 17474.2ms +/- 1.6% 3716.2ms +/- 3.5% significant

=============================================================================

3d: 6.87x as fast 1786.8ms +/- 1.8% 260.0ms +/- 11.7% significant
cube: 10.1x as fast 620.0ms +/- 5.4% 61.6ms +/- 8.8% significant
morph: 3.86x as fast 469.6ms +/- 6.5% 121.6ms +/- 21.7% significant
raytrace: 9.08x as fast 697.2ms +/- 4.1% 76.8ms +/- 18.2% significant

access: 16.3x as fast 2847.8ms +/- 1.7% 174.6ms +/- 11.1% significant
binary-trees: 49.4x as fast 504.2ms +/- 8.6% 10.2ms +/- 5.5% significant
fannkuch: 18.5x as fast 1219.8ms +/- 6.5% 65.8ms +/- 29.0% significant
nbody: 11.6x as fast 712.0ms +/- 6.8% 61.2ms +/- 14.0% significant
nsieve: 11.0x as fast 411.8ms +/- 14.2% 37.4ms +/- 17.0% significant

bitops: 19.0x as fast 2268.8ms +/- 6.8% 119.2ms +/- 11.7% significant
3bit-bits-in-byte: 37.9x as fast 325.6ms +/- 11.3% 8.6ms +/- 7.9% significant
bits-in-byte: 24.9x as fast 493.4ms +/- 8.0% 19.8ms +/- 19.6% significant
bitwise-and: 27.0x as fast 913.8ms +/- 13.5% 33.8ms +/- 9.5% significant
nsieve-bits: 9.40x as fast 536.0ms +/- 4.3% 57.0ms +/- 19.9% significant

controlflow: 78.6x as fast 440.4ms +/- 7.2% 5.6ms +/- 12.2% significant
recursive: 78.6x as fast 440.4ms +/- 7.2% 5.6ms +/- 12.2% significant

crypto: 10.8x as fast 1168.0ms +/- 5.7% 108.6ms +/- 12.5% significant
aes: 12.4x as fast 528.4ms +/- 6.3% 42.6ms +/- 11.4% significant
md5: 7.96x as fast 315.4ms +/- 8.8% 39.6ms +/- 33.3% significant
sha1: 12.3x as fast 324.2ms +/- 4.8% 26.4ms +/- 9.8% significant

date: 1.75x as fast 1620.4ms +/- 1.9% 928.4ms +/- 6.6% significant
format-tofte: 1.53x as fast 716.2ms +/- 3.1% 468.6ms +/- 6.0% significant
format-xparb: 1.97x as fast 904.2ms +/- 3.8% 459.8ms +/- 9.9% significant

math: 6.38x as fast 1524.8ms +/- 2.9% 239.0ms +/- 15.2% significant
cordic: 4.68x as fast 661.0ms +/- 2.9% 141.2ms +/- 27.4% significant
partial-sums: 5.74x as fast 426.0ms +/- 5.7% 74.2ms +/- 18.6% significant
spectral-norm: 18.6x as fast 437.8ms +/- 2.8% 23.6ms +/- 19.9% significant

regexp: *1.06x as slow* 724.8ms +/- 3.0% 767.2ms +/- 3.3% significant
dna: *1.06x as slow* 724.8ms +/- 3.0% 767.2ms +/- 3.3% significant

string: 4.57x as fast 5092.4ms +/- 2.0% 1113.6ms +/- 5.4% significant
base64: 16.0x as fast 2232.8ms +/- 4.5% 139.6ms +/- 20.7% significant
fasta: 8.52x as fast 994.8ms +/- 3.1% 116.8ms +/- 7.5% significant
tagcloud: 1.80x as fast 579.0ms +/- 4.3% 321.4ms +/- 8.5% significant
unpack-code: 1.78x as fast 698.2ms +/- 3.5% 391.2ms +/- 3.3% significant
validate-input: 4.06x as fast 587.6ms +/- 6.2% 144.6ms +/- 10.4% significant

You can see that JavaScript performance is the major advantage of Chrome.  They claimed in their comic intro that using multiple processes would result in less memory usage, but I didn't think so -- that wouldn't make any sense.  Next is a table from Google's own memory tool.  I opened all of the same websites in Chrome and Firefox to see what the memory footprint really looked like.  Keep in mind that the Chrome base install is a very basic browser!  My Firefox has a TON of useless plug-ins and themes installed and it still beat Chrome quite handily!

Memory (Google vs Firefox 3*)

 

Summary

                      Memory                           Virtual memory
Browser               Private     Shared     Total      Private     Mapped
Chrome 0.2.149.27     154,844k    5,816k     160,660k   287,304k    107,336k
Firefox 3.0.1         132,572k    11,552k    144,124k   129,752k    11,844k



Processes

                                                           Memory                           Virtual memory
PID    Name                                                Private    Shared     Total      Private    Mapped
7172   Browser                                             37,564k    21,116k    58,680k    43,728k    26,796k
832    Tab 2: iGoogle / Gmail - Inbox (9) - nzaugg@gmail.com   27,616k    2,660k     30,276k    44,416k    9,816k
4456   Tab 3: CUEgle 3                                     8,472k     2,304k     10,776k    32,484k    9,816k
3724   Plug-in: Shockwave Flash                            56,276k    10,032k    66,308k    94,536k    11,828k
7972   Tab 9: MSN Video                                    10,872k    3,376k     14,248k    25,140k    9,816k
4348   Tab 10: Untitled                                    7,736k     7,380k     15,116k    10,396k    9,816k
5552   Tab 14: Understanding User-Agent Strings            2,668k     1,736k     4,404k     12,256k    9,816k
2420   Tab 17: Windows Live Hotmail                        2,304k     1,624k     3,928k     12,540k    9,816k
4204   Tab 20: SunSpider JavaScript Benchmark Results      1,336k     2,120k     3,456k     11,808k    9,816k
5776   Tab 25 (diagnostics): About Memory                  8,908k     3,428k     12,336k    9,644k     9,816k
Σ                                                          163,752k              219,528k   296,948k   117,152k

I haven't decided whether I will use this browser on a daily basis.  Probably not -- no compelling reason to switch, but I can see that some will really like this browser.  Perhaps once I get used to the idea of Google making a browser and have IE tick me off once more I'd be in a different place, but for now I kind of don't want a Google Browser.


I don't normally do product reviews on my blog, but recently I have come across some really poor electronics and have had such a bad experience that I want to warn any potential buyer -- these products are beyond poor, they are unacceptable!  So I have decided that, at least for the time being, I would share my experiences online.

There are all kinds of new electronics out there to buy: flat screen TVs, iPods, iPhones, HD Radios, BluRay, etc.  However, I personally feel that there are some companies whose business model is to produce junk and sell it as these new-style electronics.  In the case of both of these products it is obvious that they rushed to market in a way that makes them completely worthless!  This is reminiscent of the late 90's, when computer manufacturers were pushing out components before they were ready, resulting in very unstable PCs!

Another thing to note is that I am a little more savvy than the average consumer.  Not to tout my credentials as a "product test guy" too much, but I used to work for a service company called Service West, fixing electronics.  When you see the guts of some electronics you really see the difference between brands!  You could (at least then) really tell the difference between a Sony and a JVC!  The Sony was elegant and beautiful inside, and I never saw many of them.  Some of the JVCs I worked on required de-soldering wires just to get the unit open to perform a simple mechanical adjustment, and they were always in there because the user tried to do two things at once (like change disks and press play in rapid succession).  In short, you get what you pay for.  Unlike computers, where you buy a Dell for the name, or cereal, where you pay a lot more for a little improvement in quality, electronics are a little different.  If you can make a cheaper brand work for you then that's great, but it is not going to be as good quality as a better brand.

Westinghouse 32H570D Flat Panel TV

When we first purchased this TV we loved it!  The best feature was the DVD player: it is built in such a way that you can pop a DVD into the front of the TV (front loading) and it will turn the TV on and start playing the DVD.  How they were able to do a front-loading DVD slot and keep the TV so low profile I'll never know.  I bought this from Target back in April 2008 and returned it exactly 90 days later.  I was hoping to find something else to replace it with, but we're going to be doing a lot more homework before trying that again.

Westinghouse TV

The Pro's:

  • Front loading DVD slot!
  • The price
  • Very low profile
  • Great picture
  • Easy to play DVD's

 

The Con's:

  • The TV would hang every once in a while.  Not an innocent "the picture is frozen" but completely LOCKED UP!  You would have to get up, unplug it, and plug it back in!  You couldn't even power it off with the remote!  Yes, there was adequate ventilation and the TV did not appear to be hot.
  • The audio inputs did not work!  I spent literally DAYS trying to get audio from my computer (where we would watch online content) to the TV!  Getting the audio from the laptop speakers was not desirable! 
  • The HD tuner was clumsily laid out.  It could literally take a full 20 seconds just to flip past channel 7!
  • The remote was nothing special!  It had a poor layout as well and did not have buttons for things you would do often.
  • The menu did not have a lot in the way of customization & function.  This may be part of the "easy to use" but I was unable to select an audio source.

 

I would not recommend this unit to a friend!  It's possible they'll fix the "bugs" and have a good product in the future, but from other reviews their technical support was very poor and sending the TV in for repair is very expensive.  I cannot recommend this brand to a friend either.

Jensen HD5112

This radio supposedly had it all!  Literally!  HD Radio, MP3/WMA playback on CD/CD-R/CD-RW, SD memory, USB, iPod link, Aux input, pre-outs, satellite ready, EVERYTHING!  The problem? None of it worked! (more below)  I really wanted this product to work for me -- especially since these puppies aren't easy to install!  Plus it was pretty inexpensive and I knew that anything else wouldn't have as many features.  I thought that if I could just find one brand of media compatible with the SD reader, the USB port, or the CD-ROM then I would be fine.  I couldn't find any combination that worked!  I purchased this from WalMart in August 2008 and returned it days later.

HD5112

The Pros:

  • Very Featureful (on the surface)
  • Price is right!
  • I really liked the HD radio.  It seems like the Satellite radio companies are fighting very hard to limit the number of units with this feature so it was nice to find one.  HD Radio is pretty cool and one of the main reasons for buying this unit.
  • I didn't get to try the iPod link (as I do not yet have an iPod and would probably opt for a Zune)
  • Once I got the correct dash kit (don't claim to support *any* Pontiac when you don't, stupid first dash kit!), installation was pretty easy.  The wires were ISO compliant so I just had to match colors with my GM/Pontiac pigtail.
  • The unit seemed to have pretty good power.  It went much louder than my stock radio.

The Cons:

  • I could not install it without removing the "warranty void if removed" sticker!  It was right over the spot the sleeve slides across.  Slide the sleeve over that spot more than twice (which is usually required) and that sticker is gone, man!
  • The installation instructions couldn't be accessed.  They are provided online through a 3rd party: you enter your serial number and it lets you download installation instructions for your vehicle.  I suspect I was the Nth person to buy this particular unit, so it wouldn't let me download the instructions!
  • I tried 6 different SD cards in this unit and could not get any of them to read.  I tried formatting them every way possible and even tried a number of different MP3 formats in case it was not capable of playing VBR.  I tried renaming the files to have as few characters as possible.  I never got this feature working!
  • I tried 15 different CD brands! Nothing would play.  Though in its defense (if you can call it that), this unit must have been defective, because CD playback didn't even work with a regular music CD (CDDA) straight from the store.  That's right, I couldn't play my Blink-182 album!
  • I tried all 6 thumb drives I had and it wouldn't read any of them, so I bought a new one, and it wouldn't read that either!  Reading other reviews, these last three problems are common.  One guy could only get one thing to work: an SD card reader in the USB slot.  He described it as "an abomination sticking out of my dash!" and said a sudden stop would snap his card reader & radio like a twig!
  • The clock looked weird and it would never display the information I wanted. 
  • The aux port worked but only if the faceplate was not open (expected but still annoying)
  • The faceplate wouldn't detach or attach without a fight! 
  • The snaps that hold the radio to the sleeve weren't great.  After I installed it I couldn't get the right side to snap into place.
  • The product manual was very poor!  The website had few answers.

I would not recommend this unit to a friend.  Based on the sheer number of poorly implemented features, I could not buy this brand of electronics again.

I have bought an XOVision DVD player for my car to replace this CD player.  It should get here in the next week, so look for a review of that.


Managed vs Unmanaged I am asked all of the time about the performance of managed vs. unmanaged code: "how much slower is it?"  This is one of the questions I am going to attempt to answer through experimentation.  In this post I'll talk about some of the theory and make some predictions (I haven't written any code yet) and we'll see how closely the theory matches the experiments. 

Managed Code Defined

Managed code is a somewhat misunderstood concept.  Managed code is simply code that targets the CLR runtime.  The CLR runtime is a bit more complex to explain, and people often see the term "virtual machine" and misunderstand the meaning of that statement.  The flow of execution for a managed application is a multi-step process.  First, the code is compiled to MSIL (Microsoft Intermediate Language).  This could be any code: C#, VB.NET, etc.  All managed code compiles initially to MSIL.  MSIL is a low-level language, like assembly.  It is often described as "Object Oriented Assembly"; rather than using registers you act upon memory in a highly optimized stack called the Evaluation Stack.  The beauty of IL is that it is low-level enough to make the JIT (Just In Time) compile to native code very fast, while per-CPU optimizations can still be applied in this final compilation step.  CPU optimizations like SSE2, SSE3, etc. that can greatly speed up code execution are "given" to users of managed code for free.  This could also include other optimizations, such as GPU optimizations. 

When you run a piece of managed code, the first bit of execution is normally spent converting the MSIL to native code specifically optimized for your platform.  This is known as JIT compilation; it usually happens so quickly that most people don't even realize it happens every time they launch their application.  Of course, if your code is large, complex, and has many dependencies, you may be able to shave valuable milliseconds, or in some cases even seconds, by using ngen on your code.  Ngen is a tool that ships with the .NET Framework that does the same IL-to-native compilation the JIT compiler does, but it also keeps a copy of the native image that is generated.  This way a managed application is loaded much the same way a native application is loaded -- without JIT'ing. 

 

Code Execution

Figure 1 :: Managed Code Execution Lifecycle

So when you hear that .NET code runs in a virtual machine, what that refers to is that you do not have to program to the specifics of a CPU; in that way the code is virtualized.  There are also services provided to you such as memory management, thread management, exception handling, garbage collection, and security.  However, in my opinion the term virtual machine is a pretty poor fit.  These services all run side-by-side in the same process, and even in the same AppDomain.  Therefore these services are better thought of as a standard code template that gets compiled into every application rather than as anything that is virtualized. 

On the Left - Managed Code

Managed code has some advantages!  Not the least of which is productivity.  Let's review some areas where managed code actually has an advantage! 

CPU Optimizations

As mentioned earlier, managed code is JIT compiled to target the specific platform on which it is running.  CPU optimizations can make a big difference in performance!  While native code can also take advantage of CPU optimizations, you have to make a trade-off: either ship your code without such optimizations enabled, for fear that someone without them will try to run your code, OR build code and logic to work with or without each optimization you plan to target. 

Little is known, however, about the optimizations performed during JIT compilation.  Also, most modern CPUs are likely to share a base set of the most common optimizations, so this argument gets somewhat weaker as there are fewer differences between chips. 

Managed Memory

Managed code has an awesome memory manager!  Suppose we have an array of integers, an int[].  This is a data structure of contiguous integers.  One advantage of such a structure is that it is very quick to access.  You can use syntax like MyInts[5] for access that is nearly as quick as referencing a local variable.  Internally, pointer arithmetic is taking place: the element address is the base address of MyInts plus sizeof(int) * index, which is Θ(1) and makes this the fastest method of accessing dynamically allocated memory.  The down-side to this structure is that adding elements is extremely expensive!  Unlike dynamically-linked structures, adding an element to this array usually requires allocating a completely new, bigger array and copying all of the elements from the old array to the new one.  This has complexity Θ(n), which is extremely expensive for a single add operation! 
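A minimal C# sketch of both costs (my own illustration, not from the original post): indexing is constant-time, while growing the array forces a full copy of the existing elements.

```csharp
using System;

class ArrayGrowth
{
    static void Main()
    {
        int[] myInts = { 10, 20, 30, 40, 50, 60 };

        // Θ(1): element address = base address + sizeof(int) * index
        Console.WriteLine(myInts[5]);       // prints 60

        // Θ(n): Array.Resize allocates a new, larger array and copies
        // every existing element into it before swapping the reference
        Array.Resize(ref myInts, 12);
        Console.WriteLine(myInts.Length);   // prints 12
        Console.WriteLine(myInts[5]);       // prints 60 -- preserved by the copy
    }
}
```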

This is where managed code has another advantage!  Because there are no raw pointers, only system-managed references, it is possible in principle for the memory manager to simply expand the array construct; anything in the way can be safely moved without harming the execution of the application.  The worst case for this operation is then Θ(1): a constant cost regardless of the size of the current structure.  So even in a worst case scenario we have the best possible performance.  This is good news for immutable data structures! 

Another memory-related advantage managed code has over native code is that it takes advantage of the memory vs. time tradeoff.  Basically, all .NET applications consume more OS memory than they strictly need.  A fairly complex algorithm at the heart of the memory manager tries to minimize calls to the OS for more memory and ensures that such calls gain larger blocks of memory.  Calls to the OS for more memory are very expensive, whereas calls to the managed memory manager are extremely fast.  Therefore, in theory, we should be able to dynamically create objects substantially faster in a managed language than in a native one. 

Some of the other advantages of managed memory are the virtual non-existence of memory leaks and automatic memory defragmentation.  And while the last item on this list is somewhat controversial, I list it here as an advantage: the garbage collector.  The garbage collector is the nebulous cloud that hangs over most C/C++ developers looking to write managed code.  They have been taught to manage memory themselves and do not like the idea of giving that control up to a complicated set of algorithms.  They all ask the same thing: what happens if garbage collection is triggered at an inopportune time?  The answer to that question is a little complicated, but basically garbage collection is very fast, usually pausing execution for less than a few milliseconds.  Also, because the GC operates on many objects at once, it can be more efficient and less error-prone than self-styled memory management.  Basically, the GC is not going to cause you any grief unless you do something careless like leave your streams open (the only thing I can think of in managed code that will cause a 'leak'), and it will mostly be a great burden off of your shoulders!
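For example, the stream 'leak' mentioned above is avoided entirely with a using block, which guarantees Dispose() runs and releases the OS handle immediately rather than whenever finalization happens to occur (a minimal sketch; the temp-file name is illustrative):

```csharp
using System;
using System.IO;

class StreamDisposal
{
    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "gc-demo.txt");

        // Dispose() runs when the block exits, even on exception,
        // so the file handle never lingers waiting for the GC.
        using (StreamWriter writer = new StreamWriter(path))
        {
            writer.WriteLine("no leaked handles here");
        }

        using (StreamReader reader = new StreamReader(path))
        {
            Console.WriteLine(reader.ReadLine());   // prints: no leaked handles here
        }

        File.Delete(path);
    }
}
```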

Security

One of the major reasons for the push for managed code was security!  Managed code can make certain guarantees about its vulnerability to attack.  Even if an exploit is found (and very few, if any, have been), the CLR can usually be quickly patched to provide protection to ALL .NET assemblies.  I believe this is one of the reasons Microsoft prefers you to allow the JIT to happen on each execution.  Part of these guarantees comes from type safety and bounds checking, which close off the openings used by very common overflow attacks.

If you choose to sign your assemblies you are no longer vulnerable to "dll replacement" attacks, as the CLR will verify the signature of the called assembly.  Additionally, with the GAC (Global Assembly Cache), different versions of the same assembly (dll) can live harmoniously side-by-side, thus eliminating the infamous "dll hell".

On the Right - Unmanaged Code

Native code enjoys a legacy and reputation of performance.  This style of programming is considered "metal to metal", indicating you have complete control over the hardware of the system.  These applications are lean and efficient and have no "magic" services, which is going to make this section very lean!  They use, in general, very little memory compared to their managed counterparts and have the ability to use memory in ways that managed code doesn't like or won't allow. 

Native code allows "unsafe" type casting, which can result in better performance, especially in cases where .NET would employ costly boxing / unboxing techniques.  Because nothing happens without you making it happen, you should end up with faster code.  In smaller applications the managed memory manager could be overkill.  You also have much more control over the lifetime of an object: rather than waiting for a GC to collect the memory at some unknown time, you explicitly delete objects and the memory is reclaimed immediately.

There isn't much to say about native code, and I think that's why people are more comfortable with its performance. 

 

The Matchup

I hope to write the following tests in C# for managed code and C++ and/or Delphi for unmanaged code.  I will try to post the code for each "round" and am very open to criticism on fairness.  In some respects this is a little like comparing apples to oranges, but that doesn't mean they can't compete!

Rounds:

  1. Round 1 : Theoretical (this round)
  2. Round 2 : Computational (bit manipulation, looping, adding, subtracting, searching, sorting, etc.)
  3. Round 3 : Dynamic Memory (object creation, array resizing, memory allocation, memory de-allocation, etc.)
  4. Round 4 : Windows Forms & Messages (dynamic creation of windows, buttons, etc.)
  5. Round 5 : IO (File System Access, Network Streams, etc.)

The Prediction

Based on the theory, I would say that after a managed assembly has been loaded it should execute faster than its native counterpart.  I base this mostly on the memory management provided to managed code, though I would be a little surprised if managed code were to win a head-to-head challenge.  I believe the end result will be that native code performs faster than managed code, but by a statistically insignificant amount. 

I officially call this round: Winner - Managed Code (by decision).

Round By Round Predictions:

  1. Round 1 : Won by Decision: Managed Code
  2. Round 2 : Win by Native Code
  3. Round 3 : Win by Managed Code
  4. Round 4 : Tie (1 point each)
  5. Round 5 : Win by Native Code

 


Pat is an excellent presenter and a friend.  I am also excited about SQL Server and pizza, so this meeting was a winning combination for me!  It was also nice to see some familiar faces; we had a small but decent turnout. 

Environment

The meeting started late (as expected with almost anything I attend these days) but was forgivable due to unexpected traffic.  I personally arrived 5 minutes late and was still there in enough time to help set up.  The pizza came pretty late in the meeting, which was a little annoying because I usually eat well before 7:30pm.  Also, the drinks were all caffeinated (which is usually fine for programmers) but I like the non-caffeinated sort.  I also liked how those rooms were set up in the past with tables and chairs rather than just chairs.  I usually like to take notes on my laptop but find the literal term "laptop" rather unworkable. 

Microsoft SQL Server 2008

The first question almost universally asked is "when is it going to come out?"  Although none of us had any insight into Microsoft, it was generally agreed that we had heard 4th quarter 2008, perhaps even in time for Microsoft's PDC event.  Also, RC0 is currently available for download.

New features in SQL 2008 include:

  • New data types for Date and Time with enhanced precision
  • Compressed Backups (YAY)
  • New MERGE keyword for simultaneous Insert Update & Delete
  • New Table Variables allow you to pass arrays of objects into procs reducing round-trips
  • New Code Insight for the SSMS studio
  • New Transparent Data Encryption provides more control over encryption in the database
  • New Change Data Capture (CDC) functionality for audit logging
  • New Data compression for compressed tables
  • New Policy-based management to enforce the stupid naming standards people sometimes put into a database (ughh!)
  • New Hierarchy ID field for Parent / Child / Grandchild, etc. Data
  • New FileStream Data (remains of WinFS) allows large amounts of binary data to be stored with (but not really in) the database
  • New Large User-Defined Types overcomes the 8KB limit for UDT's
  • New Spatial Data Types to store lat/long style data
  • New Grouping Sets allow multiple GROUP BY statements
  • Improvements to SQL Server Reporting Services

 

We did not talk about every point in that list, spending most of our time split between the new data types and their capabilities and the new MERGE statement.  We also had a fun discussion about http://www.sqldumbass.com/ and all of the 'unfortunate' database architectures we have encountered.

Summary

It was a really fun meeting.  It was a small crowd with an open atmosphere where we felt like we could ask questions.  Pat is more than knowledgeable about such questions and did a great job answering them.  Of course, some of his answers left us scratching our heads, as we can't follow a real DBA that deeply! 

 

Links

NUNUG Site: http://groups.google.com/group/nunug

Content Download: 2008DevPresentToNorthernNET.rar

Pat Wright Blog: http://insanesql.blogspot.com/

Have you ever noticed that with IE7 you can only download 2 files from any given domain at any given time?  It's actually slightly worse than that: you may only have 2 connections total, including the connection used to request web pages.  That means if you are downloading two things from a single domain you are unable to browse!  Much to my angst, this limit actually follows the HTTP/1.1 specification! 

 

Basically it's a quick registry fix.  If you are comfortable making changes to your registry this will be really quick!  (The dword value 00000032 is hexadecimal for 50, so these entries raise the limit to 50 connections per server.)

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:00000032
"MaxConnectionsPer1_0Server"=dword:00000032

 

You can also download the .reg file here

Environment

I have to admit, I was very excited when I saw this was going to be a topic for the users group meeting.  I am very interested in hardware!  I remember building my own connector to the computer to communicate with my calculator (TI-86).  I always wondered how it worked, so when I saw the topic I was very excited that I might get to learn some of this stuff.

It had been quite a while since I'd been to a users group meeting here, and I can never find where we meet.  I don't remember what floor or what room, and there are never any posters or signs.  The meeting started 15 minutes late because the presenter was caught in traffic!  Also, the wireless at Neumont is always locked down, and it really sucks not having Internet when we're group programming and need to know the format string for a DateTime.TryParseExact!  Pizza and drinks were plentiful and that is always nice!

The presenter started out apologizing for poor content, usually not a good sign!  However, he really did seem to know his stuff and came prepared to amaze us with his gobs of cool hardware.  He is kind of a quiet talker; even though I was near the back of the room, I should have been plenty close to hear, yet I had to really strain to make him out.  If he hadn't had code on the screen most of the time I probably wouldn't have gotten much out of it.

I was a little annoyed that we had to wade through the code creation process.  Although this is something I like to do in my presentations, I only apply it in cases where the topic I am presenting is new.  Parsing strings in C# is something we all do all of the time, and it was annoying to sit around for over an hour waiting for the code to be created.  I was really disappointed when that resulted in skipping Ethernet and wireless connectivity.  I didn't mind so much that the final application wasn't written; I didn't expect anything polished, as this is a users group meeting.  About two minutes into looking through the code he did have, I was satisfied and we could have moved on from there.

Presenter: Josh Perry - 6bit Inc.

http://www.6bit.com/

Sponsor: MindCenter

http://www.MindCenter.net

The Serial Port

  • RS (Recommended Standard)-232C
  • Defined in 1969
  • DTE (Data Terminal Equipment) - Client
  • DCE (Data Circuit-terminating Equipment) - Server

Physical

  • Computers wired as DTE
  • Devices (modems, etc..) wired as DCE (usually)
  • DE-9 most common connector
  • DB-25 (25-pin serial connector)

Pin-outs (DE-9)

  • RX - 2
  • TX - 3
  • GND - 5
  • RX on DTE goes to TX on DCE
  • TX on DTE goes to RX on DCE
  • NULL-Modem

Serial - Electrical

  • +12v to -12v swing; -12v = logic 1 (mark), +12v = logic 0 (space)
  • Single ended communication, common ground
  • Small and embedded systems need level converters and inverters to go from logic to serial levels
  • Oscilloscope trace of serial communication

Serial - Protocol

  • Time-based sampling
  • Baud rate is the frequency of bytes
  • Bits per byte on the wire = 8 data bits + 1 start bit + stop bits + parity
  • Most common is 8 bit bytes, no parity, and 1 stop bit
  • Baud varies a lot, but 9600 and 115200 are popular
  • With only RX, TX, and ground; flow control is none or XON-XOFF. None is most common, XON-XOFF causes problems with binary communications.

Bytes are sent LSB first; you can see this if you scope a serial connection while using HyperTerminal. 

We implemented the NMEA protocol, a standard maritime protocol first designed for boat instruments.  It is an ASCII protocol: each sentence is one line terminated with a CRLF, with the values comma delimited.
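Parsing such a sentence is straightforward; here is a minimal C# sketch of my own (the sample GGA sentence is illustrative, and checksum verification is omitted):

```csharp
using System;

class NmeaParse
{
    // Split an NMEA sentence into its comma-delimited fields,
    // dropping the leading '$' and any trailing *checksum.
    static string[] Fields(string sentence)
    {
        string body = sentence.TrimEnd('\r', '\n').TrimStart('$');
        int star = body.IndexOf('*');
        if (star >= 0)
            body = body.Substring(0, star);
        return body.Split(',');
    }

    static void Main()
    {
        string[] f = Fields("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M*47\r\n");
        Console.WriteLine(f[0]);   // sentence type: GPGGA
        Console.WriteLine(f[1]);   // UTC time field: 123519
    }
}
```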

Atmel chips are a good way to get started.

The .NET Micro Framework chips allow you to program microcontrollers using the .NET Framework.  

Links

Every company I consult with invariably has its own "security" assembly, and they all have a hard-coded encryption key with the IV, and the method to decrypt is right next to the method to encrypt.  This is what I call marginal protection.  Yes, it's encrypted and will probably get a security auditor off of your back, but don't be fooled into thinking that you are protected!  A similar thing is done with information in the database; I'll cover how to handle that in an upcoming post. 

Why aren't you protected?  The answer is actually quite simple.  If an attacker has access to download your web.config file (say, they brute-forced a password on the FTP server), then there is nothing stopping them from downloading your Security.dll, which is responsible for decrypting the password.  Once they have that library it's seconds, not minutes, before they have the password. 

One possible workaround is to encrypt configuration sections of your web.config file using DPAPI, as outlined in this MSDN How-to.  This is immune to the download attack because DPAPI uses encryption keyed to a machine or a user.  Even if someone were able to download your web.config they would effectively have no way to decrypt that information. 

What happens, though, if the attacker has the ability to upload files?  In theory, they may be able to read that configuration section in code, which will, of course, be decrypted before it is returned.  Ah, but they don't even know the name of the connection string (in the case of databases) because the entire section was encrypted.  However, they could guess it or get it from other code.  By the way, you really shouldn't deploy the .cs files to production anyway; use the "publish website" option with the setting that does not allow the site to be updated.  If you follow all of the standards pretty closely you're in good shape.  Another great idea is to use Integrated Authentication for database access -- that way there is no password to steal!
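For reference, an Integrated Authentication connection string contains no password at all; a sketch with illustrative server and database names:

```xml
<connectionStrings>
  <add name="MyDb"
       connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```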

The How-to outlines 3 basic steps, summarized below:

  1. Identify the configuration sections to be encrypted
    1. You may only encrypt the following:
      <appSettings>. This section contains custom application settings. 
      <connectionStrings>. This section contains connection strings. 
      <identity>. This section can contain impersonation credentials. 
      <sessionState>. This section contains the connection string for the out-of-process session state provider.
  2. Choose Machine or User store
    1. Use Machine store if this is a dedicated server with no other applications running on it or you want to be able to share this information with other applications running on this machine.
    2. Use User store if the above does not match your situation and in a scenario in which the user has limited access to the server.
  3. Encrypt your configuration file data
    1. To encrypt using Machine Store, run the following command from a .NET command prompt:
      aspnet_regiis.exe -pef "{ConfigSectionName}" {PhysicalDirectory} -prov "DataProtectionConfigurationProvider"
      OR
      aspnet_regiis.exe -pef "{ConfigSectionName}" -app "/{VirtualDirectory}" -prov "DataProtectionConfigurationProvider"
    2. To encrypt using User Store:
      Add the following section to your configuration file:
    3. <configProtectedData> 
          <providers> 
              <add useMachineProtection="false" keyEntropy="" 
                      name="MyUserDataProtectionConfigurationProvider" 
                      type="System.Configuration.DpapiProtectedConfigurationProvider, 
                      System.Configuration, Version=2.0.0.0, Culture=neutral, 
                      PublicKeyToken=b03f5f7f11d50a3a" /> 
          </providers> 
      </configProtectedData>

      Open a command prompt as the user you plan to encrypt the file with.  To do so, right-click the Command Prompt shortcut and choose Run As, or use the following command:

      Runas /profile /user:domain\user cmd
       
      Run the following command:
      Aspnet_regiis -pe "connectionStrings" -app "/{VirtualDirectory}" -prov "MyUserDataProtectionConfigurationProvider"

 

It really is that simple!  The great thing is that we don't have to do anything special in development to benefit from the encryption of the configuration sections. 

 

References:

http://msdn.microsoft.com/en-us/library/ms998280.aspx

Line Rider I am not a gamer!  I would get killed playing anything modern!  I was never very good at the games I did play (back in the 90's) like Marathon or StarCraft.  I did like games before that, but they were mostly DOS-based games.  I loved Cyberbox, a little game where you try to navigate your dot from one side of the screen to the other, pushing blocks along the way.  A couple of years ago I was introduced to Line Rider, a really simple game where you "draw" a track for our sledding hero to ride.  A simple premise, but a fun one!  You can spend literally hours without realizing it just designing the most awesome track ever! 

Line Rider has gone Silverlight!  It's been greatly enhanced with the new features of Silverlight and is pretty awesome.  I am hoping that Silverlight adoption grows and its ubiquity increases to match or exceed Flash.  I know of no place that keeps track of these kinds of statistics, but it will be interesting to see in a year how much of the market Silverlight has penetrated.  It seems that they should be able to make it available as an update via Windows Update like the .NET Frameworks, but reaching a growing OS X audience may be more difficult.  Hopefully there are plans in the works to get Silverlight onto that platform.

If I can figure out how to post a capture video from Line Rider, I'll post it here.

Nathan Zaugg


Have you ever needed to add a line to a config file like:

type="Config.RoleService, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"

These fully qualified type names are not easy to figure out.  The syntax is a bit confusing and the PublicKeyToken is hard to get a look at.  This has been an issue -- until now!

GetAssemblyName App

I created a quick little app that will load an assembly you select and show you the AQN (Assembly Qualified Name) for each class contained in the assembly.  Notice the cool glass look?  I'll post information about that in a later blog post.
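The core of such a tool is only a few lines of reflection.  Here is a minimal sketch (it lists the running assembly to stay self-contained; the real app would call Assembly.LoadFrom on the file the user picked):

```csharp
using System;
using System.Reflection;

class AqnLister
{
    static void Main()
    {
        // The real tool would use Assembly.LoadFrom(selectedPath);
        // the executing assembly keeps this sample self-contained.
        Assembly asm = Assembly.GetExecutingAssembly();
        Console.WriteLine(asm.FullName);

        // Print the Assembly Qualified Name of every type in the assembly.
        foreach (Type t in asm.GetTypes())
            Console.WriteLine(t.AssemblyQualifiedName);
    }
}
```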

Quick Instructions:

  1. Run the app
  2. Press the "Select Assembly" button
  3. Browse to the assembly you want to get the AQN for and press Open
  4. The window will fill up with the assembly full name and the AQN for each type found in the assembly
  5. Double Click the line to copy that class's AQN to the clipboard

 

Main Interface

Copy AQN to Clipboard

 

Downloads:

Naming conventions are like armpits.  Everyone has them and they all stink!  Well, at least that's the perspective of pretty much every developer and DBA alike.  I will present my own personal philosophy for naming conventions in databases and hopefully spawn some discussion in the process.

Basic Principles

Consistency

As annoying as certain standards are (such as putting tbl_ before everything) it is more annoying and more difficult when there are no conventions or mixed conventions.  Being able to reliably predict the schema once the basic relational structure is understood is key to productivity.  Therefore, even if you get stuck with standards you disagree with, so long as they are consistent they will be much better than the alternative.  Unless you get to make the decision, my guess is that there are going to be some conventions that you do not agree with.

Abbreviations

It is a good idea to abbreviate, when appropriate, in the naming of objects in your database.  It may be a good idea to keep a list of the abbreviations you plan to use as part of your data dictionary.  However, if there is not a good, clear abbreviation for an object, don't make one up.  When in doubt, spell it out!  Especially with SQL Server, where you don't have the pesky 30-character limit for tables and columns like Oracle.

Identities

Every table should have an identity column as its primary key!  Sometime in a future blog post I will explain why this is so critical, but suffice it to say that any table without a clustered index is considered by SQL Server to be a "heap".  If you are using something other than an identity column for the primary key you had better have a really compelling reason, because it can cause major performance problems.  THERE IS NO SUCH THING AS A NATURAL KEY AND THEY SHOULD NEVER BE USED IN PLACE OF AN IDENTITY! So always use a surrogate key approach, even with join tables.

Security

I believe that with a good data layer like Linq to SQL there is no need to relegate all database access to stored procedures.  While stored procedures do remove some of the surface area for vulnerabilities and bugs, robust solutions like Linq to SQL are very limited by this approach.  You should grant specific access to tables and procs by user.  A good approach can be found in another one of my blog posts.

Object Naming

Table Names

  • If you are running a database that preserves case (like SQL Server) tables should have no prefixes and should not contain underscores "_" unless it is a join table.  Table names should also be Pascal Cased.  If you are running a database that makes all tables upper case (like Oracle) then you have little choice but to use underscores everywhere.
  • Avoid pluralizing table names (User vs Users).  This is a good idea for two reasons: first, it can be confusing when naming keys — do we use UserID or UsersID?  Second, not all tables pluralize well (Addresses), so avoiding plural names keeps things consistent.  If you are using Linq to SQL, the designer will pluralize for you automatically.
  • Join tables should include the two or three tables that they join as part of the name, separated by underscores (ex: User_Address, User_Order). Even though it is a many-to-many relationship, see if you can find a principal table.  Users have orders; orders do not have users; therefore the User table comes first in the name.
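A hypothetical join table following these naming rules might look like this (note that, per the Identities section, even the join table gets a surrogate key):

```sql
-- Many-to-many join table between User (the principal table) and Address.
-- The underscore is allowed here because it is a join table.
CREATE TABLE User_Address
(
    ID        INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    UserID    INT NOT NULL,
    AddressID INT NOT NULL
)
```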

Column Names

  • Name the primary key Identity column the table name plus ID (ex: UserID), with the possible exception of join tables, in which case just name the Identity column ID.
  • Use Pascal casing (ex: EachWordStartsCapitalized)
  • Do not use the table name as part of the column name.  If this is a shipping table, don't name your column ShippingAddress; just name it Address.
  • Do not prefix column names with the type (ex: strUserName).  It makes the database much more difficult to work with.
  • Use the correct data types.  Always use nvarchar types (Unicode) rather than varchar types.  This avoids substantial complexity if you are ever required to store non-Latin-based data!  Trust me, you do not want to have to deal with code pages in the database!  Also, use date fields for dates, bit fields for booleans, etc.
  • Don't make every column nullable!  Think through what data is absolutely required.  If you want to hold "partially complete records" then I would suggest a different table or a different "staging" database.
  • Don't make a bit field nullable unless you have a great reason!
  • Try to include a TimeStamp column if you think you may have to worry about concurrency.
  • Don't prefix with anything.
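Putting these column rules together, a table might look something like the following sketch (the names and sizes are hypothetical, not a real schema):

```sql
-- Shipping table: Pascal-cased columns, no table-name or type prefixes,
-- NVARCHAR (Unicode) for text, NOT NULL where data is required,
-- a non-nullable BIT, and a timestamp column for concurrency checks.
CREATE TABLE Shipping
(
    ShippingID  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    UserID      INT           NOT NULL,
    Address     NVARCHAR(200) NOT NULL,
    ShippedDate DATETIME      NULL,
    IsExpedited BIT           NOT NULL DEFAULT (0),
    [TimeStamp] TIMESTAMP     NOT NULL
)
```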

Constraint & Index Names

  • Name your constraints and indexes.  With the exception of foreign key constraints, they are not automatically assigned meaningful names.
  • Don't use prefixes and make light use of underscores.
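For example, explicitly named constraints and an index might look like this (the prefix-free names below are just one hypothetical style consistent with the rules above):

```sql
-- Explicitly named primary key, foreign key, and index,
-- with no prefixes and light use of underscores.
ALTER TABLE [Order]
    ADD CONSTRAINT OrderPrimaryKey PRIMARY KEY (OrderID);

ALTER TABLE [Order]
    ADD CONSTRAINT OrderUserForeignKey
    FOREIGN KEY (UserID) REFERENCES [User](UserID);

CREATE INDEX OrderUserIndex ON [Order](UserID);
```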

Stored Procedure Names

  • Don't prefix your stored procedures!  People used to prefix them with "sp" because the system procedures that ship with the database use that convention.  But "sp" is presumed to stand for system procedure, so it makes no sense to use it for your own procs.  Seriously, prefixes are not very helpful in the database!
  • The first part of a proc's name should be the table it works upon (ex: User_Insert).  If the proc works on multiple tables, try to name it for the portion of the database it deals with.  For example, a proc used by the invoicing system could acceptably be named Invoicing_Update.
  • Don't generate procs for simple Insert, Update, Delete, and Select unless you have a policy in place for accessing data exclusively from procs.
  • Don't create any stored procedure you don't need or plan to use immediately.  At some point you will change the schema, and you won't update the procs you're not using.  Someone may eventually try to use such a proc later, only to find it broken.
  • The verb in the naming convention does not have to be limited to "Insert, Update, Delete, Select"; it should say what the proc does.  Just be careful that if another procedure does the same thing to a different table, the verbs are named the same.
  • You can add additional information to the proc name to help distinguish it from others.  (ex: User_Select_ByDate, User_Select_ByState)
  • Don't use a prefix for arguments (ex: @ArgUserID). In my experience they don't help at all and are quite annoying!
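As an illustration of these naming rules, here is a hypothetical stub (not a complete, production-ready proc):

```sql
-- Table name first, then the verb, then a qualifier;
-- no "sp" prefix on the proc and no @Arg prefix on parameters.
CREATE PROCEDURE User_Select_ByState
    @State NVARCHAR(2)
AS
BEGIN
    SELECT UserID, UserName, State
    FROM [User]
    WHERE State = @State
END
```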

Tips & Tricks

SQL Server 2008 has a policy manager that can help create and enforce policies like naming conventions!  Regardless of whether you use SQL 2008, be sure to keep a data dictionary for your database! The database is the heart and soul of your business processes and should be well documented!  There is nothing worse than an unclean database!

Nathan Zaugg

These past two weeks have been very exciting for me.  I have gotten to be involved in some R&D for one of the companies that I consult for.  I LOVE R&D!  There is always a better way to do things, and poking your head out of the sand every once in a while can be very beneficial!

Okay, so here is the story.  You want auditing so you can log the user responsible for each change.  It follows that you simply connect with that user's credentials, and now you have a great audit log!  The problem is that if you have thousands of users (or maybe even fewer) you are going to start to see a large number of connections on the server. [Image 1]  This is because each user gets their own connection pool that, even when going through a service, cannot be shared with any other user.  When the large number of connections starts to really slow down your database, you decide to create a generic user account for the service.  The problem is that now the audit log will only show the service account as the person responsible for each change! [Image 2]

no connection pooling
Image 1

 

Pooled Connections (Shared Login) 
Image 2

So you have two ways to fix this.  First, you can mandate that all changes to the data happen through stored procedures.  If we make sure that every stored procedure is passed the user responsible for the DML change, then we can write our own audit records.  The upside is that we take full advantage of connection pooling, and security is better using procs.  The downside is that this is labor-intensive: the change log probably cannot be driven by triggers, so we may have to come up with a complex and fallible process.

Alternatively, you can use a basic service account for the connection and connection pool, and run the SQL 2005 / 2008 "EXECUTE AS" command before any other DML statement.  [Image 3] This is called user context switching and can be done automatically using a specialized command object.  The only downside is that because SqlCommand is a sealed class, we have to use composition rather than inheritance.  This may also force us to create a compatible SqlDataAdapter, but when all is said and done you have a system that is both scalable and robust.  These changes are also likely to be compatible with SQL Server 2008's CDC (Change Data Capture) technology, which can automatically log changes to a table.

-- TSQL TO CREATE A USER WITHOUT A LOGIN
-- AND USE USER CONTEXT SWITCHING
CREATE DATABASE [TestDB]
GO
USE [TestDB]
GO

-- Create the Service User
CREATE LOGIN [ServiceLogin] WITH PASSWORD = 'Uor80$23b91';
CREATE USER [ServiceLogin] FOR LOGIN [ServiceLogin]
GO

-- If we ran this before then we need to drop this user
DROP USER [nzaugg]
GO

-- Create a user without a login
CREATE USER [nzaugg] WITHOUT LOGIN
GO

-- Wade said this is backward, so I swapped it for him...although I'm not fully convinced!
GRANT IMPERSONATE ON USER::[nzaugg] TO [ServiceLogin]
GO

-- Switch User Context; Optionally Specify 'NO REVERT'
-- If we run this in Query Editor with 'NO REVERT' the
-- only way to go back to our original login is to reconnect!
EXECUTE AS USER = 'nzaugg' --WITH NO REVERT
GO

-- Verify that we are now user 'nzaugg'
SELECT user_name(), suser_name(), original_login()

-- If we used 'WITH NO REVERT' on our EXECUTE AS statement
-- we won't be able to revert and this will throw an exception
REVERT
GO

-- Are we still 'nzaugg'?
SELECT user_name(), suser_name(), original_login()

-- DROP THE DATABASE
DROP LOGIN [ServiceLogin]
GO
USE [master]
GO
DROP DATABASE [TestDB]
GO


SQL User Context Switching Results

Remember, in order to do this, all of these users must exist in the database.  They must also have rights to perform the operation in the original DML statement.  This is where users without logins come in handy (see the CREATE USER ... WITHOUT LOGIN statement in the script above).  The optional WITH NO REVERT clause is handy for logging and will further secure our database.

Pooled Connections with EXECUTE AS LOGIN 
Image 3

 

EXECUTE AS MSDN Paragraph

SQL Server 2005 Books Online (September 2007)

EXECUTE AS (Transact-SQL)

Sets the execution context of a session.

By default, a session starts when a user logs in and ends when the user logs off. All operations during a session are subject to permission checks against that user. When an EXECUTE AS statement is run, the execution context of the session is switched to the specified login or user name. After the context switch, permissions are checked against the login and user security tokens for that account instead of the person calling the EXECUTE AS statement. In essence, the user or login account is impersonated for the duration of the session or module execution, or the context switch is explicitly reverted. For more information about execution context, see Understanding Execution Context. For more information about context switching, see Understanding Context Switching.

 

