When a theme keeps recurring naturally, I find it makes a good thing to blog about. Recently I was speaking with my brother (who has incessant adware problems) about Internet safety and cyber security. A week or two ago I had a stimulating conversation with my friend Anthony about security, and this morning the Diane Rehm Show had a segment about cyber threats in which one of the guests stated that he works on banking systems and will not engage in online banking.
The first question anyone has is how widespread this problem is and whether it really affects them. The answer is that it is widespread and it affects everyone, whether you own a computer attached to the Internet or not. The scary part is that even the most pragmatic and Internet-savvy users can fall victim. Does this mean you or I should stop using the Internet altogether? Absolutely not. While anyone can fall victim to this kind of threat, there are steps you can take to greatly reduce your risk!
In this age of technology we have almost no limits to our technical abilities. Unfortunately, our attackers have this same ability.
Types of attacks:
- Trojan Horses - A computer program that poses as something useful but allows access to your system from the Internet.
- Adware - A computer program that either tracks your usage and sells that information to marketers or pops ads up on your computer.
- Spyware / Key-loggers - A program that "watches" what you do on your computer. It can record every keystroke and send that information to a scammer.
- Worms - A special kind of program (which usually includes spyware or a Trojan horse) that spreads itself -- usually through email or mapped network drives.
- Proxy - A term for an attack that comes from a computer whose user/operator has no idea it is involved. This is a compromised system on which a remote scammer has installed a Trojan horse. This "bot" can now do anything its owner wishes.
- DOS - Denial of Service attack. This is an attack on a server that renders it unable to complete the task for which it was designed.
- DDOS - Distributed Denial of Service attack. This type of attack usually involves overwhelming a web site to the point that it cannot serve requests from legitimate customers. It usually involves a large number of "bots" controlled by a single party, often through a mechanism called IRC, which is a lot like a chat room.
- Buffer Overflow - This is a special type of attack that targets specific code. Basically, if a scammer can pass a malformed piece of data to a function, they might craft it in such a way that part of the data gets executed. That allows the attacker to run any kind of code on your machine. Depending on the privileges of the process that was compromised (which are usually pretty high), they can take over your computer. Remember, any maliciously-crafted data can cause this, including data sent to you on an unprotected Internet port or data that you requested from a malicious website. Something as simple as an image can contain a buffer overflow attack (and has in the past). This type of attack is not limited to Windows. It is usually attributable to careless programming, but can occasionally stem from a weakness in the compiler or runtime itself.
- Root Kits - This is a special kind of hacking technique that involves exploiting one small vulnerability after another. It typically targets web servers whose upload function is unprotected or which have an unpatched buffer overflow. Once a file is uploaded, it is executed and opens a larger hole. Eventually the attacker can take control of the machine.
- Email Scams - Email is where most of the bad stuff originates. That is because sending mail is cheap and easy, and because it is often easy to harvest or guess an email address. It's far more difficult to get people to visit a malicious website.
Q & A:
- Info: The terms viruses, adware, spyware, and worms can be safely summed up by the term malware.
- Q: Are Macs really more secure than PCs?
- A: Yes and no. Although the Mac has made a comeback the past few years, it is still a very, very small percentage of the computers in the world. Because of this, nearly every virus targets a PC running some version of Windows. However, this does not mean that you are "safer" using a Mac. As Macs become more popular, more viruses will be written to target them, and they may have more success than they do against Windows. Windows has gone a few rounds of cops and robbers where Macs have not. In my opinion, if you are buying a Mac simply because you think you are "more secure," don't bother. A false sense of security is the most detrimental risk of all.
- Q: Who is attacking me and why?
- A: Attackers generally fall into one of two groups: people in it for personal gain, and government-sponsored groups. There has been a very significant and organized amount of hacking coming from China, which suggests that the Chinese government sponsors this type of activity. Much of their effort seems to go into mapping resources around the net.
- Q: I get a lot of email about stocks, what is that about?
- A: This is the old pump & dump scam! The scammers artificially inflate the price of a "penny stock" in which they own a large number of shares. They send this email en masse, telling people to buy lots and lots of the stock. Enough people buy that the price rises; the scammer then sells and lets the stock tank. This kind of scam can be costly both for the business offering the stock and for those foolish enough to actually invest in it.
- Q: Do people really fall for the emails claiming to be from their bank?
- A: No, not really. The problem is that if just one in ten million *does* fall for the scam, it will have been worth it. They can send these phishing emails out at a rate of millions per minute.
- Q: Will the Internet ever become a safe place?
- A: No. Like the game of cops and robbers this will likely play out forever. The programmer in me wants to believe that it is possible to have 100% secure software. The pragmatist in me knows that it may not ever be possible. However, overall I do tend to believe that it will get much better but will probably get much worse before that begins to happen.
- Q: Does looking at porn on the Internet make me more susceptible to malware?
- A: Absolutely! Porn and Malware go hand-in-hand!
- Q: Does downloading "cracked" programs make me more susceptible to malware?
- A: Absolutely! Crack sites, keygen sites, etc. are a Trojan horse delivery mechanism. Why do you think these people crack these apps? They do it to lure you in and take control of your computer.
- Q: Can a virus really take control of my email?
- A: Yes, it can, but usually it doesn't have to. SMTP (the protocol by which mail is sent over the Internet) has absolutely no good way of verifying that you are who you say you are. If your computer is hacked, it's probably your email address book they are after.
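The point about SMTP can be seen with a quick sketch using Python's standard library: the "From" header is ordinary text chosen by whoever builds the message, and nothing in the message format verifies it. The addresses below are made up for illustration.

```python
from email.message import EmailMessage

# SMTP does not authenticate the sender; the From header is just a
# string supplied by whoever composes the message.
msg = EmailMessage()
msg["From"] = "support@yourbank.example"   # claimed sender -- unverified
msg["To"] = "victim@example.com"
msg["Subject"] = "Please confirm your account"
msg.set_content("Click here to verify your details...")

# The message is perfectly well-formed even though the From address is forged.
print(msg["From"])  # support@yourbank.example
```

This is exactly why phishing mail can claim to be "from" your bank: the protocol never checks.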
Tips & Tricks:
- Get a home firewall that uses NAT. You may already have this and not realize it, but computer systems sitting behind NAT are "invisible" to Internet scans, which greatly helps keep your computers safe!
- Let your computer update regularly! Make sure auto-update is turned on and working. If the computer needs to restart to apply a patch make sure that happens ASAP.
- Install a software firewall. They will slow down your computer, I know, but they are a necessity today.
- Let your virus scan run weekly
- Own two computers (especially if you have kids). Use one for Internet banking, Internet purchases, and storing personal information -- nothing else. Use the other for everything else, keeping any kind of personal information off of it.
- Use the least possible permissions you can for your user accounts.
- Our school used a hardware device that restores the computer to exactly the same state every time it is rebooted. This would be a really great tool for your general-use computer. (I'll post a link when I can find one)
- If you are at an Internet shopping site and you get a certificate error, leave now! That certificate error is the ONLY thing protecting you from a man-in-the-middle attack!
- NEVER, NEVER, NEVER, NEVER, NEVER download or open an attachment you are not expecting! Even if it looks like it is from a person you trust! If it is from a person you trust, verify its contents before opening! They may have been sent this wonderful screen saver and wanted to share it with you. That's great and all, but that screen saver is probably a worm! Also, they may not have actually sent it to you; the screen saver may have sent itself!
- Do not download any executable file. That includes files ending with: .exe, .scr, .bat, .pif, .com, .dll, .ocx, .sys. Also watch for the space trick, where the filename is "myfile.zip                .exe". Notice the spaces? You may not see them in Outlook or whatever else you are using.
- Verify any other type of download with its author. Recently, viruses have been able to attach themselves to innocent PDF files! The moral is that there really is no such thing as an innocent file!
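The extension tricks in the tips above can even be flagged automatically. Here is a minimal sketch in Python (the filenames and the extension list are just the ones mentioned above, not an exhaustive blacklist):

```python
import re

# Extensions commonly used to deliver executable payloads (from the tip above).
EXECUTABLE_EXTS = {".exe", ".scr", ".bat", ".pif", ".com", ".dll", ".ocx", ".sys"}

def looks_dangerous(filename: str) -> bool:
    """Flag executable extensions, including the padded-space trick
    where 'myfile.zip                .exe' hides the real extension."""
    # Collapse runs of whitespace so padding can't push the true
    # extension out of view, then check what the name really ends with.
    cleaned = re.sub(r"\s+", " ", filename.strip().lower())
    return any(cleaned.endswith(ext) for ext in EXECUTABLE_EXTS)

print(looks_dangerous("vacation.jpg"))            # False
print(looks_dangerous("screensaver.scr"))         # True
print(looks_dangerous("myfile.zip        .exe"))  # True -- the space trick
```

A mail filter doing only this check would already catch the classic "screen saver" worm attachments.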
Opinion:
- Viruses are not easy to detect! Virus scanners use something called heuristics to find viruses, adware, spyware, and worms.
- Both client and server need a way to be independently authenticated by a trusted 3rd party, and if the trust can not be established then there must be no way to continue.
- We need to phase out passwords! They are way too easy to predict and/or capture!
- We need a way to positively identify (for computing purposes) every user on the Internet. This is the only way we can really develop trust relationships with other systems and the only way to end SPAM/Phishing.
Visit these links for more information:
Be careful out there!
Recently, some Linux developers condemned the practice of shipping drivers without source in an open petition. The petition states that its signers consider "any closed-source Linux kernel module or driver to be harmful and undesirable," and that "We have repeatedly found them to be detrimental to Linux users, businesses and the greater Linux ecosystem."
These are the same kinds of growing pains that Microsoft had in the early days of Windows, especially Windows 95. The Windows OS seemed to be riddled with bugs, but the problem was (and always had been) the device drivers; graphics card drivers were the usual culprit. However, simply stating that closed-source drivers are "harmful and undesirable" is a lot like saying that the only way to prevent accidents is to not allow anyone to leave their homes. Very few device drivers for Windows are open source, and there has been tremendous improvement in stability over the past few years. Microsoft has made enormous efforts to train device driver programmers at other companies and has released an extensive DDK and debugging tools. Most recently, they even made it so some drivers don't have to operate in kernel space but can run in user space. This greatly increases the stability of the operating system by protecting the sensitive kernel. A similar concept could be adopted by the Linux kernel.
Open-source drivers may be a good solution for Linux, but this petition makes it sound like they are the only solution for the longevity of Linux. There is a downside to open access to source code, including uncontrolled versioning, which can cause worse problems.
--Nathan Zaugg
See also:
http://redmondmag.com/news/article.asp?editorialsid=9992
From now on you may address me by my proper title:
I wish the title was a little less blasphemous; therefore you may alternately call me 'King Nerd'. That would also match with my wife who is a "Slightly Dorky Nerd Queen".
I also scored quite high on the version 1.0 test:
Remember, that is the 92nd percentile!
--Nathan Zaugg
Those who know me will tell you that Microsoft might as well just put me on payroll because I talk about their new products all day long. I have even characterized myself as an "Unofficial, Unpaid, Microsoft Solutions Evangelist". Having said that, I know what's good and bad about the products I love so much. I also know when someone else has something worth taking a look at.
Google Defined
Google: a very successful Internet advertising company that seeks to find, buy, and evolve technology that has promise, and then figures out a way to turn a profit on it.
And they are very good at what they do. When Google purchased Keyhole (now Google Earth), everyone started scratching their heads and wondering why! I may not understand exactly how, but I am almost positive that they have turned a profit on it.
Well, here is another interesting gem. Google SketchUp is an amateur 3D modeling tool. It's easy to pick up and has some pretty cool features. I created the sketch below of my back yard (or at least how I want it to be). There is tons of detail, from the landscape brick in the back to the translucent windows. I did the model below in less than an hour, and when I started I couldn't even figure out how to add dimension to my square shape or how to move it around.
Cool Features
Some of the cool features are textures and the 3D model community. The textures make these simple sketches look very life-like. It has most of the common building materials and many basic outdoor shrubs and plant life. You can also capture an image from Google Earth and transpose it onto your work area. You can even take a picture of something and mold it onto your objects.
I added my model to the Google model search, and now anyone can use my shop!
The Not So Cool
It would be nice if there were more keyboard involvement. I find myself switching between tools a lot, and it's cumbersome. While you can get results pretty quickly, the interface is difficult to maneuver even after using it for a while. Simple tasks like changing the size of a rectangle are complicated. You also have no way (or at least it seems that way to me for now) to make things exact. I'd like to be able to enter the size of a rectangle and then enter its coordinates. And although it's pretty fast for 3D modeling, it does make you wait quite a while for some tasks, and I think it could be a whole lot faster! It seems to be written in Ruby, and my guess is that it does not take advantage of hardware acceleration; if it were written in a language like C++ or C#, it would be much faster! Also, I hate the name! Couldn't they have named it Google 3D, G-3D, or even GSketch?
All said, for a free piece of software it does help a lot in convincing my wife that a 30-35 shop will not look too big for our yard!
http://sketchup.google.com/
--Nathan Zaugg
The term "circular dependency" may be foreign to some programmers (especially if you do Java, where it is a pretty common practice). However, anyone who has done scripting for a referential database knows that you have to run scripts in a certain order; running scripts out of order causes errors. The interesting trick is that if you run the same incorrectly-ordered DDL script again and again, you will eventually get it to run without errors. If you were unaware that the order was incorrect and thought to yourself in that moment, "Stupid database!" then this blog post is for you!
What is a circular dependency?
It is simply two libraries that use each other (either directly or indirectly) as shown below:
Figure 1: Circular Dependency
Figure 2: Complex Circular Dependency
The complexity of a circular dependency may vary. If you are using Visual Studio and have all of your projects loaded into a single solution AND you add Project References (Right click on project -> Add Reference -> Projects Tab -> {Project Name}) then the IDE will not allow you to create Circular Dependencies. In fact, this is a good practice as Visual Studio will ensure the correct build order.
Why are circular dependencies bad?
Just like the database example above, a circular dependency makes it so you cannot guarantee that your application has the latest code. That is a big deal! Here is why:
- I make changes to Application 1 (in Figure 2)
- I build my project. The changes I made in Application 1 may or may not have made it into Application 2 (depending on build order); the build may have taken a copy of the compiled code left over from the last time I built.
- Application 2 depends on this new functionality to provide services to Application 3; that functionality will not work correctly with this build.
- Application 3 may or may not depend on these same services to provide back to Application 1.
As you can see in this scenario, there is no such thing as a "correct" build order when there are circular dependencies. The only way to arrive at the correct version of the code is to build it as many times as there are nodes in the circle. That would mean building twice for Figure 1 and three times for Figure 2. Some of these dependencies can get really ugly! Here is some actual code, running at an actual company, that I analyzed some time ago using a tool called Structure 101.
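The "no correct build order" argument can be made concrete: a topological sort of the dependency graph succeeds exactly when there are no cycles. A small sketch in Python (the project names are made up; the circular graph mirrors Figure 2):

```python
from graphlib import TopologicalSorter, CycleError  # stdlib, Python 3.9+

# Edges read: project -> the projects it depends on.
acyclic = {"App": {"Common"}, "Services": {"Common"}, "Common": set()}
circular = {"App1": {"App3"}, "App2": {"App1"}, "App3": {"App2"}}

def build_order(deps):
    """Return a valid build order, or None when a cycle makes one impossible."""
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError:
        return None  # no order builds everything from fresh code

print(build_order(acyclic))   # e.g. ['Common', 'App', 'Services']
print(build_order(circular))  # None -- every order leaves something stale
```

This is essentially what Visual Studio does with project references, which is why it refuses to let you create the cycle in the first place.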
How do I fix circular dependencies?
There are some steps to take to solve even the most complex tangles! They all involve refactoring your code though.
- Refactor common code into a "base" dependency; I usually call this "Common" (figure 3). BEST SOLUTION
- Remove code that is unused. In the tangles shown above many of them are using deprecated/unused code.
- Duplicate the sections of code used. This should be seen as a last resort, but given the choice between code duplication and circular dependencies, I take code duplication every time!
Figure 3: Refactoring to a Common dependency
Summary
There are two kinds of design concepts for nTier (and other types of architectures as well): Logical Layout Design and Physical Layout Design. The Logical Layout simply means your tiers occupy the same project/package but leverage different classes. In contrast, Physical Layout Design forces each tier into a separate project/package. So long as we are careful to manage the dependencies between these packages from the start, this is the preferable way to code. While the logical layout does not suffer from the dependency problem, you may eventually wish to break these classes apart and find there are a lot of inter-dependencies that should not exist, simply because the classes occupied the same project. Remember to keep it clean!
Microsoft Silverlight 2.0 Beta 2 has been released. This version is supposed to be pretty stable as I understand it. The tools on the other hand, still feel very much like a beta! I had a heck of a time getting this junk installed!
The link to download Silverlight 2.0 Beta 2 for Visual Studio is here: http://silverlight.net/GetStarted/
(note: This installation package includes all you need including Silverlight runtime and SDK)
My Installation Issues
I got to know these dialogs pretty well, as I'd tried this many times!
It makes you close all browsers! Ok, sure!
DANG IT! What now?
So you're telling me that I can't have a patch?
Here is the output of the error:
Created new ExePerformer for Exe item
[6/9/2008, 13:48:11] Action: Performing Action on Exe at c:\6472026481f78458c12aa62cd2\silverlight_uninstallrtmpatches.exe...
[6/9/2008, 13:48:11] (IronSpigot::ExeInstallerBase::Launch) Launching CreateProcess with command line = c:\6472026481f78458c12aa62cd2\silverlight_uninstallrtmpatches.exe /q /uninstall
[6/9/2008, 13:49:11] (IronSpigot::ExeInstallerBase::PerformAction) c:\6472026481f78458c12aa62cd2\silverlight_uninstallrtmpatches.exe - Exe installer does not provide a log file name
[6/9/2008, 13:49:11] (IronSpigot::ExeInstallerBase::PerformAction) Exe (c:\6472026481f78458c12aa62cd2\silverlight_uninstallrtmpatches.exe) failed with 0x80070643 - Fatal error during installation. :
[6/9/2008, 13:49:11] (IronSpigot::ExeInstallerBase::PerformAction) PerformOperation on exe returned exit code 1603 (translates to HRESULT = 0x80070643)
[6/9/2008, 13:49:11] Action complete
[6/9/2008, 13:49:11] (IronSpigot::LogUtils::LogFinalResult) Final Result: Installation failed with error code: (0x80070643), Fatal error during installation.
So I hit the net and found these instructions: http://weblogs.asp.net/bradleyb/archive/2008/03/06/installation-tips-for-sivliverlight-tools-beta-1-for-visual-studio-2008.aspx
try to uninstall the KB:
(found under updates)
Kept asking me for some location! Wasn't going to work....what is the backup solution?
Ok, extract the files in the installer...sure.
I put them in a dir called "Install"
Then RUN: msiexec /uninstall VS90-KB949325.msp /L*vx VS90-KB949325-2.log
Double Dang! Ok, what's up??
Hit the web again: http://silverlight.net/forums/p/17663/58925.aspx
and download MSI ZAP: http://support.microsoft.com/kb/290301
Triple Dang! Updates are not listed!
Downloaded tool: MSIINV
Download Link:
msiinv.zip
RUN:
msiinv.exe -p > Installed.txt
No luck! Still not giving me the GUID for KB949325!
RUN:
msiinv.exe -v > Installed.txt
Took SEVERAL minutes to run and created a 10MB text file! When it was done I tried to run msizap, but it wouldn't go. I'm just going to install these packages separately now that I have them extracted from silverlight_chainer.exe.
RUN THE FOLLOWING:
- Silverlight.2.0_Developer.exe
- msiexec /p VS90-KB949325.msp /L*vx VS90-KB949325.log REINSTALL=ALL
  (gave me an error -- ok, I'm skipping it!)
- silverlight_sdk.msi
- VS_SilverlightTools_Beta2_Setup.exe
Sat here for quite a while!
devenv.exe seems active though.
Install Completed!
This is also a good resource:
http://www.microsoft.com/downloads/details.aspx?familyid=50A9EC01-267B-4521-B7D7-C0DBA8866434&displaylang=en
This seems to only be a problem if you were an early adopter of the Visual Studio 2008 SP1 Beta 1 fix.
A few weeks ago I gave a presentation on Silverlight at the Utah Code Camp. I was really impressed by a presentation I saw last time on Ruby, and everyone really liked the cheat sheet that was provided. For my presentation this year I created a XAML Cheat Sheet. For those who are learning XAML it is a pretty good resource, but it's most helpful for me when I know how to do something but can't remember the syntax. This is a work in progress, so keep checking back. It can be downloaded in the Media section of this site or by clicking here.
Please drop me a line if you think you have something useful to add or you want to thank me for the hard work it took to put this thing together!
Nathan Zaugg
As promised by Microsoft when Visual Studio 2008 launched late last year, there is a service pack available for both the .NET Framework 3.5 and Visual Studio 2008. Information about the release can be found on ScottGu's Blog; it mostly includes bug fixes and performance enhancements, but the points of interest for me are:
- ASP.NET Routing Engine, which gives you the ability to map URLs to route handlers. For example, the URL http://www.mysite.com/myapp/data/234/editComment can be dispatched to a handler rather than a physical page at that path.
- ASP.NET AJAX Back/Forward Button History Support gives you the ability to control the forward & back button clicks on the browser. This will be very useful for "single page" ASP.NET AJAX implementations.
- Performance improvements on the web editor in VS 2008.
- JavaScript Formatting Settings
- CLR performance improvements, including startup times up to 40% faster and ASP.NET requests up to 10% faster.
- WPF New Features and Performance Enhancements!
- Performance enhancements using GPU
- New "WriteableBitmap" which allows for tear-free bitmap updates.
- ListBox, ListView, and TreeView now support "item container recycling" and virtualization which results in better performance. This will have a huge effect on large amounts of data.
- Deferred Scrolling, which doesn't render until mouse-up on a scroll event. This can have an enormous effect on huge data sets.
- StringFormat support within binding expressions
- New Alternating Rows support for controls derived from ItemsControl
- Events tab support within the property browser in VS 2008
- Go to Definition and Find All References now support things declared in XAML
- SQL Server 2008 Support
- The long awaited ADO.NET Entity Framework which includes integration with any database
- Improvements in WCF, including scalability, ADO.NET Entities in service contracts, and improved debugging support for WCF.
- Improvements to C#: the C# code editor now identifies and displays red-squiggle errors for many semantic code issues that previously required an explicit compilation to identify. The debugger in VS 2008 SP1 has also been improved to provide more support for evaluating LINQ expressions and viewing results at debug time.
- Fixes to TFS
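The routing-engine item in the list above is easiest to see with a tiny sketch. This is not the ASP.NET API, just an illustrative Python stand-in: a route pattern with {placeholders} is compiled to a regex, and matching a request URL like the example above yields the parameters a handler would receive.

```python
import re

def compile_route(pattern):
    """Turn '/myapp/data/{id}/{action}' into a regex with named groups."""
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern)
    return re.compile("^" + regex + "$")

route = compile_route("/myapp/data/{id}/{action}")
match = route.match("/myapp/data/234/editComment")
print(match.groupdict())  # {'id': '234', 'action': 'editComment'}
```

The real routing engine adds handler registration, constraints, and URL generation on top of this same pattern-to-parameters idea.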
Installation Cautions
- If you are running this on Vista, be sure Vista SP1 is installed!
- If you have installed the VS 2008 Tools for Silverlight 2 Beta1 package on your machine, you must uninstall it - as well as uninstall the KB949325 update for VS 2008 - before installing VS 2008 SP1 Beta
- If you are running anything earlier than Expression Blend 2.5, then you need to update it to the latest. Earlier versions will cease to run.
- This is still beta software -- Install at your own risk!
A direct download link can also be found here: http://msdn.microsoft.com/en-us/vstudio/products/cc533447.aspx
--Nathan Zaugg
Code Camp snuck up and bit me this year! I have been so busy with the new cuegle.net/cuegle.com site that I didn't get the preparation time I wanted. Although turnout was mixed (a little less than we had last time, but still pretty good), the sessions were strong this year.
The Good:
- I got the large room again this time and a lot of people attended my presentation
- There were a lot of good presentations on lots of different technologies
- The speakers used up all of the time this year
- The speakers were better this year
- The topics were all very interesting!
- The emphasis was on the code, not the power-point!
- I think it got people energized about the new technologies
- We had wireless and even if slow, it worked!
- People didn't get locked out of the building this year.
- My friend Phil won a prize (I drew his name)
- THERE WAS ENOUGH PIZZA THIS YEAR! YAY!
The Bad:
- One of the presentations was supposed to be about how cool it is to have Iron Ruby-- instead we got a bunch of propaganda from a Ruby fan-boy about how we only use Microsoft technologies because "we fear anything else -- especially something that has power".
- I wasn't as prepared this time around. I was doing Silverlight and didn't get my sample application working until the exact moment I was to set up for my presentation. As a result I wasn't really mentally prepared and didn't present it well.
- I didn't get to pay as much attention to the other presentations as I wanted or get to go to the ones I wanted because I was working on mine!
- I didn't have nearly enough handouts (sorry, my printer is out of toner!)
- I have some bugs in my XAML Cheat Sheet that I just distributed to 100+ people!
- I've moved to Kaysville now and I'm quite a ways from Neumont University.
- A lot of my "support" wasn't able to attend. I know it sounds crazy but when I give a presentation be it Code Camp or a Users Group I like to have 2-3 of my friends helping me get going. It allows me to focus my energy on mentally preparing.
- I drew myself in the drawing which disqualified me! DANG!
The Future
- There is a lot of feedback and a lot of talk about holding this during work hours. This is probably the #1 request among developers. It's a double-edged sword, and we're very mixed about it; it is against the Code Camp manifesto that we loosely follow. Still, I wouldn't be surprised if we started doing it on a weekday.
- I still think we need to have more than 3 rooms going at a time and that some presentations should be given twice.
- It would be cool to involve people in another way, too. Have a code competition or a Linux Installation room, etc. Add some dimension to the conference.
- Let's go commercial! Let some big-name sponsors in and give everyone free stuff! Why not a free PDA for those who attend?
Special thanks goes out to Phil Gilmore, who helped me immensely in preparing for my presentation. He also took the photos posted below.
Myself presenting Silverlight 2.0 Beta 1
Discussing the difference between Silverlight 1.0 and 2.0
The audience looks enthralled!
There I am drawing my own dang name!
The grand prize winner (Rock band)
Winner-Winner Chicken-Dinner!
Double score!
--Nathan Zaugg
Photos courtesy of Phil Gilmore.
While I am a big fan of unit testing I often try to point out that Code Coverage tells us little more than “are there unit tests” not “are we unit testing”. The former indicates that the code is indeed being executed. The latter indicates that the code is being exercised.
Scott Hanselman describes this in a podcast available here:
http://www.hanselman.com/blog/HanselminutesPodcast103QuetzalBradleyOnTestingAfterUnitTestsAndTheMythOfCodeCoverage.aspx
I personally think testing really needs to evolve to the level of our architecture and code. Testing frameworks are clumsy, and it is very difficult to test all of our code. Most people are quick to point out that unit testing UI is difficult and usually yields the least benefit. Depending on the architecture and implementation of a specific application, that may well be true.
Two other major hurdles are well known. First, when we unit test we want as "authentic" a test as we can get. This includes the data. How much time and effort do we spend getting authentic data to our unit tests, or providing an abstracted data layer for our code to operate on? This is a big problem, not just because of the hours it takes to provide such data-abstraction implementations, but because the time it takes to run unit tests is greatly increased. I argue this because I believe that when unit testing takes 30 minutes to complete, developers are not going to run the tests before checking in, and that, I believe, is an issue.
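The data-layer abstraction mentioned above is worth a concrete sketch. The idea: production code depends on a repository interface, and tests supply an in-memory fake, so the suite never touches a database and stays fast. The class and rule below are invented for illustration, in Python rather than .NET since the pattern is language-agnostic.

```python
# Production code would implement this against a real database.
class OrderRepository:
    def get_total(self, customer_id):
        raise NotImplementedError

# Test double: same interface, but backed by a plain dict in memory.
class FakeOrderRepository(OrderRepository):
    def __init__(self, orders):
        self.orders = orders  # {customer_id: [order amounts]}

    def get_total(self, customer_id):
        return sum(self.orders.get(customer_id, []))

def discount_rate(repo, customer_id):
    """Business rule under test: 10% off once a customer tops $1000."""
    return 0.10 if repo.get_total(customer_id) >= 1000 else 0.0

repo = FakeOrderRepository({"alice": [400, 700], "bob": [50]})
print(discount_rate(repo, "alice"))  # 0.1
print(discount_rate(repo, "bob"))    # 0.0
```

The trade-off the post describes is real: writing and maintaining the fake costs hours up front, but it keeps each test in microseconds instead of database round-trips.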
The other major unit testing hurdle is the concept of the "Oracle," as described by Scott. This term simply refers to the ability of a piece of code to assert that a test has indeed run in such a way that an end user is going to get what they expect. Some of these scenarios can be very complex, and it is impossible to test all permutations of a given rule.
What do I think is needed in Unit Testing?
- The disparity between code and "UI" needs to be removed! It needs to be no harder to test a piece of UI than an API class library.
- Unit testing “language” needs to be more declarative and functional. I need to be able to test a wide variety of test scenarios quickly and easily. Functional languages are good at this. I do not suggest that we adopt Python for unit testing but maybe a testing language developed for this purpose.
- There needs to be a healthy attitude toward testing in the industry. Every day, at every place I go, I watch people "fix" code in production. If they had better testing before the code went to production, they could have easily caught these errors! A production line is down for an entire day, but if you ask, no one can afford the time to test.
- There needs to be more automation in unit testing. For example, couldn't the computer automatically generate tests for the edge cases? Couldn't it generate input that would "execute" every branch of code possible? It may be of marginal usefulness, but it would be free.
- Unit testing needs to run faster! I have long mentioned the need to use multiple threads to execute tests -- I even modified a release of NUnit to run on the ThreadPool -- but these changes need to be adopted and standardized.
- There needs to be a better way to test with real data.
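The thread-pool idea in the list above is simple to demonstrate. This is a Python sketch, not the NUnit modification the post mentions: independent tests are dispatched to a worker pool so a suite of slow tests finishes in a fraction of its serial wall-clock time. The test names and sleep duration are stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def make_test(name, duration):
    """Build a fake test case; the sleep stands in for real test work."""
    def test():
        time.sleep(duration)
        return (name, "pass")
    return test

tests = [make_test(f"test_{i}", 0.1) for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    # Each test runs on a pool thread; map preserves result order.
    results = list(pool.map(lambda t: t(), tests))
elapsed = time.perf_counter() - start

# Serially these 8 tests would take ~0.8s; with 4 workers, ~0.2s.
print(len(results), "tests in", round(elapsed, 2), "seconds")
```

The catch, of course, is that this only works for tests with no shared mutable state, which is one more argument for the isolated data layers discussed earlier.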
Of course, even the best set of tests will not eliminate all bugs. In fact, you must adopt your own testing philosophy when in charge of a development effort. The question you have to ask yourself is two-fold. First, how important is it that my code has very few bugs? Would a bug in your software, if unchecked, cost your company a lot of money? Could a potential bug cost the company thousands, millions, billions, or even someone's life? If so, then your testing should reflect that. It's good risk management to spend 2x or more the cost of software development to avoid that type of costly loss. On the other hand, if a bug in the software has little consequence other than user annoyance, then perhaps fewer of the development dollars need to go toward testing. In some scenarios it is completely fathomable that none of the software budget is spent on automated testing.
The other question you will need to answer is: will adding automated testing to my project yield a return on investment? Before you make a snap judgement on this second point, I would add that systems with good testing in place (automated testing in particular) have a much cheaper lifecycle than applications without this type of safeguard. Automated testing is a lot like life insurance in this way. Having the critical pieces of your code unit tested is almost guaranteed to save your bacon!
I have been meaning to do this for years! I actually purchased this domain back in 2004, lost it for a few years, and recently purchased it again. Some of my friends and I plan to support an online community where we can answer questions and post content that is useful to the community.
I have been doing Code Camp for over a year now (that may not seem like a lot, but this will be my 3rd time presenting there). Code Camp is awesome! It's a full day of free training in a variety of topics that most developers find interesting.
Topics Include:
- Silverlight 2.0
- Building websites with the ASP.NET MVC framework
- C# 3.0
- Writing Manageable Services in .NET
- Database Performance tuning Part 1
- IronRuby + C# = Awesomeness
- Game Development in XNA for the Xbox360
- Introduction to Mac OS X programming using Cocoa and Objective C
- Developing For Windows Vista
- T-Sql Querying
- Why C# Namespaces are not the same as Java Packages
- An Introduction to CakePHP
You can visit http://www.utcodecamp.com for more information (yes, I know I put the Microsoft technologies first -- I'm an unpaid, unofficial, self-proclaimed Microsoft Technologies Evangelist!). Whether you are just starting out in your career or have been doing this stuff for years, you will get something out of Code Camp! It's not one of those Microsoft "for developers" class sessions that are just a giant PowerPoint with no real substance! In fact, that is part of the Code Camp Manifesto (http://blogs.msdn.com/trobbins/archive/2004/12/12/280181.aspx)
Information:
Date: April 26, 2008 (This Saturday)
Time: 9:00AM - 5:00PM
Please register here.
I will be doing the Silverlight 2.0 presentation so please come!!
Thanks!
Nathan Zaugg
I was creating a payment form recently and I wanted to disable the submit button after the user had clicked it so that there was no chance of them clicking it twice by accident.
This seems like one of those things that just ought to be a slam dunk! Back when I was doing ASP, this was one of the easiest things I ever did. I have done it in ASP.NET before, but I never needed validation to work as well. It took a long time, many searches, and much trial and error, but I found a solution that disables the button and doesn't get in the way of validation.
protected override void OnInit(EventArgs e)
{
    // We need these to get our button event
    Form.SubmitDisabledControls = true;
    btnProcessPayment.UseSubmitBehavior = false;

    // Attach javascript code to the OnClientClick event
    btnProcessPayment.OnClientClick =
        "javascript:if ( Page_ClientValidate() ) " +
        "{this.disabled=true; this.value='Please Wait...';}";

    base.OnInit(e);
}
The javascript is pretty straight-forward. We simply call the validation function ourselves; if validation succeeds, we disable the button and change its text. What does the rest of the code do then? Well, normally when a control is disabled it doesn't post data on postback. Fortunately, ASP.NET makes it easy to override that default behavior. Without this data our button click event would never fire!
You can do this in the HTML as well.

<form id="form1" submitdisabledcontrols="true" runat="server">
    <asp:Button id="btnProcessPayment" runat="server"
        text="Process Payment"
        onclick="btnProcessPayment_Click"
        onclientclick="javascript:if ( Page_ClientValidate() ) {this.disabled=true; this.value='Please Wait...';}"
        usesubmitbehavior="false" />
</form>
I hope this is useful to someone! If you end up using it on one of your sites, please leave a comment!
I've come across this several times: you build a custom control inside your App_Code folder and you can't reference it from your project, because you can't figure out what assembly you're supposed to reference (your web code doesn't normally have a set assembly name). I finally figured out how to do this recently.
Normally you're tempted to put something like this:

<%@ Register Assembly="MyWebComponent" TagPrefix="as" %>

When really you need to put something like this:

<%@ Register Namespace="NSM.WebControls" TagPrefix="as" %>

The only trick is that you need to make sure you put your custom class inside a set namespace. Normally a class inside a web project doesn't belong to a specific namespace.
Now we can reference our control:
<as:MyWebComponent ID="MyControl1" runat="server" />
You know you've done something wrong when it takes 30+ seconds to run a full-text query. The most annoying part of this bug is the fact that it is something very small and inconsequential that "triggers" it. It's a lot like a murder investigation where the killer turns out to be a nun.
Here is the setup:
- I have a full text index on a text field in my database. This table happens to be our Phrase table (for data localization)
- I do a very simple full text search on that field in a SQL Server proc
- I want to test my proc so I add some fixed input
- Setting a parameter's value after the fact causes my full-text search to be *very* slow!!!
Here is the code
ALTER PROCEDURE [dbo].[AdvancedTrackSearch_TEST]
(
     @TrackTitleCrit xml
    ,@TrackDescCrit xml
    ,@RecordLabelCrit xml
    ,@CategoryCrit xml
    ,@ComposersCrit xml
    ,@TrackDuration int
    ,@TrackDurationOperator int
    ,@LangID int
    ,@PageSize int
    ,@PageNumber int
    ,@UserID int
)
AS
BEGIN
    -- Temporary input
    DECLARE @TrackTitleStr nvarchar(2000)
    SET @TrackTitleStr = '"booty poppin"'

    -- ***********************************
    -- * THIS LINE CAUSES THE PROBLEMS   *
    -- * BY SIMPLY REMOVING THIS LINE    *
    -- * THE QUERY TIME WILL GO FROM 30+ *
    -- * SECONDS TO LESS THAN 1 SECOND!  *
    -- ***********************************
    SET @LangID = 66

    DECLARE @TitlePhrases TABLE
    (
        PhraseID int,
        DictionaryID int
    )

    INSERT INTO @TitlePhrases
    SELECT PhraseID, DictionaryID
      FROM [dbo].[Phrase]
     WHERE --PhraseID NOT IN (SELECT PhraseID FROM @TitlePhrases)
           CONTAINS([TEXT], @TrackTitleStr)  --[Text] like @Phrase
       AND LanguageID = @LangID

    -- Check the output
    SELECT * FROM @TitlePhrases
END

-- ******************************
-- NOW WE TRY TO EXECUTE THE PROC
-- ******************************
DECLARE @return_value int

EXEC @return_value = [dbo].[AdvancedTrackSearch_TEST]
    @TrackTitleCrit = NULL,
    @TrackDescCrit = NULL,
    @RecordLabelCrit = NULL,
    @CategoryCrit = NULL,
    @ComposersCrit = NULL,
    @TrackDuration = NULL,
    @TrackDurationOperator = NULL,
    @LangID = NULL,
    @PageSize = NULL,
    @PageNumber = NULL,
    @UserID = NULL
GO
It seems so simple and stupid, but setting @LangID (even though we pass NULL into the actual query) causes the full-text search to take substantially longer.
I hope someone finds an explanation!
--Nathan Zaugg
UPDATE:
As a matter of fact, my friend Phil Gilmore and I stumbled on the answer. The trick is to "SET ARITHABORT ON" as one of the first things you do in the query. This setting is usually linked to arithmetic exceptions and overflows, but for some reason without it there is little chance your query will perform. If you look at the difference between the execution plans before setting that option vs. after, you can see that the execution plan changes a lot! After ARITHABORT is ON the execution plans are again identical. Check out my post on the MSDN forums.
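For reference, the fix is a one-line addition at the top of the procedure body. This is only a sketch against the procedure shown above; everything except the SET line is unchanged:

```sql
ALTER PROCEDURE [dbo].[AdvancedTrackSearch_TEST]
    -- ...same parameter list as above...
AS
BEGIN
    -- Make the session setting match what the optimizer expects;
    -- without this the full-text query can get a far slower plan
    SET ARITHABORT ON

    -- ...rest of the procedure body unchanged...
END
```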