Friday, April 03, 2009
All I wanted to do was take a known HTTP host and combine it with a relative URL. Neither of our main web developers knew the answer, and there was no common library to handle it. The best suggestion I got was to use Path.Combine and then replace the \ with a /.
I was sure that was a horrible hack and if I ever did that the little Travis in my head would scoff at me and make me feel bad.
So after some more digging I found the answer in System.Web. The VirtualPathUtility.Combine() method does exactly what Path.Combine does, except for non-local paths.
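For illustration only, here is the same base-plus-relative idea sketched in Python with urllib.parse.urljoin (the actual fix in this post is .NET's VirtualPathUtility.Combine; the URLs below are made up):

```python
from urllib.parse import urljoin

# A naive Path.Combine-style join doesn't understand URL semantics;
# urljoin does. The base URL here is a hypothetical example.
base = "http://example.com/app/"
print(urljoin(base, "reports/daily.aspx"))  # http://example.com/app/reports/daily.aspx
print(urljoin(base, "/login.aspx"))         # a rooted path replaces the whole path
```

Note how a path starting with "/" is resolved against the host, not appended, which is exactly the kind of URL-aware behavior a string replace on backslashes can't give you.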
Some days avoiding Travis is easier than others, today was a good day.
Friday, February 15, 2008
But on what? On wine, on poetry or on virtue, as it suits you. But get drunk.
And if sometimes, on the steps of a palace, on the green grass of a ditch, in the lonely gloom of your room, you wake up, the drunkenness already abated or completely gone, ask the wind, the wave, the star, the bird, the clock, everything that flies or groans or rolls or sings or speaks, ask everything what time it is; and the wind, the wave, the star, the bird, the clock will answer: 'Time to get drunk. In order not to be the martyred slaves of Time, get drunk. Get drunk ceaselessly. On wine, on poetry, or on virtue, as it suits you.'
Friday, February 01, 2008
The Project Manager refused to commit anything to writing. No documents and no emails. If you emailed him a question, the response was “coming right down”; if you cornered him in a meeting, he wanted to talk 1:1 afterwards.
The problems started out small: the PM would sit next to a dev and have them make UI changes. He then moved on to demanding larger and larger feature changes and getting upset at the large estimates.
Eventually the situation devolved into pure madness, with the PM blaming the developers for the fact that the product didn’t match what he had been promising management. We finally reached our limit and started brainstorming ways to put an end to the PM’s refusal to commit to requirements.
I proposed a solution that I later named Viral Requirements. The idea is actually quite simple, but the effect it had was amazing.
The first step was to set up a MoinMoin wiki and create a new WikiPage for each section of our application. Each WikiPage contained a table with the following columns:
Requirement
This was a single, testable requirement. The rule was that each requirement had to be specific enough that it could be tested and could only have one possible valid result.
State
This was the current state the requirement was in. There were only three real states: Approved, Needs Approval, and DEV. The DEV state was used when the PM was too busy to look at the requirements and approve them. The idea was that the dev would give the PM 2 days to approve a requirement before going ahead and implementing it.
Approved
The PM was required to put his initials and a date into this field when he approved the requirement.
So the workflow looked like this:
- Dev writes requirements in Wiki
- Dev emails PM with link to pages that need approval
- Dev waits for PM to approve (or after 2 days marks the State as DEV)
- Dev checks in code and submits to QA
- QA tests against requirements
- QA refuses to signoff on the change until all requirements are approved.
That last line is very important. Because of the DEV state, development and testing could continue, but I could refuse to release the product until the PM signed off on all the requirements. This prevented the PM from blaming the team for any delays and forced him to digitally sign his name next to each requirement.
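The two-day DEV rule above can be sketched as a tiny state helper. This is a hypothetical Python illustration of the workflow logic, not anything we actually built into the wiki:

```python
from datetime import date, timedelta

APPROVAL_WINDOW = timedelta(days=2)  # the 2-day grace period from the post

def effective_state(state: str, submitted: date, today: date) -> str:
    """Return the state a dev should treat a requirement as being in."""
    if state == "Approved":
        return "Approved"
    # Still unapproved: after 2 days the dev proceeds and marks the item DEV.
    if today - submitted >= APPROVAL_WINDOW:
        return "DEV"
    return "Needs Approval"

print(effective_state("Needs Approval", date(2008, 2, 1), date(2008, 2, 2)))  # Needs Approval
print(effective_state("Needs Approval", date(2008, 2, 1), date(2008, 2, 4)))  # DEV
```

The key design point is that DEV is never a terminal state: work continues, but release is blocked until every requirement reaches Approved.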
Within a month, all the blame games had stopped and things were moving much more smoothly. Eventually the PM just rubber-stamped everything anyway, or had the Business Analyst review the items for him.
A nice side effect was that once word got out about what we had to do to handle requirements, it brought a lot of scrutiny onto the PM and how he was running things. When that company was bought out, he was let go.
He was a really nice guy, but he had obviously checked out a long time ago.
Wednesday, January 09, 2008
I had an interesting experience back in December. One of the Product Managers was responsible for taking an out-of-date spec and updating it to reflect the current product. She sent out a SharePoint link to the Functional Spec she wanted reviewed.
There were tons of errors, and it looked like she had only made some cursory changes. I decided to give up reviewing the spec and sent her a quick email saying that her spec was still very out of date and that I had only gotten halfway through the document.
Here is the email thread after my first email:
I’m not sure you read the right spec. I’ve now spent all day correcting issues in yours, only to realize that the version in TFS that I put up on Friday afternoon had more stuff in it than the one you commented on.
I double-checked that I am using the correct spec. SharePoint has a history option for documents, and you can see what changes were made, by whom, and when they were saved.
The link you sent (also in this thread) is the one I commented on. It was added to SharePoint by you on Friday, 12/14/2007 at 3:26 PM.
I downloaded the file again (this time to a new location) and manually walked through every single comment I made to make sure the text I commented on was the same in the linked document you provided. Everything is the same.
I am not sure what document you are working on, but it is not in SharePoint at the link you sent out. Maybe it is a local copy?
Please come see me if you have any other questions or need me to walk through what I did.
So this looks cut and dried, right? I clearly documented the version of the document I was looking at, and all my ducks were in a row.
The conversation continued, in email and then in person. She was positive I was looking at the wrong version and I was positive that I wasn’t. When I went up to her desk to walk her through what I did, I ran into a problem.
The SharePoint history for that document no longer matched what I had seen only 10 mins before. The 12/14 version I had referenced now said 12/13. This confusion only reinforced the PM’s belief that I somehow downloaded the wrong version of the spec.
I was baffled. I knew what I had seen; I had double-checked. What could have happened? I went back to my machine and, luckily, I still had the SharePoint window open and could clearly see that I was not crazy. In the following screenshot you can see that the same document (notice the GUID in the URL) had mysteriously changed date and timestamp.
It took me about 30 minutes to Google and troubleshoot the issue, but the conclusion we came to was that she had checked out the document, made changes, and never checked it back in. So when she sent out the link, it still pointed at the old version.
This turned out to be a known issue/feature with SharePoint that has tripped a few people up.
This was a good reminder to me to always approach any contentious issue with an open mind. If you respect the person you are talking to, there is likely a good reason they have formed a differing opinion, and you should always be aware that things may have changed, even in the time it takes to walk over to demo something.
Wednesday, January 02, 2008
My new company implemented TFS about 6 months before I took over the QA Department. Unfortunately, I had no say in the setup and the subsequent customizations that occurred. While I am sure everyone had the best of intentions, the end result was a nightmare.
The original plan was to take full advantage of TFS: SharePoint, VSS, Iterations, Tasks, Requirements, and Bugs. They were even going to migrate all existing requirements from Word documents into TFS and link everything up to Tasks and Bugs.
Sounds like a solid plan, with a little hard work and planning, what could go wrong?
Unfortunately this major shift in process was treated like a skunk works project. One QA guy put together a sample installation, got approval and then worked with the Dev team to set something up.
My opinion of the whole adoption phase was that everyone focused on hammering out (on paper) a series of workflows that felt familiar to what they were already using, and then, when time was running out, they tried to cram those custom workflows into TFS.
Here are some of the decisions that were made:
- Heavily customized the CMMI process template.
- All workflows modified
- Added new fields and changed the meaning of existing fields
- Created custom triggers to enforce a very complex 'Assigned To' scheme (Dev, QA, Business)
- Created a single TFS project, using the company name, then created sub projects (using Areas) under that one project.
- Placed all code under a single VSS branch, created one giant build for all code.
- Ignored the recommended SharePoint setup and created a custom hierarchy of projects to store documents in.
- After about 3 months of use they switched the layout around, so now there are ‘historical’ documents in an old area and newer documents in the new area.
Here is the current state:
- Product Managers pulled requirements back out of TFS into new Word documents because TFS was too difficult to use.
- Bugs are tracked and assigned to developers using a spreadsheet with TFS IDs.
- There are functional bugs inside TFS with the custom 'Assigned To' workflow logic that was added.
- All of the default Reports were broken, and no new reports could be created due to the complexity of the changes.
- TFS is used only to store code/documents and file bugs. Requirements and Tasks are no longer tracked in TFS.
- Everyone thinks TFS sucks and is 100% resistant to making changes or trying to fix the issues.
So this is the problem that I have to fight the unpopular fight to fix. When I get some time I will blog about my proposed solution (I do have one that I am working on) as well as track my progress.
Saturday, January 13, 2007
What I will try to do is identify the single most significant reason most automation projects fail, and then offer an alternative. I want to warn you ahead of time that what I am proposing is not the ‘right way’ to implement automation, but I do believe it has a place with ‘young’ QA Departments under pressure to roll out an automation solution.
The Single Most Significant Reason Automation Fails:
Automation is a Development Project.
Successful automation is based on the same principles as successful development projects: Reuse, Extensibility, and Maintenance.
If you skimp on the design and quality of your company’s software development, you end up with crappy software. Once again the business cost of doing a poor job designing software is well documented so I won’t go into it here.
If you know you’re not qualified to design and implement a robust, reusable, extensible and maintainable QA Automation Framework, what can you do when the company refuses to assign development resources to the project and it ends up being all you?
You punt. In American football, the punt occurs when things are looking bad for the team and the decision is made to give up the current drive and kick the ball to the other team. The idea is to buy some time to regroup instead of taking a big risk and ending up in a worse situation.
All Management wants is ‘Automation’. It’s all very clear here on this checklist.
- Product Design
- Product Development
- Ship to QA
As you can see, you are the only thing standing between the company and its profit. Not the best time to try and explain how automation is really a development project.
Since we have established that it is unlikely you will be able to implement Automation the ‘Right Way’, don’t take the big risk; buy some time for your department by using Automation to enhance your manual testing. After ‘Automation’ is checked off the list, you will have the time (and now the experience) to take a stab at a better solution.
Solution – Record and Playback:
As we all well know, ‘Record and Playback’ is the single most vilified method of automation. Its critics say it is too simplistic, it is not reusable, it is not extensible and it is certainly not maintainable. All the scripts generated are useless once the UI Changes! Sounds perfect for our project!
The idea is to reduce the number of hours spent retesting the entire application every time a change is made. We all have been burned when some obscure code change breaks a feature in another part of the application. We can use this method to automate all those mundane tasks and let us focus on the more important areas.
Step 1 - Make Test Cases
In order to do any automation, you have to define the test cases you want to cover. Use whatever tool you want (Excel works fine) and list out all of your test cases. Each test case should meet the following criteria.
- Test cases must be written so that they are atomic and each one has only a single possible success path.
- The positive and negative test cases must be separated.
- The test case must have the same result every time it is manually run.
- Avoid hard tests; the power of this method is in automating the mundane tests. If something is hard to test, it will be even harder to automate.
- Don’t try to identify everything at once. Start out with just 1-2 test cases for every major feature. You want to cover all the major areas, then fill in the gaps later.
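To make the criteria above concrete, here is a sketch of what such a test-case inventory might look like. The IDs, features, and structure are hypothetical; Excel works just as well, since the point is the shape of each case, not the tool:

```python
# Hypothetical test-case list: each case is atomic, has a single success
# path, and gives the same result every time it is run.
test_cases = [
    {"id": "LOGIN-001", "feature": "Login",
     "steps": "Enter valid user/password, click Login",
     "expected": "Landing page is shown", "type": "positive"},
    {"id": "LOGIN-002", "feature": "Login",
     "steps": "Enter valid user, wrong password, click Login",
     "expected": "'Invalid credentials' error is shown", "type": "negative"},
]

# Positive and negative cases are kept separate, per the criteria above.
positive = [t for t in test_cases if t["type"] == "positive"]
negative = [t for t in test_cases if t["type"] == "negative"]
print(len(positive), len(negative))  # 1 1
```

Notice that each case names exactly one expected outcome; a case with two possible valid results should be split into two cases.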
Step 2 - Record Manual Testing
When you begin the manual testing, sit in front of the automation tool, turn on the recorder and record yourself as you execute a single test case. Once you have completed the test case and it passes, then you save the script and move on to the next test.
Step 3 - Playback the Tests
When a new drop of the product makes it to QA (Patch, DevDrop, Daily Build), a QA Engineer sets up the test environment (installs the new version, resets the database, etc.) and starts the automated tests. After the tests start, they move on to their normal manual testing.
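The run-then-review loop in Steps 3 and 4 can be sketched as a minimal harness. This is a hypothetical illustration: each recorded script is stood in for by a callable returning pass/fail, where a real record-and-playback tool would launch its own player:

```python
# Hypothetical playback harness for recorded test scripts.
def run_suite(recorded_scripts):
    """Run every recorded script; return the names that need manual retesting."""
    failures = []
    for name, script in recorded_scripts.items():
        try:
            passed = script()
        except Exception:
            passed = False  # a crashed script counts as a failure
        if not passed:
            failures.append(name)
    return failures

suite = {
    "LOGIN-001": lambda: True,   # stands in for a recorded-and-replayed test
    "LOGIN-002": lambda: False,  # e.g. the UI changed: delete and re-record
}
print(run_suite(suite))  # ['LOGIN-002']
```

Everything in the returned failure list goes back to a human: retest manually, file a defect if it's real, or delete and re-record the script if the test itself is broken.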
Step 4 - Failing Tests
After all the tests are executed, a QA Engineer needs to review the report. Any failing tests should be retested manually by QA. If there is a defect, it is entered; if the test is broken (UI changed, new functionality), then the QA Engineer DELETES the test script and re-records it manually.
Notice I said delete the script. It is important to avoid the temptation to start modifying the scripts. It will always be much faster (and safer) to re-record. Eventually you will get so sick of fixing the same script issues that you will want to add some helper scripts, manage variables, and load values from a config file. Now you are spending all your time building a crappy framework and not testing.
Step 5 - Present Results to Management
At this point, I hope you see the benefit for yourself. You are not spending a significant amount of time on the automation process, so you have more time for testing. The time previously spent retesting the common features is now spent tracking down the more difficult defects.
As far as management is concerned you have done a great job. You added automation, you have X number of test cases and hopefully you are actually getting more QA done on the project than before.