Wednesday 28 April 2010

Web 2.0 & CICS

Just been putting the finishing touches to a demo that I want to present at Impact 2010. I'm presenting a piece about Web 2.0 and how it relates to CICS. One part of the Web 2.0 story is the ability to render data as an Atom feed and expose that feed to external consumers.

I wanted to show how easy it was to expose a CICS file as an Atom feed and then consume it within an AJAX web page. This would allow me to respond to a user's request via a web page without having to reload the entire page, making the UI cleaner, more efficient and, well, 'sexier'!

The demo took very little time to create. I took the CICS FILEA sample and quickly created the necessary CICS resources to expose it as an Atom feed. A quick test in the browser showed me that all was working well and that I now had a REST-style interface for my file. CICS still owned the file and ensured that authentication was handled properly. My existing demos that use that file didn't have to change, BUT I had managed to leverage this file for new consumers. Ah, my next point: writing that AJAX web page.

I am not an AJAX (Asynchronous JavaScript and XML) programmer. To be honest I haven't written anything 'webby' for quite a while, so my skills were a little rusty. However, a quick google found lots of code snippets that I could use to make an asynchronous call to CICS to get some data.
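The gist was something like the sketch below: a plain XMLHttpRequest GET against the feed's URI, pulling the entries out of the Atom XML that comes back. The URL and element IDs here are placeholders for illustration rather than the exact values from my demo.

function loadFeed() {
    var request = new XMLHttpRequest();
    // URI that CICS maps to the Atom feed for the file (placeholder path)
    request.open("GET", "/demo/atom/filea", true);
    request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
            // responseXML holds the Atom <feed>; pull the <title> out of each <entry>
            var entries = request.responseXML.getElementsByTagName("entry");
            var list = document.getElementById("records");   // a <ul> somewhere in the page
            for (var i = 0; i < entries.length; i++) {
                var title = entries[i].getElementsByTagName("title")[0];
                var item = document.createElement("li");
                var text = (title && title.firstChild) ? title.firstChild.nodeValue : "";
                item.appendChild(document.createTextNode(text));
                list.appendChild(item);
            }
        }
    };
    request.send(null);
}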

The nice thing was I was able to use standard AJAX calls to interact with CICS. I didn't need any special IBM framework, just standard JavaScript. Also, I didn't need to be an amazing JavaScript programmer. I just bolted some code snippets together and got something really cool to work.

In a real company this would mean that we would be able to create Web 2.0 components that connect directly to CICS. This would provide the up-to-date interface that customers expect. You wouldn't even know that CICS was involved.

Monday 26 April 2010

Hobbit at IBM Impact 2010

Next week I will be attending IBM's premier conference for IT and business leaders, 'Impact 2010', at the Venetian Hotel in Las Vegas.

As I am an IBMer I am really proud to have been asked to speak at this conference, and am really looking forward to meeting our customers and showing them some of the cool functions of our software.

Over the week I will be presenting on:
Old Stager vs Greenhorn - this wasn't my title, but during the session I will be comparing application development environments on the mainframe and on other related platforms. I will be showing where modern application development paradigms fit within the mainframe and where they don't. In particular, I will be showing Rational Application Developer for System z. This tool forms part of the IBM software development platform and allows mainframe-based application developers to write applications that make the most of the mainframe.

Securing CICS Web services. Did you know that CICS can be a first-class participant in SOA and web services? Well it can, and it does the job with the usual CICS QoS. One question that we are often asked is how to strategically propagate a user's credentials with the web service request to ensure that the CICS application uses the correct ID to access confidential data. This session is going to get very technical. However, by the end you will understand what options are available to you and which may be the right option for your enterprise.

Simple Sample Web 2.0 example. Is Web 2.0 just a buzzword or does it really exist? Was there a Web 1.0? Were there bug fixes between version 1 and 2? If the answer is yes to any of the above, what does that mean for CICS? Does CICS need to play in Web 2.0, and why would it be useful? In this session I will be picking through the buzzwords to understand what CICS can do to allow Web 2.0 applications to access your data in a way that reduces risk to your existing workload and allows you to create situational applications quickly.

Why am I speaking about these topics? Surely it would be better to get a development lead to talk instead of the tester? However, as a system tester it is my job to understand the value a customer would receive from new function, so I have experience in applying these technologies to other parts of the enterprise.

Also, please remember I am not a salesperson, so each of my sessions will be from my own point of view, giving you real scenarios that I have created.

If you are planning to come to Impact and would like a more in-depth chat about any of these technologies then please:
Catch me at the end of a session
Speak to any IMPACT organiser who can contact me for you
Leave a comment on this blog
Email hobbit1983@googlemail.com


Look forward to seeing you at Impact

Wednesday 21 April 2010

What I learned today 21st April

When using the WebSphere MQ Java API in CICS you don't need to know the name of the queue manager that you are connected to.

A single CICS region can only connect to a single queue manager or a single queue-sharing group, so we can always assume that the queue manager you want to use is the same one that CICS is connected to.

However, since the MQ Java API requires you to specify a queue manager name in the constructor, as a developer this can make you think "err, where do I get the queue manager name from?". The JCICS API doesn't provide a way of getting it.

The easiest solution is to construct the queue manager object by passing an empty (not null) string into the constructor. When you later attempt to put or get a message, CICS will ensure that the correct queue manager is used.
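In code it looks something like this - a minimal sketch using the base WebSphere MQ classes for Java, with the class and queue names made up and the error handling trimmed right down:

import com.ibm.mq.MQC;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

public class CicsMqPut {

    public static void putMessage(String queueName, String payload) throws Exception {
        // Empty (NOT null) queue manager name: inside CICS the MQ adapter resolves
        // this to whichever queue manager the region is connected to.
        MQQueueManager qMgr = new MQQueueManager("");
        try {
            MQQueue queue = qMgr.accessQueue(queueName, MQC.MQOO_OUTPUT);
            MQMessage message = new MQMessage();
            message.writeString(payload);
            queue.put(message, new MQPutMessageOptions());
            queue.close();
        } finally {
            qMgr.disconnect();
        }
    }
}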

That's what I learnt today, and it made writing my CICS Java MQ application a lot easier.

Follow me: http://twitter.com/hobbit1983
Blog: http://outspokenhobbit.blogspot.com/

Tuesday 20 April 2010

Quis custodiet ipsos custodes?

"Quis custodiet ipsos custodes?" or "Who will guard the guardians?" or in my case as a software tester - who tests the testers?

As a system tester I am the final link between software being 'in development' and it being live for real users to use. We are often seen as the guardians of the users, ensuring that before it is made generally available the software is as bug-free as we can possibly make it.

But whose job is it to test the system testers? Who are we accountable to? Or, to paraphrase, who "invites me in for tea, biccies and a little chat" when things go wrong?

During my day job at IBM I provide the development organisation with high-level plans of what testing we intend to inflict upon their software. This plan is reviewed by both them and my test colleagues. During test execution we report how well we are doing against the plan.

So our intentions are reviewed and approved by the development stakeholders. However, testing is not an exhaustive activity. We can never reach 100% completion and state that the product is bug-free!

The people that get hit are the customers of the product. Using some configuration, use case, or pattern that we didn't think of, the customer finds the bug first. Personally, I hate it when a bug is found in a product that I was testing. I ask myself: why didn't I find that? Why didn't I try that configuration?

Apart from the moral pain it may cause the tester, for the customer it may mean delays to their work schedule as they work around or wait for a fix to the problem. But there is another organisation that will also be impacted.

Most (if not all) software houses have a support organisation whose job it is to fix bugs that customers have found in the product. They bear the brunt of customer complaints when the software does not behave as expected. This group of people, more than test or development, understands what customers are doing when the software fails: they know the usage patterns a customer was following when they noticed the bug.

It is the service team that 'pays the price', along with the customer, when things go wrong. Perhaps it is this position that means they should be allowed to guard the guardians and scrutinise test plans closely, to ensure that the tester has thought of the common usage patterns a customer may follow.

I'm not suggesting that this approach will mean we hit 100% bug-free software, but perhaps it will mean that customers do not hit bugs when they are doing something that 'is normal for them', and that has to be a good thing.


When software doesn't just work but delights

As users, the software we use is often just 'OK': it does what we ask of it, doesn't break too often and helps us to complete a task. This, unfortunately, is the norm for most software.

So what could change? What can we expect software to do that is better than the above? I think software should delight users, not just behave as expected. I know of no piece of software that delights in all of its functions, but occasionally a piece of software does delight me.

What do I mean by being delighted by software? I mean the software almost second-guessing my intentions, making intelligent choices about what options are available to me and changing settings to support what I am doing. Or it can mean software that streamlines the set of tasks required to complete a goal without much intervention from me. Software that has delighted me makes me stop what I am doing and smile back at the computer.

When I say that software should second-guess my intentions I add a caveat that this 'guessing' should be intelligent and treat me with respect. Making non-intelligent guesses about my intentions only makes using the software VERY difficult and annoying. A perfect example is the paperclip from MS Office. It tried to guess what I was doing and gave me a list of tasks that I might want to perform. The problem was that it often guessed my intentions incorrectly and thus annoyed me with a list of tasks that I DIDN'T want to perform.

However, today I was impressed by software, so I will sing its praises here. Today my Eclipse environment decided to remove all of my projects from the Package Explorer view. The files still existed in the workspace but I couldn't see them through Eclipse, and thus I couldn't work. Eclipse fail!

I did manage to re-import the projects back into the workspace, which added them back to the Package Explorer view. However, none of the projects had been reconnected to Rational Team Concert. This meant that I wouldn't receive any changes from my colleagues, and any changes I made would not be reflected back in the repository.

I tried to 're-share' a project back into the repository, and this was when RTC delighted me. It told me that the project was already shared in the repository and that the best thing to do would be to re-link it to the repository. Excellent, that was exactly what I wanted to do. RTC then prompted me to say that it had found 50 other projects in a similar disconnected state and that it could fix those projects at the same time. How amazing is that? I felt that the software had gone out of its way to work out what I was trying to do and then searched for other similar things I might need to do as well.

I am a massive fan of Rational Team Concert, but today's incident has made me love it even more, as it has now made that rare step and started to delight me rather than just meet my expectations. Well done, IBM Rational Jazz team.

Thursday 8 April 2010

Wishing I had ...

My wife and I have been doing a lot of exercise recently to try and lose weight. As part of this we record how much weight we have lost every week and how much exercise we have done.

Now, the key to losing weight is simply to use more energy than you eat. Simple, surely?

Working out the energy-in value is easy: just read the food wrappers and the information is there. However, calculating the energy used is a lot harder. Gym equipment can help but isn't completely accurate, and what about energy burned when not connected to a treadmill? Just writing a blog - that must use energy!

So I wish that the human body had a data access port (USB would be my preference) to allow me to download how much energy I have used and suchlike. This would also make a doctor's job a lot easier, as they would never need to run any diagnostic tests - just connect you to their laptop and get a complete view of your health.

We could even write software to interpret the data, similar to the JVM health checker. I think this could be a really good idea.

Alas, since we don't have such a wonderful port, I guess I will just have to go and get weighed again tomorrow and hope that I am a little thinner and lighter than I was a week ago.
Follow me: http://twitter.com/hobbit1983
Blog: http://outspokenhobbit.blogspot.com/

Wednesday 7 April 2010

Function Vs System Tests

I'm often asked "what is the difference between a functional and a system test - is a system test just a larger, more complex functional test?"

A functional verification test (FVT) proves that a particular isolated function of a system works as designed. The function is given a particular input and the response from the system is checked against the 'known good' response. These tests also cover what happens when incorrect data is used as input, and input data that is on the edge of 'good' and 'bad'. I suppose the key point is that an FV test is quantitative: the input and output data can be described precisely, mainly because the function under test has been isolated from the rest of the system and is simple enough to be tested in this way.
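To show the shape of an FV test, here is a made-up illustration (the discount function and the test class are hypothetical): a known input with an exact expected output, a boundary value, and a deliberately bad input.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {

    // The isolated 'function under test' - hypothetical, for illustration only.
    static double applyDiscount(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must not be negative");
        }
        return amount * 0.9;   // a flat 10% discount
    }

    @Test
    public void knownInputGivesKnownOutput() {
        assertEquals(90.0, applyDiscount(100.0), 0.001);   // quantitative: exact expected value
    }

    @Test
    public void boundaryBetweenGoodAndBad() {
        assertEquals(0.0, applyDiscount(0.0), 0.001);      // on the edge of 'good' and 'bad' input
    }

    @Test(expected = IllegalArgumentException.class)
    public void incorrectInputIsRejected() {
        applyDiscount(-1.0);                               // deliberately incorrect data
    }
}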

System testing (SVT) is often seen as 'bigger, better FV testing'. People see SVT like this because a lot of effort has gone into developing the FV tests, and running them again, only harder and faster, seems like an efficient use of an existing resource. However, that view misses the real point: customers do not use the functions of a system in isolation. You can see this in domestic applications as well as large-scale enterprise software. Take a word processor, for example. A user does not use the save function in isolation; they use it on a document that contains pictures, is in a different format, or is actually stored on a network drive. Just testing the function in isolation is not what your customer will do.

The real aim of SVT is to test the system as a whole in the way a customer will use it. This means using all useful combinations of the functions of a system to perform a particular task. When doing this we are looking to see if the system behaves in a 'reasonable' way. We can't use a specific input and expect a specific output, as in a good system test there are just too many different options and circumstances for this to be an efficient method.

Tuesday 6 April 2010

An Explosion of bytes

I haven't updated this for a while, but since my wife has started blogging over at http://weightlosslou.blogspot.com/ I thought I would update here whenever I felt I had something worthwhile to say.

I'm lunching at my desk at work and have started the weekly download of the podcasts I listen to:
The Chris Moyles podcast (http://www.bbc.co.uk/podcasts/series/moyles) - I'm a big fan of his; not always of the music he plays, but of the banter he has with the studio team and with the nation. Thus the podcast is always a good listen (all chat, no music).

The God Journey (http://thegodjourney.com/) - although I am a Christian, I have reservations about the way 'church' is managed and run (another blog topic for the future, I feel). However, Wayne and Brad often express the same feelings that I have had in the past.

The Friday Night Comedy podcast - currently this is The Now Show, which is a VERY funny show.

Anyway - why an explosion of bytes? Well, I have downloaded the above three podcasts onto my BlackBerry to play on my drive home from work during the week. In total, 60MB of data!!!

Now, I am not a Luddite by any means, but I do remember when 60MB was a significant amount of data to obtain (especially through a 36kbps modem) and to move about (just over forty 1.44MB floppy disks). Yet now I have downloaded and transferred the lot in a couple of minutes.

So, in today's terms, what is a considerable amount of data? Are we talking terabytes, petabytes???