JSON or XML??

A recurring argument doing the rounds at work and amongst the wider developer community is whether to use JSON or XML when transferring data between systems, web services or databases. I firmly believe that most of us have a preference for one or the other, and we are guilty of reaching for our favourite data exchange format rather than the one best suited to the requirements. Both are easy to read, both are easy to parse and both can hold a lot of very useful data at a very low memory cost. But which is better? Which one is officially top dog? The simple answer is that they are both just as good as each other.

XML allows you to logically and hierarchically structure your data and you can even ensure that it is typed through the use of XML schemas.

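For illustration, a small XML document describing an application menu (the same data as the JSON example later in this article) might look something like this:

<menu id="1" value="File">
  <popup>
    <menuitem value="New" onclick="CreateNewDoc()" />
    <menuitem value="Open" onclick="OpenDoc()" />
    <menuitem value="Close" onclick="CloseDoc()" />
  </popup>
</menu>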

http://www.w3.org/XML/

This allows you to quickly validate the content of an XML document: you can confirm that the XML is indeed schema valid, that required fields are present and that data is of the correct data type. Through the now standard XPath (http://www.w3schools.com/xpath/ ), version 2.0 these days I believe, you can also search through your XML documents quickly and efficiently. Using XSL and XSLT (http://www.w3.org/Style/XSL/ ) you can transform your XML data into text, HTML or even another XML format, which is another contributing factor to XML being so flexible and user friendly. Finally, there are libraries for parsing and creating XML documents in almost every programming language available.
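As a rough, illustrative sketch of how straightforward this is from .NET, the following C# snippet loads the menu document above (assuming it has been saved as menu.xml – the file name is purely illustrative) and queries it with an XPath expression:

using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        // Load the XML document from disk.
        var doc = new XmlDocument();
        doc.Load("menu.xml");

        // Select every menuitem element beneath the popup node using XPath.
        foreach (XmlNode item in doc.SelectNodes("/menu/popup/menuitem"))
        {
            Console.WriteLine(item.Attributes["value"].Value);   // New, Open, Close
        }
    }
}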

JSON is more suited to systems where untyped or loosely structured data is prevalent, and because of this it is used extensively by the majority of the ‘big data’ \ ‘NoSQL’ databases such as MongoDB.

{
  "menu": {
    "id": 1,
    "value": "File",
    "popup": {
      "menuitem": [
        { "value": "New", "onclick": "CreateNewDoc()" },
        { "value": "Open", "onclick": "OpenDoc()" },
        { "value": "Close", "onclick": "CloseDoc()" }
      ]
    }
  }
}

http://www.json.org/

Because data is stored in name-value pair format within JSON, it is extremely easy to search through the data, or to write a parser in any programming language to retrieve and update it. It does, however, place the onus on your programming language of choice to ensure that your business logic and validation is applied to the JSON data.
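As a quick, illustrative sketch in C# using the popular Json.NET (Newtonsoft.Json) library – just one of the many JSON parsers available – reading and updating a value looks something like this:

using System;
using Newtonsoft.Json.Linq;

class JsonDemo
{
    static void Main()
    {
        string json = "{ \"menu\": { \"id\": 1, \"value\": \"File\" } }";

        // Parse the text and read a value by name.
        JObject root = JObject.Parse(json);
        Console.WriteLine((string)root["menu"]["value"]);   // File

        // Updating a value is just as straightforward.
        root["menu"]["value"] = "Edit";
        Console.WriteLine(root.ToString());
    }
}

Note that nothing stops you assigning a value of the wrong type here; as mentioned above, that validation is entirely down to your own code.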

So which is better? I think both are equally usable and suitable for serialising data these days. So when it comes to XML or JSON, the right answer is to use whichever you, as the developer \ business \ data provider, feel most comfortable with.

Redundant and unused code? Don’t be afraid to remove it!

As a software developer, at some point in your career you will have come across live, released projects which contain large sections of commented-out source code. Why is that code still there? Why, if it is obviously no longer used, should it remain in the program? More often than not the code remains there due to fear!


Yes, the majority of programmers are guilty of leaving commented-out code in the programs they are working on. Some of the excuses I have heard are absolute classics. One of the usual suspects is ‘It was like that when I first looked at it’. Another favourite is ‘I do not know what it is doing’, to which the answer is of course ‘Nothing! It’s commented out!’. I have even heard a colleague say ‘I have no idea why or who commented it out, but I have left it there in case it is important in the future’. How can it be important in the future? The code is currently live, and unless you have the phone ringing off the hook, or one of the powers that be standing at your desk looking worried or upset, the code is working as intended. Or at least as it is required to by the users!

For the sake of clean and easy-to-read code, I say it’s time to remove obsolete, unused, commented-out code. If you think that, for whatever reason, it may be important, then this is where source code versioning comes into effect. Whether you use GitHub, Subversion, TFS or any other source code repository, you can always look back at the older version of the code if need be. In the here and now, though, it’s time to be brave and get rid of that spurious code! For one, it reduces the amount of code in the source file, and it generally means a developer can read through the code quickly and debug it without being led down the wrong investigation avenue by commented-out code, wondering ‘why is this commented out? Could this be causing the problem? What does this code do?’ before finding out, after wasting time in the process, that it does nothing and is in no way causing the problem they are trying to get to the bottom of. Any time saved when fixing a bug or making a quick enhancement is precious time to a developer.

However, I know we cannot go about removing ALL commented-out code; I merely mean those whole methods or vast chunks within a method. There are always those single lines, or handfuls of lines, of commented-out code that can sometimes cause issues and raise unnecessary questions.


Take the example of a single line commented out in the middle of a method: why has that line been commented out? Is the program missing some key functionality which may be causing issues elsewhere, not only in this program but in the system it is part of? If a single line is commented out yet deemed important enough to remain in the program, perhaps because it is temporarily not required at this precise time, then it is down to the developer to leave a clear and straightforward REASON WHY directly above the commented-out line. Again, this will save time when a developer unfamiliar with the code comes across it for the first time.

At the end of the day, if code is not doing anything, is not being used, and the program is functioning correctly, then fellow developers, be brave… and remove it!

Xamarin Development Suite

Xamarin – Develop an iOS, OS X or Android Application on Windows Using C#!!

Now, with all the excitement surrounding Swift as a new language aimed specifically at developing iOS and OS X applications starting to die down a little, we can get excited about a tool I recently came across called ‘Xamarin Studio’. This has two huge draws for me as a software developer. Firstly, it allows you to develop iOS applications on a Windows operating system using a comprehensive SDK and iOS API wrapper classes. Secondly, it allows you to develop these applications using C# rather than C++, which for most .NET developers is an instant bonus and increases the appeal of using this tool significantly.

You should take a look at the Xamarin website, which can be found at:

http://xamarin.com/welcome

This development suite has a sophisticated designer tool integrated into it, so that you can quickly and easily create iOS forms, add graphics, user menus and even custom controls you have developed yourself in C#. It also allows you to build as well as visualise your application as you go, so that you can see how it is going to look on an iOS or Android device.
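To give a flavour of what this looks like in practice, here is a minimal, purely illustrative sketch of an iOS screen built entirely in C# (it assumes the unified Xamarin.iOS API; the class and control names are my own):

using CoreGraphics;
using UIKit;

// A single screen with one button, written entirely in C#.
public class HelloViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        View.BackgroundColor = UIColor.White;

        // Create a standard system button and place it on the view.
        var button = UIButton.FromType(UIButtonType.System);
        button.Frame = new CGRect(20, 100, 280, 44);
        button.SetTitle("Say Hello", UIControlState.Normal);

        // Handle the tap event just as you would any .NET event.
        button.TouchUpInside += (sender, e) =>
            button.SetTitle("Hello from C#!", UIControlState.Normal);

        View.AddSubview(button);
    }
}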


The website has plenty of step-by-step guides and ‘How to’ tutorials to help get you started, and the API documentation is very detailed. Now, I have always been curious about developing iOS applications but haven’t really pursued it, due to the fact that an Apple Mac laptop or desktop was required, along with Xcode or Cocoa. The fact that I can now develop on my Windows laptop using a language I am already very comfortable with has me wildly thinking about what type of application to develop!

So if you’re a software developer like me, keen to get involved with iOS or Android applications without having to use C++ or learn Objective-C or Swift, then Xamarin is the tool you should look at today to start building your new iOS app tomorrow!

http://developer.xamarin.com/

http://iosapi.xamarin.com/

http://developer.xamarin.com/samples/ios/all/

 

No SQL? No Problem!

Whilst working on a project recently I took the opportunity to finally delve into the world of ‘Big Data’ and ‘NoSQL’ databases. The opportunity came about because we wanted to improve a reporting tool which was taking data from text files produced by a third-party system. The quality of the data in these text files was dubious and on the whole pretty poor. It was not stored in a logical format, and performing ‘cross file’ analysis was painful. The structure of the generated text files could lend itself to import into a relational database, but the task of cleansing the data and writing the import logic was not a small one! So I thought, why not try to utilise something more fit for purpose, something that doesn’t care whether the data is in a logical or rationalised format. So I started to look at one of the most popular NoSQL databases available, MongoDB, if nothing else as a proof of concept project, but really to satisfy my curiosity about a non-relational, document\collection based database.


You can find out all about the architecture of MongoDB on its website: http://docs.mongodb.org/manual/core/introduction/ .

One thing I noticed about MongoDB was how simple it was to get up and running. The relatively small install and the ease of set-up were quite appealing in themselves. I was using a Windows OS, so I set up the mongod executable to run as a Windows service. The second positive was that MongoDB’s structure is VERY simple: it is an optimistic and flexible data store, whereas relational databases like MySQL or SQL Server are very rigid and strict in their structure. Of course, we know that rigidity is deliberate, which is why the quality of data in a well-designed and rationalised MySQL or SQL Server database is usually very high; very rarely do you get a ‘rubbish in, rubbish out’ scenario. However, when the quality of the data, especially its format, is not of a high standard, it is better to utilise a NoSQL approach.

The learning curve for MongoDB is also not steep at all. You can quickly get to grips with the commands required to create a database and start populating ‘collections’ of documents without in-depth database knowledge or DBA experience of any kind. This makes the development of a data access layer for your application using MongoDB both flexible and speedy. The number of available drivers and APIs to help you develop that data access layer is also significant and, on the whole, very impressive.

My proof of concept application was written in C#, as most of my work is done on the .NET platform, using the available MongoDB data access driver, which comprises two .DLL files. Within a very short time I was able to write some code, manipulate source data from the text files and create document collections in MongoDB. In fact, a total of approximately 50 text files, varying from 3 lines of data to several thousand, was ‘imported’ into the database as documents in under ten seconds. Retrieving the data was also as quick as lightning. However, there is more responsibility on your application code to handle the data and make it fit for purpose for display or as part of your business logic processing. So in many senses it was perfect for the project I was working on, where there was always going to be a huge reliance on knowledge of the data in the first place.
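To give a feel for how little code is involved, here is a rough sketch along the lines of what I wrote, using the 1.x C# driver (the two DLLs being MongoDB.Bson.dll and MongoDB.Driver.dll); the database, collection and field names here are purely illustrative:

using System;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

class MongoImportDemo
{
    static void Main()
    {
        // Connect to a locally running mongod instance (default port 27017).
        var client = new MongoClient("mongodb://localhost:27017");
        var database = client.GetServer().GetDatabase("reporting");
        var files = database.GetCollection<BsonDocument>("importedFiles");

        // Documents in a collection do not all have to share the same shape,
        // which is ideal for text files of dubious quality.
        files.Insert(new BsonDocument
        {
            { "fileName", "export_01.txt" },
            { "lineCount", 42 },
            { "importedAt", DateTime.UtcNow }
        });

        // Pull back the documents matching a simple query.
        foreach (var doc in files.Find(Query.EQ("fileName", "export_01.txt")))
        {
            Console.WriteLine(doc.ToJson());
        }
    }
}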

So, for my first look at a NoSQL ‘Big Data’ type database, I can honestly say that I expect its popularity only to grow. I can already see the relational database big boys like Microsoft and Oracle looking somewhat nervous in the corner…

Software testing

Test, Test and Test Again! Did I mention you should test your code?

I think it’s a pretty well-known fact that software developers aren’t the greatest people when it comes to testing software. Programmers tend to focus on testing the new functionality, methods or bug fixes that they are currently working on, rather than doing a complete retest. This level of focus on specific aspects of the code can lead to the little things being overlooked. And these can turn into BIG problems! Even with comprehensive unit tests, which only ever exercise isolated pieces of functional code rather than real-life end-to-end scenarios, good quality software testing is essential for any project implementation. This is emphasised by the software blunders and disasters that have occurred in the past. And boy, have there been some real doozies!

Ladies and Gentlemen, I give you the ‘Software Development and Testing Blunder List’!

  • In 1999 a disastrous lack of software integration testing led to one almighty and expensive disaster, on a project developed and run by NASA of all people. The Mars Climate Orbiter, after around ten months of travel from Earth to Mars, burnt up when attempting to enter the Martian atmosphere. Why? Was this just an unfortunate occurrence? No, this was one almighty software testing blunder. The testing of the different systems used on the project, namely the software used by the ground crew and the software installed on the spacecraft, was woefully inadequate. The ground-based software produced output in non-SI units of pound-force seconds (lbf·s) and passed this to the spacecraft software, which was expecting input in metric units of newton-seconds (N·s). Since 1 lbf·s is roughly 4.45 N·s, every thruster figure was out by a factor of about 4.45. Oh dear. This miscalculation caused the spacecraft to pass far too close to the planet and disintegrate in the atmosphere, because the entry trajectory was completely incorrect. Comprehensive end-to-end software testing would have highlighted this issue well before the project went ‘live’ and saved a vast amount of time, effort and money. Even worse, the mismatch wasn’t spotted during the months the spacecraft was en route to Mars, when there was still time to amend the ground crew software to change the output from pound-force seconds to newton-seconds.
  • In 1983, most people were soundly asleep in their beds, blissfully unaware of how close the world came to World War 3! At the height of the Cold War, a bug in the Soviet Union’s early warning missile detection system reported that the US had launched 5 nuclear missiles. These ‘missiles’ were in fact reflections of sunlight off fast-moving cloud tops, mimicking the movement of missiles. Disaster was averted thanks to the level-headed actions of a Soviet command centre officer, who reasoned that if the US were attacking they would not fire just 5 missiles at one location. The software was rapidly amended to cater for this ‘unforeseen’ scenario and testing was stepped up significantly.
  • Who said ‘load bearing’ testing can be a waste of time? In 1990 AT&T’s phone network experienced a nine-hour total shutdown due to a single line of buggy code in their latest software update. The update was supposed to speed up calling by calculating where to route each call through the network, but the new code couldn’t cope with demand and the software duly fell over and crashed, causing chaos. Testing had not been done at any significant level of demand prior to the release of the software upgrade.
  • Again we have another example of poor software interface testing. The Ariane 5 rocket, launched in 1996 by the European Space Agency, was destroyed shortly after take-off because the guidance computer’s calculation of the rocket’s sideways velocity went horribly wrong. The guidance computer tried to convert the velocity from a 64-bit format to a 16-bit format; the number was too big for the 16-bit value to hold, an overflow error occurred, and the guidance system program crashed and shut down! A costly error, which destroyed not only the rocket but its cargo of 4 satellites. (A small C# sketch of this kind of conversion overflow appears after this list.)
  • The infamous ‘Y2K bug’. Now, was this really a bug, or just a ‘by design’ offshoot? In order to save storage space, older legacy systems stored years in two-digit format, so when the year 2000 came around software would potentially see the year as ‘00’ and therefore revert to ‘1900’. This caused mass paranoia and led to many ‘world ending’ scenarios being dreamt up by scaremongers, journalists and conspiracy theorists. This is where good and robust testing of software could have alleviated fears of disastrous events occurring at the turn of the century.
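As mentioned in the Ariane 5 entry above, here is a minimal, purely illustrative C# sketch of that kind of conversion overflow (the value and variable names are mine, and the original flight software was written in Ada, not C#):

using System;

class OverflowDemo
{
    static void Main()
    {
        // A velocity value that sits comfortably in a 64-bit double...
        double horizontalVelocity = 500000.0;

        try
        {
            // ...but is far too large for a 16-bit signed integer (max 32,767).
            short packed = checked((short)horizontalVelocity);
            Console.WriteLine(packed);
        }
        catch (OverflowException)
        {
            // Ariane 5's guidance software had no handler for this condition,
            // so the error shut the whole guidance program down.
            Console.WriteLine("Conversion overflowed - guidance data is invalid.");
        }
    }
}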

There are of course many more examples of software disasters that could have been prevented with more thorough and comprehensive testing. You can read more on various blogs and websites, but the key point of this article is to NEVER underestimate the importance of full system testing!


C++ – It’s still here!

With all the new programming languages and frameworks that have become available recently, some proven, solid and established languages have tended to be overlooked or, dare I say it, deemed obsolete or just plain unfashionable. One of the languages affected by this trend is good old C++. Like a close family member, we developers can love it and dislike it in equal measure, but one thing you can’t ignore is that, used properly, it is still the most robust and powerful of the programming languages available to us these days.

C++ is still a highly effective cross-platform language. Regardless of whether you develop on Windows, Unix, Linux or Mac, you can use C++ to create software that fulfils your requirements. One thing that tends to put developers off is the myth that C++ is fiendishly complex. It can be, and yet it can also be quite logical and easy to pick up; you just have to build up your knowledge as time goes on, rather than rushing into complicated development without knowing the basics. As with all things, learn the basics of the language and learn them well, and the rest will come to you.

I am going to be a bit lazy here and recommend an article to read. It is a great article aimed at introducing C++ to Objective-C and Swift iOS and OS X developers, but it is just as applicable to C# and Java developers looking to learn C++.

http://www.raywenderlich.com/62989/introduction-c-ios-developers-part-1

There are still loads of great reference sites for C++ out there, for all levels of ability, to help you get to grips with the language.

http://www.cplusplus.com/

http://www.learncpp.com/

A beginner’s guide – http://www.cprogramming.com/tutorial/c++-tutorial.html

 

There are also loads of great IDEs available across all platforms these days.

CLion – http://www.jetbrains.com/clion/ – Windows, Linux and OS X

Code Blocks – http://www.codeblocks.org/downloads/26 – Windows , Linux and OS X

Visual Studio Express – http://www.visualstudio.com/en-us/products/visual-studio-express-vs.aspx – Windows

 

So if you’re curious about what C++ is all about, or have been put off in the past, put some time aside and give it a try!

C# development

.NET developers, C# 6 has arrived!

Forget about any other faddy new programming languages, put Swift to the back of your mind, cast Java away out of sight… C# 6 is here, folks!! It will ship with the next version of Visual Studio, provisionally named Visual Studio ‘14’ ahead of its official release. You can download the ‘Community Technology Preview’ edition here http://www.visualstudio.com/en-us/downloads/visual-studio-14-ctp-vs.aspx if you want to start playing with some of the new language features.

You can download a summary document of all the new features of C# 6 here https://www.codeplex.com/Download?ProjectName=roslyn&DownloadId=894944 but in this brief article I will go through some of the key features, several of which .NET developers will find very useful.

  1. There are now auto property initialisers. You can now easily declare and initialise your class property members with default values!

public string ShapeName { get; set; } = "Square";
public int Height { get; set; } = 20;
public int Width { get; set; } = 20;

You can make these get-only properties by simply omitting the ‘set’. You can also initialise a property from a private class member should you wish, although (as with field initialisers) that member needs to be a constant or static for it to compile.

private const int m_h = 20;
public int Height { get; } = m_h;

  2. Primary class and struct constructors have been introduced.

public class Shape(string name, int height, int width)
{
    public string ShapeName { get; } = name;
    public int Height { get; } = height;
    public int Width { get; } = width;
}

This can help reduce code bloat significantly and also be highly efficient. You can still include some code within the constructor body should you wish, as you would in a typical constructor.

public class Shape(string name, int height, int width)
{
    // primary constructor body
    {
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("Invalid name parameter");
    }

    public string ShapeName { get; } = name;
    public int Height { get; } = height;
    public int Width { get; } = width;
}

You can still of course include a traditional constructor within the class, especially as you might want to implement several constructor overloads, but now you have the added benefit of being able to chain to the primary constructor just as easily as you would a ‘base’ constructor.

public Shape(string name) : this(name, 20, 20) { }   // an overload chaining to the primary constructor

  3. I am not sold on this one, as I am not sure about the significant benefits, but C# 6 now implements exception filters.

try
{
    // code that might throw
}
catch (Exception err) if (myFilter(err))
{
    // handle the exception, as the filter evaluated to true
}

So essentially, if the ‘myFilter’ method (which could be private, public, static or a delegate) evaluates to true, then the catch block gets executed. Straightforward enough!!

  4. Now this is quite exciting: C# 6 has introduced null conditional operators. These should help reduce the amount of null-checking code that you have to type. My fellow developers, you have to admit that when writing your validation routines you have got sick of typing line after line of code to check for nulls and then throw an appropriate exception.

int? ourWidth = m_shape?.Width;   // ourWidth is null if m_shape is null, otherwise it holds the Width value

The best thing is that we can also assign a default value to ourWidth when m_shape is null, by using the null-coalescing (??) operator!

int? ourWidth = m_shape?.Width ?? 0;   // ourWidth is assigned the value 0 if m_shape is null

This null conditional operator can also be used when calling a method on an object: if the object reference is null, the method will not get called.

int? ourArea = m_shape?.CalculateArea();   // ourArea is null if m_shape is not instantiated, otherwise CalculateArea is called

  5. Dictionary initialisers. These allow you to declare a dictionary and assign values to specific keys in a single statement.

var numbers = new Dictionary<int, string> { [7] = "seven", [9] = "nine", [13] = "thirteen" };

Now I am sure you will agree that some of these enhancements will be worthwhile and allow us to continue to develop cleaner and more efficient code. Roll on VS 2014!!! Or whatever they decide to finally call it!