GUI Testing

When I read the Wikipedia definition of one of the course topics, Test Automation, Graphical User Interface (GUI) testing made me curious, since I had not tested a GUI “formally” before. In the past, I only tested the basics, like the behaviors of the components, the expected outputs, and the layout of the GUI. To understand better what I should do when testing a GUI, I chose a blog that had a complete guide about GUI testing. I hoped that after reading this blog, I could learn how to test a GUI correctly. Below is the URL of the blog.

https://www.guru99.com/gui-testing.html

This blog explained the definition of GUI and GUI testing, along with the purpose and the importance of GUI testing. It also provided a checklist to ensure detailed GUI testing: checking all the GUI elements, the expected executions, correctly displayed error messages, the demarcation, the font, the alignment, the colors, the clarity and alignment of the pictures, and the positioning of GUI elements for different screen resolutions. Moreover, it mentioned the three approaches to GUI testing: Manual Based, Record and Replay, and Model Based. It also included a list of test cases and testing tools for GUI testing, as well as its challenges.

After reading the checklist, I recognized one test case that I had never paid attention to before: checking the positioning of GUI elements for different screen resolutions. This meant that neither images nor content should shrink, be cropped, or overlap when the user resized the screen. When I thought about it, it really made sense: users probably would not use the application again if it was hard to read the content or see the pictures just because their phones/PCs/laptops did not have the same screen resolution as the default one the developers set up. Therefore, it was important to remember to test the GUI elements’ positions for different screen resolutions. I would remember it and apply it the next time I tested a GUI.
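
To make this idea concrete for myself, I sketched what such a check might look like. This is only a rough sketch, assuming Selenium WebDriver in Python; the page URL, the element IDs, and the list of resolutions are hypothetical placeholders, not something taken from the blog.

```python
# Sketch: check that key GUI elements stay visible and are not cropped
# horizontally at several screen resolutions. URL and element IDs are
# hypothetical placeholders for the example.
from selenium import webdriver
from selenium.webdriver.common.by import By

RESOLUTIONS = [(1920, 1080), (1366, 768), (375, 667)]  # desktop, laptop, phone

def fits_in_viewport(driver, element):
    """Return True if the element does not overflow the viewport horizontally."""
    rect = element.rect  # {'x', 'y', 'width', 'height'}
    viewport_width = driver.execute_script("return window.innerWidth;")
    return rect["x"] >= 0 and rect["x"] + rect["width"] <= viewport_width

def test_layout_across_resolutions():
    driver = webdriver.Chrome()
    try:
        for width, height in RESOLUTIONS:
            driver.set_window_size(width, height)
            driver.get("https://example.com/login")            # hypothetical page
            logo = driver.find_element(By.ID, "logo")           # hypothetical IDs
            submit = driver.find_element(By.ID, "submit-button")
            assert logo.is_displayed() and submit.is_displayed()
            assert fits_in_viewport(driver, logo)
            assert fits_in_viewport(driver, submit)
    finally:
        driver.quit()
```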

As for the approaches to GUI testing, I had only done manual-based and model-based testing. The Record and Replay approach was interesting in my opinion: I could “record” all the test steps the first time, then “replay” them whenever I wanted to test the application again afterward. The blog did not provide a lot of information about this approach besides a brief introduction and a link to one of the tools used for it, called QTP. Since I had not used it before, I did not know its advantages and disadvantages. Therefore, I would have to research and try it first before deciding to use it in the future.
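
To get a feel for the general idea (not for how QTP itself works, since I have not used it), here is a minimal sketch of what “record and replay” could mean in principle: the recorded steps are just data, and a small driver replays them. The page, the locators, and the input values are invented for the example.

```python
# Sketch of the record-and-replay idea: recorded steps stored as data and
# replayed later by a small driver. Purely illustrative; not how QTP/UFT
# actually stores or replays recordings.
from selenium import webdriver
from selenium.webdriver.common.by import By

# A "recording": each step is (action, locator, value).
RECORDED_STEPS = [
    ("open",  None,                        "https://example.com/login"),  # hypothetical page
    ("type",  (By.ID, "user"),             "demo_user"),
    ("type",  (By.ID, "pass"),             "demo_password"),
    ("click", (By.ID, "submit-button"),    None),
]

def replay(driver, steps):
    """Replay the recorded steps against a live browser session."""
    for action, locator, value in steps:
        if action == "open":
            driver.get(value)
        elif action == "type":
            driver.find_element(*locator).send_keys(value)
        elif action == "click":
            driver.find_element(*locator).click()

if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        replay(driver, RECORDED_STEPS)
    finally:
        driver.quit()
```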

 


Integration Testing

When I read the blog entry about unit testing last week, the author mentioned integration testing as a tool to detect regressions. Since the topic of that blog entry was unit testing, I did not have a chance to research this type of testing. Therefore, I decided to choose integration testing as the topic for this week’s entry. Because I did not know much about this type of testing, I found a blog entry that had a detailed introduction to integration testing. This blog was written by Shilpa C. Roy, a member of the STH team, who had been working in the software testing field for the past nine years. Below was the URL of the blog entry:

http://www.softwaretestinghelp.com/what-is-integration-testing/

In this blog, Shilpa introduced the definition of integration testing, its approaches, and its purpose, along with the steps to create an integration test. In Shilpa’s opinion, integration testing was a “level” of testing rather than a “type” of testing. She also believed that the concept of integration testing could be applied not only in the White Box technique but also in the Black Box technique. Besides the two approaches to this testing, Bottom Up and Top Down, she also mentioned another approach called “Sandwich Testing”, which combined the features of both the Bottom Up and Top Down approaches. Moreover, Shilpa gave an example of how integration testing could be applied in the Black Box technique.

I thought that the “third” approach, Sandwich Testing, was interesting. I had always thought I could only test either top down or bottom up, but this really changed my way of thinking. By starting at the middle layer and moving simultaneously upward and downward, the job was divided into two smaller parts, which was more efficient. Unfortunately, this technique was complex and required more people with specific skill sets, so I really doubted whether I could use it in the future. But the general idea that I could start the process at the middle layer would be useful.
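
To picture how starting in the middle could work, I put together a minimal sketch of the idea: the middle (service) layer is exercised directly, a stub stands in for the lower (data) layer, and the test itself acts as the driver in place of the missing upper (UI) layer. All the class and method names here are hypothetical, not taken from Shilpa’s blog.

```python
# Sketch of the "start in the middle" idea from sandwich integration testing.
# The service layer is tested directly; a stub replaces the lower data layer,
# and the test plays the role of the upper UI layer. Names are hypothetical.
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical middle-layer component."""
    def __init__(self, repository):
        self.repository = repository  # lower layer, injected

    def total_for(self, order_id):
        items = self.repository.items_for(order_id)
        return sum(item["price"] * item["quantity"] for item in items)

class SandwichIntegrationTest(unittest.TestCase):
    def test_total_is_computed_from_repository_items(self):
        # Stub stands in for the real lower-layer repository.
        repository = Mock()
        repository.items_for.return_value = [
            {"price": 2.50, "quantity": 2},
            {"price": 10.00, "quantity": 1},
        ]
        service = OrderService(repository)

        # The test acts as the upper layer calling into the service.
        self.assertEqual(service.total_for(42), 15.00)
        repository.items_for.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()
```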

According to Shilpa, validating the XML files could be considered part of integration testing for a product whose architecture was all we knew. Since the users’ inputs would be converted into an XML format and then transferred from one module to another, validating the XML files could test the behavior of the product. In my opinion, this example made it easy to understand how integration testing could be applied in the Black Box technique. I had always thought that it was only applicable to the White Box technique, but this example proved that I was wrong. Now that I knew about it, I could try to apply it the next time I encountered similar scenarios.
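
Here is a small sketch of what that kind of check might look like, assuming user input has already been converted into an XML document that gets handed from one module to the next. The tag names and the sample document are invented for the example.

```python
# Sketch: validating an intermediate XML document as a black-box integration
# check. The structure below is a made-up example of what one module might
# hand to the next.
import unittest
import xml.etree.ElementTree as ET

SAMPLE_XML = """
<order>
  <customer>Alice</customer>
  <item sku="A-100" quantity="2"/>
</order>
"""

class IntermediateXmlTest(unittest.TestCase):
    def test_xml_contains_fields_the_next_module_expects(self):
        root = ET.fromstring(SAMPLE_XML)
        self.assertEqual(root.tag, "order")
        self.assertIsNotNone(root.find("customer"))
        item = root.find("item")
        self.assertIsNotNone(item)
        # The downstream module needs both attributes in a usable form.
        self.assertIn("sku", item.attrib)
        self.assertTrue(item.get("quantity").isdigit())

if __name__ == "__main__":
    unittest.main()
```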

Writing Great Unit Tests

Unit testing was one of the types of testing that I had played with a little before. Therefore, when I found this blog entry, Writing Great Unit Tests: Best and Worst Practices, I decided to choose it to understand more about unit testing and to check whether the unit tests I had written in the past were good or bad. Below was the URL of the blog entry:

http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

In this blog, Steve Sanderson discussed unit testing and tips for writing great unit tests. He mentioned that it was overwhelmingly easy to write bad unit tests that add very little value to a project while inflating the cost of code changes astronomically. In his opinion, unit testing was not an effective way to find bugs or detect regressions; it was more about designing software components. He also compared good unit tests with bad unit tests and provided some tips for writing great unit tests.

Steve explained why he thought that unit testing was not an effective way to detect bugs or regressions. I agreed with him that proving that components X and Y both worked independently did not prove that they were compatible with one another or configured correctly. Therefore, bugs might still occur when the application was run, which reduced the effectiveness of bug detection. Before reading this blog entry, I had never realized that unit testing was not that suitable for finding bugs. Furthermore, Steve provided a table to identify which type of test we should use for different purposes, such as finding bugs, detecting regressions, or designing software components.
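
A tiny, made-up example helped me see his point: each component’s own unit test passes, yet the two components disagree about the date format they exchange, so the bug only shows up when they are wired together. The functions below are invented purely for illustration.

```python
# Made-up illustration: both unit tests pass in isolation, but the components
# assume different date formats, so only a combined (integration-style) test
# exposes the mismatch.
import unittest
from datetime import datetime

def format_timestamp(dt):
    """Component X: formats a timestamp as DD/MM/YYYY."""
    return dt.strftime("%d/%m/%Y")

def parse_timestamp(text):
    """Component Y: expects MM/DD/YYYY, a mismatched assumption."""
    return datetime.strptime(text, "%m/%d/%Y")

class ComponentUnitTests(unittest.TestCase):
    def test_format_timestamp(self):
        self.assertEqual(format_timestamp(datetime(2017, 1, 30)), "30/01/2017")

    def test_parse_timestamp(self):
        self.assertEqual(parse_timestamp("01/30/2017"), datetime(2017, 1, 30))

class RoundTripTest(unittest.TestCase):
    def test_round_trip_exposes_the_mismatch(self):
        # "30/01/2017" is not a valid MM/DD/YYYY date, so the round trip
        # blows up even though both unit tests above pass.
        with self.assertRaises(ValueError):
            parse_timestamp(format_timestamp(datetime(2017, 1, 30)))

if __name__ == "__main__":
    unittest.main()
```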

Steve also gave some tips about writing good unit tests. In my opinion, some of them were very basic, but Steve still included them; I thought that was because he had seen a lot of bad unit tests make those mistakes. He recommended not making unnecessary assertions, because unit tests were a design specification of how a certain behavior should work, and including multiple assertions only increased the frequency of pointless failures. Moreover, unit tests’ names should be clear and consistent. Personally, I liked the way he named his example unit tests: it helped you quickly identify the subject, the scenario, and the result of the unit test. This way of naming was more descriptive than the one I had always used, and I would apply it the next time I wrote tests.
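
As a rough sketch of that naming idea (spelling out the subject, the scenario, and the expected result) together with keeping one focused assertion per test, here is an invented example; the shopping-cart code is purely illustrative and not from Steve’s post.

```python
# Illustrative only: a test name that states subject, scenario, and expected
# result, with a single focused assertion.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    @property
    def is_empty(self):
        return not self.items

# Subject: ShoppingCart; scenario: after adding an item;
# expected result: it is no longer empty.
def test_shopping_cart_after_adding_item_is_not_empty():
    cart = ShoppingCart()
    cart.add("book")
    assert not cart.is_empty  # one behavior, one assertion
```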

 

Effective Code Reviews

While looking for a podcast for my first assignment, I came across this podcast about code reviews. I decided to choose it for my first blog entry because it had some interesting opinions about code reviews. By listening to it, I learned about other benefits of code reviews besides bug detection and code improvement. Below was the link to the podcast:

https://talkpython.fm/episodes/show/102/effective-code-reviews.

In this podcast, hosted by Michael Kennedy, Dougal Matthews shared his thoughts and experience about the benefits of code reviews and the elements effective code reviews should have. Dougal gave an interesting scenario in which code reviews could save the day. For example, suppose there were two people, one who knew more about C++ and one who knew more about Python. Even though they might not deeply understand what the other person was doing, they should still do a lighter-level review of each other’s work in case one of them fell ill or left the company. He also brought up one of the reasons he thought many people did not like doing code reviews: they expected bug detection for their code, but for code that was already valid they usually received suggestions for improvement rather than bug reports. Michael and Dougal also had some interesting ideas for effective code reviews.

To be honest, I had not thought that code reviews could be used as a way to back up basic knowledge of a project, like Dougal mentioned. Usually, the first thing that came to my mind when hearing about code reviews was bug detection. But his scenario pointed out a special case, which some small teams might run into, where code reviews could help.

Regarding one of the reasons many people did not like doing code reviews, I understood their feelings: they just wanted to know whether their code had bugs, not to redo the whole project simply because they received a better solution to their problem. But I thought that we should be more flexible about that. If there was another solution that was better than our original one in one or many ways, then we should choose that solution. Besides improving the code, we also gained experience from it. The next time we encountered something similar, we could jump directly to the better solution and save the time we might otherwise have spent on the worse solutions we had used before.

I thought that the code review checklist Michael and Dougal mentioned would be very helpful. Like they said, it kept the reviewers and the developers on the same page, which certainly reduced the time everyone spent asking around about the same questions on progress. I also agreed with their idea about having more than one person review the same project. Different people had different points of view, and people might make mistakes. By having more people involved in the reviewing process, we could increase the quality of the review and share our knowledge with each other.