Before it gets lost in history, I want to riff off Alan Page once again, who made some excellent points. But as someone who has been designing and creating GUI (browser) tests for a long time, I'd like to address some of those points and also call out some bits of ROI that Alan missed.
Making UI tests not fragile is design work. It requires an understanding of the architecture of the application. In the case of a web app, the UI test designer really needs to understand things like how AJAX works, standards for identifying elements on pages, how the state of any particular page may or may not change, how user-visible messages are managed in the app, and so on. Without this sort of deep understanding of software architecture, UI tests are bound to be fragile.
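One concrete piece of that design work is how a test handles AJAX: a robust test never asserts on a page the instant an action fires, it waits for the state to settle. Here is a minimal sketch of that explicit-wait pattern in Python. The page model is a stand-in (a plain dict) for whatever your driver actually returns, and all the names are illustrative, not from any particular framework.

```python
import time

def wait_for(condition, timeout=2.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or the timeout
    elapses. This is the heart of an AJAX-aware assertion: rather than
    checking once and failing on a race, the test tolerates any delay
    up to the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulated page state: an AJAX response populates the results late.
page = {"results": None}

def fake_ajax_completes():
    # In a real test the browser would do this asynchronously;
    # here we flip the state directly to keep the sketch runnable.
    page["results"] = ["first hit", "second hit"]

fake_ajax_completes()
hits = wait_for(lambda: page["results"])
assert hits == ["first hit", "second hit"]
```

The same pattern is what libraries like Selenium expose as explicit waits; a test built on it survives timing changes that would break a naive check-once assertion.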
I've said before that UI tests should be a shallow layer that tests only presentation, and that rarely tests logic, math, or any underlying manipulation of data. If tests are designed in this way, then they will be robust and maintainable over the life of the app.
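To make "a shallow layer that tests only presentation" concrete, here is a small illustrative sketch. The renderer and the figure it receives are stand-ins I invented for the example; the point is that the test trusts the backend's number and only verifies that the number actually appears where the user will look, rather than recomputing any business math.

```python
def render_invoice_page(total_from_backend):
    """Pretend renderer: returns the markup the user would see.
    In a real suite this would be the live page, not a function."""
    return f'<span id="invoice-total">{total_from_backend}</span>'

# The backend (or a stub of it) supplies the figure; the UI test
# asserts presentation only, never the arithmetic behind the figure.
html = render_invoice_page("$1,234.56")
assert 'id="invoice-total"' in html
assert "$1,234.56" in html
```

Because the test makes no claim about how the total was computed, a change to the pricing logic is covered by unit tests underneath it, and this check keeps passing as long as the presentation contract holds.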
UI test design is a skill. Designing such tests is no harder or easier than any other activity that requires skill and understanding. The tools with which I am familiar provide enough power to create reasonable, robust, maintainable tests.
Finally, I think Alan is missing one aspect of automated UI tests that I find the most valuable of all.
From green-screen mainframe systems to bleeding-edge web applications, in my experience every software system suffers from one particular sort of error that is always extremely difficult to see when testing manually: when something goes missing.
A search that used to return results no longer does. A widget that used to be on the page no longer is. A bit of text critical to the user's work goes missing.
Actions that cause errors are easy to find when testing manually, as are errors of presentation. Elements and functions that used to exist but no longer do are difficult to find: it is not easy for a human being to see the absence of a thing, but such errors stick out like sore thumbs in an automated UI test suite.
In my experience, this is one of the most valuable aspects of automated UI testing, and one of the best reasons to invest in UI test automation. The absence of a thing in the UI is simply not detectable with unit tests or with integration tests. That critical bit of function that doesn't manage to cross the last interface to the UI is only detectable at the UI itself, and automated UI tests are very, very good at detecting errors where something has gone missing.
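The reason automated checks catch absences so reliably is mechanical: every required element is asserted on every run, so a vanished widget fails the suite immediately instead of waiting for a human to notice what isn't there. A minimal sketch of the idea, using a toy page model and element IDs I made up for illustration:

```python
import re

# The inventory of things the page must always present to the user.
REQUIRED_ELEMENTS = {"search-box", "results-list", "help-link"}

def elements_on_page(html):
    """Collect the ids present in the markup (toy extraction;
    a real suite would query the DOM through its driver)."""
    return set(re.findall(r'id="([^"]+)"', html))

page_before = '<input id="search-box"><ul id="results-list"><a id="help-link">'
page_after = '<input id="search-box"><ul id="results-list">'  # help link dropped

# Before the regression: nothing is missing, the check passes silently.
assert REQUIRED_ELEMENTS - elements_on_page(page_before) == set()

# After the regression: the absence is named explicitly in the failure.
missing = REQUIRED_ELEMENTS - elements_on_page(page_after)
assert missing == {"help-link"}
```

A human tester scanning the second page would very likely not register that the help link had gone; the suite flags it by name on the next run.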