Will Woods wrote:
Well said! I'd say CBI does interactive testing (with nicely automated results reporting), while the automated test suite is non-interactive.
Agreed. It's also worth pointing out that a hand-crafted test suite typically has both known inputs and known correct outputs. CBI doesn't know what the user's inputs will be, and therefore uses a simpler form of known correct output: namely, that crashing is incorrect no matter *what* the input was.
Of course, crashing is just the simplest common baseline that everyone can agree is bad. If you have a more specialized way of recognizing bad program behavior we can use that too: assertion failures, g_error() calls, malformed output, etc. All we really need is a way to label a given run as "good" or "bad".
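That "good"/"bad" labeling could be sketched roughly like this (a minimal illustration only, assuming a POSIX system; `label_run` is a hypothetical helper, not part of CBI itself):

```python
import subprocess

def label_run(argv):
    """Label one program run as "good" or "bad" for CBI-style analysis.

    Illustrative sketch: here a run is "bad" if the process died on a
    signal (i.e. crashed). Any more specialized oracle -- assertion
    failures, g_error() calls, malformed output -- could be substituted
    for the check below.
    """
    proc = subprocess.run(argv, capture_output=True)
    # On POSIX, a negative returncode means the process was terminated by
    # a signal (e.g. -11 for SIGSEGV), which we treat as a crash.
    return "bad" if proc.returncode < 0 else "good"
```

The point is just that the oracle is pluggable: crashing is the lowest common denominator, but anything that maps a run to one of the two labels fits the same framework.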
This is brilliant. I'd love to get you guys involved more directly in Fedora testing. What can we do to help?
Thanks! We'd welcome the opportunity to partner up with Red Hat / Fedora in some way. I think what would be needed is someone inside the Red Hat / Fedora organization who wants to champion this: someone who is enthusiastic about the idea and able to convince others to buy in. The CBI team is small, but we can provide expertise and help you work through any problems that may arise, since we feel highly motivated to see this approach get adopted more broadly. What we'd really need is that inside champion to work with on this project. Will, are you that person?