I've been doing a bit of test case work over the past day. While doing this, I came across a very bad pattern: there were test cases committed to OFBiz that *never* worked. If you are going to write a test case, or commit a patch that includes a test case, make absolutely certain that the test case *WORKS*.

All of the accounting test cases failed when they were initially checked in. They modified entities that *DID NOT EXIST* in any seed/demo data. This is just so very bad.

I'm having to go back through and *reopen* the bugs that contained these test cases.

ps: I'm very frustrated by this. If you don't understand something, then do *not* commit it. Please. With sugar on top.
Adam,

I appreciate your efforts in the recent commits to clean up the test cases and their related code. I am happy that people are now waking up and wanting to contribute to this important part of OFBiz.

Just so you know, almost every test case in accounting requires certain pre-conditions that need to be fulfilled before the test case will actually run. These can take various forms: running the pre-condition manually, step by step, from the OFBiz UI, or defining demo data (the latter is the best way to go).

To make it clear, every test case was committed after it WORKED. AFAIK, most of the issues related to the JUnit test cases are now closed. We were working some time back on improving the JUnit test cases so that they run successfully and independently of each other; for that we need to define a big pile of demo data. In that direction, certain patches were uploaded to improve them, and the recent patches contain the demo data. Unfortunately, these were uploaded on closed issues. I haven't had time to look into them and fix them (priorities). I hope you can now take on some of these issues.

Vikas
Vikas Mayur wrote:
> Just so you know, almost every test case in accounting requires certain
> pre-conditions that need to be fulfilled before the test case will
> actually run. These can take various forms: running the pre-condition
> manually, step by step, from the OFBiz UI, or defining demo data (the
> latter is the best way to go).

If it requires a pre-condition, then it's not a test case; it's just some code that does something. Test cases are supposed to be *automated*.

> To make it clear, every test case was committed after it WORKED.

How did it work? I reverted to 660193, the last patch for OFBIZ-1790, and the accounting tests failed.

If they worked in the past, I'd like to know when. If so, then something since then has caused them to break, and I will more than gladly track that down. However, if they have never worked (which is what I strongly suspect), then I stand by my original assessment.
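The "no pre-conditions" requirement Adam describes can be illustrated with a minimal sketch: a test that seeds everything it needs in its own setup step, so it passes on a fresh checkout with no manual UI work. All names here are hypothetical, not real OFBiz API; a plain map stands in for the entity store.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a self-contained test. setUp() loads the test's own fixture
// data, so no manual pre-condition is needed before it can run.
public class SelfContainedTestSketch {
    private final Map<String, Object> store = new HashMap<String, Object>();

    void setUp() {
        // In OFBiz terms, this would be the seed/demo data the test relies on.
        store.put("EXAMPLE_GL_ACCOUNT", "140000");
    }

    void testPostingUsesSeededAccount() {
        setUp();
        Object account = store.get("EXAMPLE_GL_ACCOUNT");
        if (account == null) {
            throw new AssertionError("fixture missing: the test must seed its own data");
        }
    }

    public static void main(String[] args) {
        new SelfContainedTestSketch().testPostingUsesSeededAccount();
        System.out.println("ok");
    }
}
```

The point of the sketch is only the shape: the fixture load lives inside the test, not in a README of manual steps.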
On Mar 5, 2009, at 9:14 PM, Adam Heath wrote:
> If it requires a pre-condition, then it's not a test case; it's just
> some code that does something.
>
> Test cases are supposed to be *automated*.

You are just rephrasing the same thing again.
Vikas Mayur wrote:
> Do not know why it is not working for you, and I have no idea/solution
> for this.

If you run the tests individually, and follow the instructions in the file, they'll probably work.

However, that's not how things are done.

All tests are run together. Every testdef/*.xml file that is in any ofbiz-component.xml is run one after the other, with no chance for any manual setup between tests.

In this circumstance, they do not work, and never did work. It is in this circumstance that an *automated* test case must work.
On Mar 7, 2009, at 2:01 AM, Adam Heath wrote:
> In this circumstance, they do not work, and never did work. It is in
> this circumstance that an *automated* test case must work.

I do not know what the point is of discussing the same thing again and again. I agree with your point about making the tests automated, and a lot of people have complained about this in the past, but no one really came forward to contribute.

It is really useless to complain that these things in the trunk are frustrating you because they are not written properly; why not complain early in the process and not after a YEAR or so? Sorry man, no time to look back. Why not fix them yourself if you see issues?
I've been a committer on a number of xxxUnit projects in the past, and grew up as one of the people bringing agile development processes to many different organizations, so I'd like to think that I'm pretty savvy on this stuff. That being said, I've never been happy with the way the testing frameworks work in OFBiz - some of that because of my own ignorance, but mostly because of the dependencies. I've built code in a test-driven environment, and let me just say that we had few bugs that weren't caught; when people added stuff, we knew just about every time when there were side effects, and were able to fix them quickly.

What I'd like to see sometime soon is something that works like this:

1. Each test (note I did not say component or test suite or test group, I said test) is totally independent.

2. Each test utilizes entity engine XML files to load the appropriate data necessary for that test.
-- Sometimes this will mean loading the same or similar XML files a few times.
-- That's ok :)

3. Each test puts the db back in exactly the same state as it was in before the test.
-- I used to use DbUnit to do this in the past.
-- Did this for both WebTest tests (functional) and normal JUnit tests.
-- Worked like a charm.
-- This should be even easier for us because the Entity Engine can keep track of all we do and roll it all back.
-- I know that Scott Gray has been working with this for a bit - and it would be a HUGE win IMHO.

4. Utilizing the Entity Engine for better testing.
-- This is alluded to in #3 above about the rollbacks.
-- It would also be cool if it could keep track of all you do, BUILD an entity engine XML file, and save it if you like.
-- -- This should be super easy as well.
-- Then you could use the files you're generating in these tests for future tests.

Anyways, that's my wish list, and if we start to get it into place, I think we can build TONS of new unit tests around the existing work. It will make everyone's lives easier and the project even more viable long term. Looking forward to feedback whenever you guys get a chance, but I really feel this is the way we should go.

Cheers,
Tim
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com
o:801.649.6594
f:801.649.6595
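Point 3 of the wish list - put the db back in exactly the pre-test state - can be sketched in a few lines. This is only an illustrative snapshot/restore shape (the DbUnit-style approach Tim mentions), not the Entity Engine mechanism itself; a map stands in for the database.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of rollback-per-test: snapshot the state before the test,
// restore it afterwards, so each test leaves the db exactly as it found it.
public class RollbackPerTestSketch {
    private Map<String, String> db = new HashMap<String, String>();
    private Map<String, String> snapshot;

    void begin()    { snapshot = new HashMap<String, String>(db); } // record pre-test state
    void rollback() { db = snapshot; }                              // restore it afterwards

    public static void main(String[] args) {
        RollbackPerTestSketch t = new RollbackPerTestSketch();
        t.db.put("seed", "demo");            // pre-existing seed data
        t.begin();
        t.db.put("Invoice#1", "CREATED");    // the test mutates the db
        t.rollback();                        // db is back to the pre-test state
        System.out.println(t.db.containsKey("Invoice#1")); // false
        System.out.println(t.db.containsKey("seed"));      // true
    }
}
```

A real implementation would snapshot at the transaction level rather than copying data wholesale, but the contract is the same: seed data survives, test mutations do not.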
I haven't worked on it for a few weeks, but I do have some code that can track changes on the GenericDelegator and then reverse them when requested. At the moment it makes the tests independent at the component level, mostly because that was the easiest place to do it. I've tested it by exporting the data from a fresh install, running the tests, exporting again, and comparing the differences; at the moment the only data that gets left behind is anything coming from async service calls.

I'll try to make some time over the next couple of days to get it working at the test level, and then put a patch in jira for review. Of course, the problem with committing it is that a large percentage of the tests will fail, because they depend on the tests that came before them.

Regards
Scott

HotWax Media
http://www.hotwaxmedia.com
801.657.2909
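The track-and-reverse idea Scott describes can be sketched as an undo log: every write records its inverse operation, and reversing unwinds the log in LIFO order. The names below are illustrative only, not the actual GenericDelegator API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of delegator-style change tracking: each create logs an inverse
// operation, and reverse() applies the inverses in reverse (LIFO) order.
public class ChangeTrackerSketch {
    interface Undo { void apply(Map<String, String> db); }

    private final Map<String, String> db = new HashMap<String, String>();
    private final Deque<Undo> undoLog = new ArrayDeque<Undo>();

    void create(final String key, String value) {
        db.put(key, value);
        undoLog.push(new Undo() {           // record the inverse: a remove
            public void apply(Map<String, String> d) { d.remove(key); }
        });
    }

    void reverse() {                        // unwind every tracked change
        while (!undoLog.isEmpty()) {
            undoLog.pop().apply(db);
        }
    }

    public static void main(String[] args) {
        ChangeTrackerSketch t = new ChangeTrackerSketch();
        t.create("Payment#1", "RECEIVED");
        t.create("Invoice#1", "PAID");
        t.reverse();
        System.out.println(t.db.isEmpty()); // true
    }
}
```

This also hints at why async service calls leak data in Scott's experiment: writes made outside the tracked delegator never enter the undo log.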
Thanks Scott - as for the tests failing, that's what happens when they get refactored; we'll have to get some people fixing them as the fix goes in. I'd rather not leave this at the component level, because that's not independent testing - it just isolates the component. Anyways, I'm interested to see what others think, but these mods that Scott's talking about do have the possibility of making this a super powerful tool going forward.

Scott, what do you think of #4 from my list?

Cheers,
Tim
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com
o:801.649.6594
f:801.649.6595
I like the idea of saving the test results for another test. I know we are just starting the discussion, but I would like to throw out that, at least for accounting, some form of testing (not a single test) needs to follow a flow from a lot of different inputs - about 10,000 - and finish with a compare against known results already stored. That tests all the math from beginning to end.

Just my 2 cents.
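The "known results" idea above is essentially a golden-file test: run every input through the calculation and compare the whole output set against expected values that were verified once and then stored. A minimal sketch, with a stand-in function in place of the real accounting math:

```java
import java.util.Arrays;

// Sketch of a golden-result comparison: many inputs, one stored set of
// expected outputs, one comparison at the end of the flow.
public class GoldenResultSketch {
    // Stand-in for the real end-to-end accounting calculation.
    static long post(long amountCents) {
        return amountCents * 2;
    }

    public static void main(String[] args) {
        long[] inputs = {100, 250, 999};      // in practice, ~10,000 inputs
        long[] golden = {200, 500, 1998};     // results verified once, then stored
        long[] actual = new long[inputs.length];
        for (int i = 0; i < inputs.length; i++) {
            actual[i] = post(inputs[i]);
        }
        System.out.println(Arrays.equals(actual, golden)); // true
    }
}
```

The value of this style is breadth: a single comparison catches any regression anywhere in the math, at the cost of the golden set needing an update whenever the intended behavior changes.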
I'm still for running tests as a set for each suite.

If you disagree with me, take a look at some of the current test suite XML files and explain to me how it makes sense, or is even possible, to run most of them with 100% independent tests. You can't even load or assert data if you run each test case independently...

-David
In reply to this post by Adam Heath-2
Yeah, I guess I'm going to have to get into the test data in order to disprove this. I just don't see how it could be possible that we cannot load the appropriate data for a single test before and put the db back. Whether or not this is feasible in the sense of timing on these particular tests is another matter. The way it runs now, those other tests must be putting the data in the right state for someone to run the next test - which is tantamount to a data load.
David, please let me know whether this is just my ignorance on this particular data setup or if my assumptions above are incorrect.

Cheers,
Tim
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com

o:801.649.6594
f:801.649.6595

----- "David E Jones" <[hidden email]> wrote:

> I'm still for running tests as a set for each suite.
>
> If you disagree with me, take a look at some of the current test suite XML files and explain to me how it makes sense, or is even possible, to run most of them with 100% independent tests. You can't even load or assert data if you run each test case independently...
>
> -David
>
> On Mar 7, 2009, at 1:40 PM, Scott Gray wrote:
>
>> I haven't worked on it for a few weeks but I do have some code that can track changes on the GenericDelegator and then reverse them when requested. At the moment it makes the tests independent at the component level, mostly because that was the easiest place to do it. I've tested it by exporting the data from a fresh install, running the tests, exporting again and comparing the differences, and at the moment the only data that gets left behind is anything coming from async service calls.
>>
>> I'll try and make some time for getting it to work at the test level over the next couple of days and then put a patch in jira for review. Of course the problem with committing it is that a large percentage of the tests will fail because they depend on the tests that came before them.
>> Regards
>> Scott |
A good file to see this in is servicetests.xml. While all tests in this file can be run together, there are really 3 different sets in the file that could be independent.

Anyway, here is one set of test-cases that are meant to be run together:

    <test-case case-name="load-service-test-data">
        <entity-xml action="load" entity-xml-url="component://service/testdef/data/ServiceTestData.xml"/>
    </test-case>
    <test-case case-name="service-dead-lock-retry-test">
        <service-test service-name="testServiceDeadLockRetry"/>
    </test-case>
    <test-case case-name="service-dead-lock-retry-assert-data">
        <entity-xml action="assert" entity-xml-url="component://service/testdef/data/ServiceDeadLockRetryAssertData.xml"/>
    </test-case>

and here is another:

    <test-case case-name="service-own-tx-sub-service-after-set-rollback-only-in-parent">
        <service-test service-name="testServiceOwnTxSubServiceAfterSetRollbackOnlyInParentErrorCatchWrapper"/>
    </test-case>
    <test-case case-name="service-own-tx-sub-service-after-set-rollback-only-in-parent-assert-data">
        <entity-xml action="assert" entity-xml-url="component://service/testdef/data/ServiceSetRollbackOnlyAssertData.xml"/>
    </test-case>

-David

On Mar 7, 2009, at 3:57 PM, Tim Ruppert wrote:

> Yeah, I guess I'm going to have to get into the test data in order to disprove this. I just don't see how it could be possible that we cannot load the appropriate data for a single test before and put the db back. Whether or not this is feasible in the sense of timing on these particular tests is another matter. The way it runs now, those other tests must be putting the data in the right state for someone to run the next test - which is tantamount to a data load.
>
> David, please let me know whether this is just my ignorance on this particular data setup or if my assumptions above are incorrect.
|
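Each set David shows is really one logical unit: a data load, a service call, and a data assert that only make sense executed in order against shared state. A rough sketch of that grouping in plain Java - the runner and case names here are invented for illustration, not the real OFBiz test runner:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Rough sketch of a suite whose test-cases share state and must run in
// order: a "load" step, a "service-test" step, then an "assert data" step.
// This mimics the structure of servicetests.xml; it is not the real runner.
public class SuiteSketch {

    static final List<String> loadedData = new ArrayList<>();

    static boolean loadServiceTestData() { loadedData.add("ServiceTestData"); return true; }
    static boolean testServiceDeadLockRetry() { return loadedData.contains("ServiceTestData"); }
    static boolean assertData() { return loadedData.contains("ServiceTestData"); }

    public static boolean runSuite() {
        // Later cases depend on the earlier load step, which is exactly
        // why these three only make sense run together as a set.
        List<Supplier<Boolean>> cases = List.of(
                SuiteSketch::loadServiceTestData,
                SuiteSketch::testServiceDeadLockRetry,
                SuiteSketch::assertData);
        for (Supplier<Boolean> c : cases) {
            if (!c.get()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(runSuite() ? "suite passed" : "suite failed");
    }
}
```

Run in isolation, `testServiceDeadLockRetry` would fail for lack of data - which is Adam's argument for treating the triple as a single test case or a self-contained suite.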
Thanks for the pointer - I'll dig into this ASAP.
Cheers,
Tim
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com

o:801.649.6594
f:801.649.6595

----- "David E Jones" <[hidden email]> wrote:

> A good file to see this in is servicetests.xml. While all tests in this file can be run together, there are really 3 different sets in the file that could be independent.
>
> -David |
In reply to this post by David E Jones-3
David E Jones wrote:
> A good file to see this in is servicetests.xml. While all tests in this file can be run together, there are really 3 different sets in the file that could be independent.
>
> Anyway, here is one set of test-cases that are meant to be run together:
>
> <test-case case-name="load-service-test-data">
>     <entity-xml action="load" entity-xml-url="component://service/testdef/data/ServiceTestData.xml"/>
> </test-case>
> <test-case case-name="service-dead-lock-retry-test">
>     <service-test service-name="testServiceDeadLockRetry"/>
> </test-case>
> <test-case case-name="service-dead-lock-retry-assert-data">
>     <entity-xml action="assert" entity-xml-url="component://service/testdef/data/ServiceDeadLockRetryAssertData.xml"/>
> </test-case>

So that is either a group/suite, or a single test case.

> <test-case case-name="service-own-tx-sub-service-after-set-rollback-only-in-parent">
>     <service-test service-name="testServiceOwnTxSubServiceAfterSetRollbackOnlyInParentErrorCatchWrapper"/>
> </test-case>
> <test-case case-name="service-own-tx-sub-service-after-set-rollback-only-in-parent-assert-data">
>     <entity-xml action="assert" entity-xml-url="component://service/testdef/data/ServiceSetRollbackOnlyAssertData.xml"/>
> </test-case>

As is this. They should be moved to a separate suite.xml, or combined into a single test case. |
In reply to this post by Scott Gray-2
Scott Gray wrote:
> I haven't worked on it for a few weeks but I do have some code that can track changes on the GenericDelegator and then reverse them when requested. At the moment it makes the tests independent at the component level, mostly because that was the easiest place to do it. I've tested it by exporting the data from a fresh install, running the tests, exporting again and comparing the differences, and at the moment the only data that gets left behind is anything coming from async service calls.

My code doesn't require anything fancy. It just makes a backup copy of the entire data folder, and restores it between test runs. It was easy to do this, then try to have some filter that rolls back a complex series of changes.

> I'll try and make some time for getting it to work at the test level over the next couple of days and then put a patch in jira for review. Of course the problem with committing it is that a large percentage of the tests will fail because they depend on the tests that came before them.

Then they shouldn't be a separate set of tests. Whatever granularity we decide upon, the individual unit *must* be completely self-contained, be completely automatable, and not have any external dependencies, be it manually-run configuration (creating entities thru some front-end, etc.), or requiring some other piece of automation to be run beforehand. |
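Adam's backup-and-restore scheme amounts to copying the data folder aside before the run and copying it back afterward. A sketch with plain java.nio.file calls; the temp directories here stand in for the real Derby data folder, whose location this sketch does not assume:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;

// Sketch of the backup-and-restore idea: snapshot the whole data folder
// before the tests run, then wipe and restore it afterward. Temp
// directories stand in for the real database files.
public class DataFolderBackup {

    static void copyTree(Path src, Path dst) throws IOException {
        try (var walk = Files.walk(src)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p)) Files.createDirectories(target);
                else Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    static void deleteTree(Path root) throws IOException {
        try (var walk = Files.walk(root)) {
            // Delete children before parents.
            for (Path p : walk.sorted(Comparator.reverseOrder()).toList()) {
                Files.delete(p);
            }
        }
    }

    public static String demo() {
        try {
            Path data = Files.createTempDirectory("data");
            Path backup = Files.createTempDirectory("backup");
            Files.writeString(data.resolve("seed.txt"), "seed");

            copyTree(data, backup);                      // back up before the test run
            Files.writeString(data.resolve("junk.txt"), "left behind by a test");

            deleteTree(data);                            // restore between runs
            copyTree(backup, data);
            return Files.exists(data.resolve("junk.txt")) ? "dirty" : "clean";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The trade-off the thread goes on to discuss is visible here: this works on any file-based store like Derby, but says nothing about a database server whose files you cannot simply copy while it is running.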
From: "Adam Heath" <[hidden email]>
> Scott Gray wrote:
>> I haven't worked on it for a few weeks but I do have some code that can track changes on the GenericDelegator and then reverse them when requested.
>
> My code doesn't require anything fancy. It just makes a backup copy of the entire data folder, and restores it between test runs. It was easy to do this, then try to have some filter that rolls back a complex series of changes.

Yes, this makes sense indeed (even better with a than than a then ;o)

Jacques |
In reply to this post by Adam Heath-2
2009/3/8 Adam Heath <[hidden email]>
> Scott Gray wrote:
>> I haven't worked on it for a few weeks but I do have some code that can track changes on the GenericDelegator and then reverse them when requested.
>
> My code doesn't require anything fancy. It just makes a backup copy of the entire data folder, and restores it between test runs. It was easy to do this, then try to have some filter that rolls back a complex series of changes.

That approach is fine by me, except it limits you to testing with derby only rather than a production type database. |
Scott Gray wrote:
> 2009/3/8 Adam Heath <[hidden email]>
>
>> My code doesn't require anything fancy. It just makes a backup copy of the entire data folder, and restores it between test runs. It was easy to do this, then try to have some filter that rolls back a complex series of changes.
>
> That approach is fine by me, except it limits you to testing with derby only rather than a production type database.

Not really, just alter the script to create a backup of the binary production db, and stop/start it as well. Of course, that means running as root or the same user as the other database, which may not be feasible. |
Adam - I don't disagree with you often, but this is not the way to do this at all. I'd much rather have the db rollback the changes that were made and use data inserts than keep a collection of data files in place and hope that that always works - especially once you've upgraded the db, etc.
Data files and rollbacks will always work - that's definitely my vote.

Cheers,
Tim
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com

o:801.649.6594
f:801.649.6595

----- "Adam Heath" <[hidden email]> wrote:

> Not really, just alter the script to create a backup of the binary production db, and stop/start it as well. Of course, that means running as root or the same user as the other database, which may not be feasible. |
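Scott's delegator-level change tracking and Tim's preference for rollbacks both reduce to the same mechanism: keep an undo log of writes and replay it in reverse. A minimal sketch of that idea - the map-backed store and row keys are invented for illustration and are not the GenericDelegator:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of tracking writes and reversing them, in the spirit of
// Scott's GenericDelegator tracker and Tim's rollback preference. The
// map-backed store is invented for illustration; it is not OFBiz code.
public class UndoLogSketch {

    final Map<String, String> table = new HashMap<>();
    final Deque<Runnable> undoLog = new ArrayDeque<>();

    void put(String key, String value) {
        final String old = table.put(key, value);
        // Record the inverse operation so the write can be undone later.
        undoLog.push(old == null ? () -> table.remove(key)
                                 : () -> table.put(key, old));
    }

    // Reverse all tracked changes newest-first, like a transaction rollback.
    void rollback() {
        while (!undoLog.isEmpty()) {
            undoLog.pop().run();
        }
    }

    public static void main(String[] args) {
        UndoLogSketch store = new UndoLogSketch();
        store.table.put("Invoice:1", "DRAFT");   // untracked "seed" row

        store.put("Invoice:1", "PAID");          // a test updates a row...
        store.put("Invoice:2", "NEW");           // ...and creates another

        store.rollback();                        // seed row restored, test rows gone
        System.out.println(store.table);
    }
}
```

Unlike the file-copy approach, this works against any backing database, which is the advantage Tim is arguing for; the cost is that every write path must go through the tracker, and (as Scott notes for async service calls) writes that bypass it are not rolled back.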