From: "Jacques Le Roux"

A huge task ahead...

Jacques

From: "Adam Heath" <[hidden email]>
> Scott Gray wrote:
>>> Additionally, just because a line has been noted in cobertura, doesn't
>>> mean all variations have been tested. Consider the case that some
>>> condition is doing some kind of pattern match, or looking at
>>> Collection.contains or Map.containsKey. It's much simpler to verify
>>> that everything is tested when it is done explicitly.
>>
>> Okay, I see what you mean now: it's a bad thing that coverage is
>> reported without explicit, thorough testing, even though the indirect
>> coverage is still better than no coverage whatsoever.
>
> As a better example, let's say that there is only 10% coverage on the
> entire ofbiz code base, but base has 100% coverage. That other 90%
> of untested code may rely on parts of base that may not work, and
> would break the higher-level code.
>
> It's easier to write tests that are close to the code being tested.
> Trying to tweak a high-level test to make certain all low-level code
> is exercised is very, very difficult.
>
> Plus, if a typo gets introduced in one of those map keys, it might not
> be detected until much, much later when explicit tests are not used.
>
> In my opinion, as each new component is activated in the ofbiz system,
> it should have its own set of tests that move it close to 100%
> coverage. So, I can test just base and get 100%, then base+sql and
> get 100% on base+sql, then base+sql+entity and get 100% on
> base+sql+entity, and so on. You want to make certain that earlier
> components are correct before utilizing later ones, or the entire test
> run may fail spectacularly.
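Adam's point about Collection.contains/Map.containsKey can be illustrated with a small sketch. The class and test below are invented for illustration, not from OFBiz: a line-coverage tool marks the containsKey line as covered as soon as any caller reaches it, while explicit tests exercise both outcomes.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical lookup class: Cobertura reports the containsKey line as
// covered once any caller reaches it, even if only one outcome was taken.
class HandlerRegistry {
    private final Map<String, String> handlers = new HashMap<>();

    void register(String type, String handler) {
        handlers.put(type, handler);
    }

    String lookup(String type) {
        // Both outcomes of this condition need their own explicit test.
        if (handlers.containsKey(type)) {
            return handlers.get(type);
        }
        return "default";
    }
}

public class HandlerRegistryTest {
    public static void main(String[] args) {
        HandlerRegistry reg = new HandlerRegistry();
        reg.register("pdf", "PdfHandler");

        // Explicitly test the containsKey hit...
        if (!"PdfHandler".equals(reg.lookup("pdf"))) {
            throw new AssertionError("hit branch broken");
        }
        // ...and the miss; indirect coverage from higher-level code might
        // never reach this branch even though the line reads as covered.
        if (!"default".equals(reg.lookup("unknown"))) {
            throw new AssertionError("miss branch broken");
        }
        System.out.println("both branches tested");
    }
}
```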
From: "Scott Gray"

Huge tasks are prone to failure; what we really have is a lot of
little tasks ahead. If you're writing some code, think about writing
the tests that go along with it. Hopefully it'll become so commonplace
that anyone who doesn't do it will just look silly.

Regards
Scott

HotWax Media
http://www.hotwaxmedia.com

On 11/12/2009, at 8:59 PM, Jacques Le Roux wrote:
> A huge task ahead...
From: "Jacques Le Roux"

Sure Scott,

Good recommendation, I will remember it... BTW, could we already open
an umbrella issue in Jira and add subtasks as things go on? I can't
see a better way to coordinate this work.

Jacques

From: "Scott Gray" <[hidden email]>
> Huge tasks are prone to failure, what we really have is a lot of
> little tasks ahead. If you're writing some code, think about writing
> the tests that go along with it.
From: "Scott Gray"

We could, but I don't think it would help us track progress very well;
there are just too many tests that need to be created. A good start
IMO would be to go through every open bug, create a test case that
reproduces the problem, and attach it as a patch to be committed with
the fix. Tackling those would at least prevent known bugs from
recurring. Other than that, I think people will just proceed with
creating tests for the areas that concern them most.

Regards
Scott

On 11/12/2009, at 10:13 PM, Jacques Le Roux wrote:
> BTW, could we already open an umbrella issue in Jira and add subtasks
> as things go on?
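Scott's one-regression-test-per-bug-report idea, sketched in plain Java. The bug and the class below are entirely invented for illustration (a real OFBiz test would be registered through a component's test definitions); the pattern is just that the reproducing test ships with the fix:

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical fixed code: imagine a bug report said totals broke for an
// empty order (the old code returned null); the fix returns ZERO.
class OrderTotals {
    static BigDecimal grandTotal(List<BigDecimal> itemTotals) {
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal t : itemTotals) {
            total = total.add(t);
        }
        return total;
    }
}

public class OrderTotalsRegressionTest {
    public static void main(String[] args) {
        // The regression test attached to the bug report: empty order.
        if (!OrderTotals.grandTotal(List.of()).equals(BigDecimal.ZERO)) {
            throw new AssertionError("empty order must total zero");
        }
        // The normal case still works after the fix.
        BigDecimal total = OrderTotals.grandTotal(
                List.of(new BigDecimal("9.99"), new BigDecimal("0.01")));
        if (total.compareTo(new BigDecimal("10.00")) != 0) {
            throw new AssertionError("expected 10.00, got " + total);
        }
        System.out.println("regression test passed");
    }
}
```

Once committed with the fix, the test keeps the reported bug from silently coming back.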
In reply to this post by Jacques Le Roux

From: "Erwan de FERRIERES"

Hi Jacques,
inline

On 11/12/2009 08:47, Jacques Le Roux wrote:
> What about groovy scripts, are they handled by Cobertura?

From what I know, Cobertura needs .class files, so for the groovy files
it would mean compiling them first:
http://docs.codehaus.org/display/GROOVY/Code+Coverage+with+Cobertura

> And actions in Screens, is it worth doing something? I guess checking
> "static structural" files like web.xml and ofbiz-component.xml (and
> all xml like them, controllers, menus, CommonScreens and such) is not
> necessary?

I think this would mean a lot of work to adapt a tool to this. But if
we just stay on the services, this would give a great overview of their
coverage. As we are in a SOA system, and services are the most
important part of the system, we have to be sure they are tested.

For all user interface testing, validation could be done via storylines
and scenarios, with expected results after an action, as David
described in UBPL, from the user's point of view.

From: "Erwan de FERRIERES" <[hidden email]>
> Now the point would be to show the coverage of the simple methods and
> also, maybe in some time, the selenium coverage.
>
> What would be great is finding a way to indicate which services are
> tested, and then display it in the webtools. This won't give
> information as precise as cobertura's but would add some quick display
> of what is or can be tested.
>
> If we agree on a syntax to use, I would be ready to add the screen in
> OFBiz.
>
> Cheers,

--
Erwan de FERRIERES
www.nereide.biz
From: "Scott Gray"

On 11/12/2009, at 11:02 PM, Erwan de FERRIERES wrote:
> But if we just stay on the services, this would give a great overview
> of their coverage. As we are in a SOA system, and services are the
> most important part of the system, we have to be sure they are tested.

The easiest way to achieve something like this might be to try and
extend/adapt the ArtifactInfo stuff to do it. It could inspect each
test case for service calls and report on the service coverage based on
that. We could make it so that it doesn't dig too deep and only
reports on services that are called directly from the test case, and
not services that are called as a consequence of the service call,
e.g. ECAs, inline service calls, etc.

Of course it wouldn't be able to report on the actual line coverage of
the service call, but combined with Adam's report it might start to
give us a better overall picture.
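The reporting side of Scott's idea can be sketched as a set difference. The service names below are invented, and the hard part (extracting service calls from test cases, which the real ArtifactInfo tooling would do) is replaced by a fixed set; this only shows the shape of the report:

```java
import java.util.Set;
import java.util.TreeSet;

// Sketch only: "defined" would come from the service definitions and
// "calledFromTests" from inspecting test cases for direct service calls
// (ECAs and inline calls deliberately excluded, per Scott's suggestion).
public class ServiceCoverageReport {
    public static void main(String[] args) {
        Set<String> defined = new TreeSet<>(Set.of(
                "createOrder", "updateOrder", "cancelOrder", "createParty"));
        Set<String> calledFromTests = Set.of("createOrder", "createParty");

        // Services never called directly from any test case.
        Set<String> untested = new TreeSet<>(defined);
        untested.removeAll(calledFromTests);

        System.out.printf("service coverage: %d/%d%n",
                calledFromTests.size(), defined.size());
        System.out.println("untested: " + untested);
    }
}
```

A webtools screen could render the same two sets, which is coarser than Cobertura's line coverage but cheap to compute.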
In reply to this post by Erwan de FERRIERES

From: "Jacques Le Roux"

Erwan, Scott,

Small comments inline...

From: "Erwan de FERRIERES" <[hidden email]>
>> What about groovy scripts, are they handled by Cobertura?
> From what I know Cobertura needs .class files, and for the groovy
> files, it would mean to compile them first
> http://docs.codehaus.org/display/GROOVY/Code+Coverage+with+Cobertura

Classes are created when the scripts are used and placed in the cache.
As Scott suggested, we could use the wonderful artifact info mechanism
to deal with this aspect and also the others (not UI).

> For all user interface testing, validation could be done via
> storylines and scenarios, with expected results after an action, as
> David described in UBPL, for the user point of view.

I guess this is more related to Selenium. Not sure how to link
Cobertura and Selenium though.

Jacques
In reply to this post by Jacques Le Roux

Jacques Le Roux wrote:
> What about groovy scripts, are they handled by Cobertura?
> And actions in Screens, is it worth doing something?

This requires groovy including the line number metadata in the compiled
bytecode, and then a groovy parser that cobertura can use, so no.

Now, if there was a fancy program that supported plugins for other
languages, then that would be cool, but I don't know of one.
In reply to this post by Erwan de FERRIERES

Erwan de FERRIERES wrote:
> From what I know Cobertura needs .class files, and for the groovy
> files, it would mean to compile them first

No, it needs a byte array that contains class data. I haven't looked
inside groovy internals in a bit, but I'm fairly certain that at some
point it creates said byte array, seeing as how the jvm requires it to
do so.
From: "Adam Heath"

The coverage stuff I have done for ofbiz is based on the method I used
for webslinger. I wrote a shim to allow other coverage tools to be
used, then a classpath/jar walker to find all classes matching a
particular pattern, a dynamic temporary classloader to load the
instrumenter, and then a concrete implementation that uses Cobertura.

I can check everything in, including a build.xml tweak, but can't add
the cobertura library itself.

So, with that out of the way, here's my current headache.

cobertura-1.9.1 does not handle annotation definitions (@interface) in
java files. 1.9.3 does. However, that requires upgrading from
asm-2.2.3 to asm-3.2. But then groovy fails: 1.6.0-1.6.7 still use
asm-2.2.3, so groovy has to be upgraded to 1.7-rc-2. But then
webslinger fails, 'cuz it has direct links to asm, cobertura, and
groovy.

I've done the webslinger side, and my test cases pass; I still have to
upgrade the ofbiz stuff.

ps: I love java :|
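The shape of the shim Adam describes might look something like the sketch below. All names are invented and the walker is reduced to a fixed list; the point is only the decoupling: the build talks to a small interface, and the Cobertura-specific code lives behind it in one swappable implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shim interface: implementations may rewrite class bytes
// (as Cobertura does via ASM) or pass them through untouched.
interface ClassInstrumenter {
    byte[] instrument(String className, byte[] classBytes);
}

// No-op fallback for when no coverage tool is on the classpath; a real
// Cobertura-backed implementation would be loaded reflectively by a
// temporary classloader, as Adam describes.
class NullInstrumenter implements ClassInstrumenter {
    public byte[] instrument(String className, byte[] classBytes) {
        return classBytes;
    }
}

public class InstrumenterDemo {
    // Stand-in for the classpath/jar walker: here just a fixed list
    // filtered by a name prefix.
    static List<String> findClasses(String pattern) {
        List<String> found = new ArrayList<>();
        for (String name : List.of("org.ofbiz.base.util.UtilMisc",
                                   "org.ofbiz.entity.GenericDelegator")) {
            if (name.startsWith(pattern)) {
                found.add(name);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        ClassInstrumenter inst = new NullInstrumenter();
        for (String cls : findClasses("org.ofbiz.base")) {
            byte[] out = inst.instrument(cls, new byte[0]);
            System.out.println("instrumented " + cls
                    + " (" + out.length + " bytes)");
        }
    }
}
```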
Adam Heath wrote:
> I've done the webslinger side, and my test cases pass; I still have
> to upgrade the ofbiz stuff.
>
> ps: I love java :|

You didn't have any plans for this weekend anyway, right?
