[hidden email] wrote:
> Author: doogie
> Date: Sun Apr 4 19:08:46 2010
> New Revision: 930737
>
> URL: http://svn.apache.org/viewvc?rev=930737&view=rev
> Log:
> Make use of UtilObject.equalsHelper in storeAll.
>
> Modified:
>     ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>
> Modified: ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
> URL: http://svn.apache.org/viewvc/ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java?rev=930737&r1=930736&r2=930737&view=diff
> ==============================================================================
> --- ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java (original)
> +++ ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java Sun Apr 4 19:08:46 2010
> @@ -1450,7 +1450,7 @@ public class GenericDelegator implements
>                  if (value.containsKey(fieldName)) {
>                      Object fieldValue = value.get(fieldName);
>                      Object oldValue = existing.get(fieldName);
> -                    if ((fieldValue == null && oldValue != null) || (fieldValue != null && !fieldValue.equals(oldValue))) {
> +                    if (UtilObject.equalsHelper(oldValue, fieldValue)) {
>                          toStore.put(fieldName, fieldValue);
>                          atLeastOneField = true;
>                      }

Did anyone else review this commit? Or did you not understand the code?

Yes, I screwed it up. But if no one reviews the commits, then how will much bigger problems be found? This was a simple problem to find, an obvious inverted test, yet no one noticed it.

What's the point of having commits mailed to the list if no one will review them?
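The inverted test mentioned above is the missing negation on the new condition: the old expression stored a field only when its value had changed, while an unnegated equality check selects exactly the unchanged fields. A small self-contained sketch (the equalsHelper below is a stand-in null-safe equality check, not OFBiz's UtilObject) shows the difference:

```java
// Minimal self-contained illustration of the inverted test; this is not OFBiz code.
// equalsHelper here is a stand-in that, like a null-safe equals, returns true
// when both values are equal (including both being null).
public class InvertedTestDemo {
    static boolean equalsHelper(Object a, Object b) {
        return a == null ? b == null : a.equals(b);
    }

    public static void main(String[] args) {
        Object oldValue = "A";
        Object fieldValue = "A";   // unchanged field

        // Committed condition: stores the field precisely when nothing changed.
        boolean storedByCommit = equalsHelper(oldValue, fieldValue);

        // Presumably intended condition: store only when the value actually differs.
        boolean storedByIntent = !equalsHelper(oldValue, fieldValue);

        System.out.println("committed condition stores unchanged field: " + storedByCommit); // true
        System.out.println("intended condition stores unchanged field:  " + storedByIntent); // false
    }
}
```

Under that assumption, the presumable fix is simply `if (!UtilObject.equalsHelper(oldValue, fieldValue))`.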
Adam Heath wrote:
> [hidden email] wrote:
>> Author: doogie
>> Date: Sun Apr 4 19:08:46 2010
>> New Revision: 930737
>>
>> URL: http://svn.apache.org/viewvc?rev=930737&view=rev
>> Log:
>> Make use of UtilObject.equalsHelper in storeAll.
>>
>> Modified:
>>     ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>>
>> Modified: ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>> URL: http://svn.apache.org/viewvc/ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java?rev=930737&r1=930736&r2=930737&view=diff
>> ==============================================================================
>> --- ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java (original)
>> +++ ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java Sun Apr 4 19:08:46 2010
>> @@ -1450,7 +1450,7 @@ public class GenericDelegator implements
>>                  if (value.containsKey(fieldName)) {
>>                      Object fieldValue = value.get(fieldName);
>>                      Object oldValue = existing.get(fieldName);
>> -                    if ((fieldValue == null && oldValue != null) || (fieldValue != null && !fieldValue.equals(oldValue))) {
>> +                    if (UtilObject.equalsHelper(oldValue, fieldValue)) {
>>                          toStore.put(fieldName, fieldValue);
>>                          atLeastOneField = true;
>>                      }
>
> Did anyone else review this commit? Or did you not understand the code?
>
> Yes, I screwed it up. But if no one reviews the commits, then how
> will much bigger problems be found? This was a simple problem to
> find, an obvious inverted test, yet no one noticed it.
>
> What's the point of having commits mailed to the list if no one will
> review them?

Whoa, back off. Don't forget that commit wouldn't even compile until I fixed it. In case you hadn't noticed, only a handful of us were on the ml this last weekend. And no, I didn't review it - I was busy working on other things.
In reply to this post by Adam Heath-2
On 5/04/2010, at 5:11 PM, Adam Heath wrote:
> [hidden email] wrote:
>> Author: doogie
>> Date: Sun Apr 4 19:08:46 2010
>> New Revision: 930737
>>
>> URL: http://svn.apache.org/viewvc?rev=930737&view=rev
>> Log:
>> Make use of UtilObject.equalsHelper in storeAll.
>>
>> Modified:
>>     ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>>
>> Modified: ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>> URL: http://svn.apache.org/viewvc/ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java?rev=930737&r1=930736&r2=930737&view=diff
>> ==============================================================================
>> --- ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java (original)
>> +++ ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java Sun Apr 4 19:08:46 2010
>> @@ -1450,7 +1450,7 @@ public class GenericDelegator implements
>>                  if (value.containsKey(fieldName)) {
>>                      Object fieldValue = value.get(fieldName);
>>                      Object oldValue = existing.get(fieldName);
>> -                    if ((fieldValue == null && oldValue != null) || (fieldValue != null && !fieldValue.equals(oldValue))) {
>> +                    if (UtilObject.equalsHelper(oldValue, fieldValue)) {
>>                          toStore.put(fieldName, fieldValue);
>>                          atLeastOneField = true;
>>                      }
>
> Did anyone else review this commit? Or did you not understand the code?
>
> Yes, I screwed it up. But if no one reviews the commits, then how
> will much bigger problems be found? This was a simple problem to
> find, an obvious inverted test, yet no one noticed it.
>
> What's the point of having commits mailed to the list if no one will
> review them?

Try to avoid dumping 10-20 commits at a time and I'll try to avoid skimming over them.
Scott Gray wrote:
> On 5/04/2010, at 5:11 PM, Adam Heath wrote:
>
>> [hidden email] wrote:
>>> Author: doogie
>>> Date: Sun Apr 4 19:08:46 2010
>>> New Revision: 930737
>>>
>>> URL: http://svn.apache.org/viewvc?rev=930737&view=rev
>>> Log:
>>> Make use of UtilObject.equalsHelper in storeAll.
>>>
>>> Modified:
>>>     ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>>>
>>> Modified: ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>>> URL: http://svn.apache.org/viewvc/ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java?rev=930737&r1=930736&r2=930737&view=diff
>>> ==============================================================================
>>> --- ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java (original)
>>> +++ ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java Sun Apr 4 19:08:46 2010
>>> @@ -1450,7 +1450,7 @@ public class GenericDelegator implements
>>>                  if (value.containsKey(fieldName)) {
>>>                      Object fieldValue = value.get(fieldName);
>>>                      Object oldValue = existing.get(fieldName);
>>> -                    if ((fieldValue == null && oldValue != null) || (fieldValue != null && !fieldValue.equals(oldValue))) {
>>> +                    if (UtilObject.equalsHelper(oldValue, fieldValue)) {
>>>                          toStore.put(fieldName, fieldValue);
>>>                          atLeastOneField = true;
>>>                      }
>>
>> Did anyone else review this commit? Or did you not understand the code?
>>
>> Yes, I screwed it up. But if no one reviews the commits, then how
>> will much bigger problems be found? This was a simple problem to
>> find, an obvious inverted test, yet no one noticed it.
>>
>> What's the point of having commits mailed to the list if no one will
>> review them?
>
> Try to avoid dumping 10-20 commits at a time and I'll try to avoid skimming over them.

Dumping 20 commits at once, and not doing anything for a week, vs: doing 2 a day, for 7 days, the same amount of review has to happen. And, it's better doing them all at once, because you don't have to constantly multi-task.

And, besides, this little commit was not part of some larger commit flood anyways.
On 5/04/2010, at 5:47 PM, Adam Heath wrote:
> Scott Gray wrote:
>> On 5/04/2010, at 5:11 PM, Adam Heath wrote:
>>
>>> [hidden email] wrote:
>>>> Author: doogie
>>>> Date: Sun Apr 4 19:08:46 2010
>>>> New Revision: 930737
>>>>
>>>> URL: http://svn.apache.org/viewvc?rev=930737&view=rev
>>>> Log:
>>>> Make use of UtilObject.equalsHelper in storeAll.
>>>>
>>>> Modified:
>>>>     ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>>>>
>>>> Modified: ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java
>>>> URL: http://svn.apache.org/viewvc/ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java?rev=930737&r1=930736&r2=930737&view=diff
>>>> ==============================================================================
>>>> --- ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java (original)
>>>> +++ ofbiz/trunk/framework/entity/src/org/ofbiz/entity/GenericDelegator.java Sun Apr 4 19:08:46 2010
>>>> @@ -1450,7 +1450,7 @@ public class GenericDelegator implements
>>>>                  if (value.containsKey(fieldName)) {
>>>>                      Object fieldValue = value.get(fieldName);
>>>>                      Object oldValue = existing.get(fieldName);
>>>> -                    if ((fieldValue == null && oldValue != null) || (fieldValue != null && !fieldValue.equals(oldValue))) {
>>>> +                    if (UtilObject.equalsHelper(oldValue, fieldValue)) {
>>>>                          toStore.put(fieldName, fieldValue);
>>>>                          atLeastOneField = true;
>>>>                      }
>>>
>>> Did anyone else review this commit? Or did you not understand the code?
>>>
>>> Yes, I screwed it up. But if no one reviews the commits, then how
>>> will much bigger problems be found? This was a simple problem to
>>> find, an obvious inverted test, yet no one noticed it.
>>>
>>> What's the point of having commits mailed to the list if no one will
>>> review them?
>>
>> Try to avoid dumping 10-20 commits at a time and I'll try to avoid skimming over them.
>
> Dumping 20 commits at once, and not doing anything for a week, vs:
> doing 2 a day, for 7 days, the same amount of review has to happen.
> And, it's better doing them all at once, because you don't have to
> constantly multi-task.
> And, besides, this little commit was not part of some larger commit
> flood anyways.

I see 9 commits within half an hour from yesterday.
Is it reasonable for a non-committer to review commits? To be honest, I never even considered that I should be doing that.

However, two other things occurred to me ...

Got the distinct feeling that this points to a not very good practice when it comes to unit testers. I definitely understand the pain of attempting to force a "must have tester" policy but having a project code-coverage goal might be something to strive towards (with published metrics). Moreover, I would guess in this case a unit-tester would have likely caught the inverted condition check (if it was compiling).

I am definitely not a fisheye expert, but I would hope with its JIRA integration that something could be automated that would drive reviews targeted at the committers. Again having a code-review goal (published) would be something the project could strive towards.

It would be pretty awesome to have Ofbiz - 2+ million lines of code, 100% code reviewed, 90% unit-test code coverage.
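A test of the kind suggested here would not need the entity engine at all. The sketch below is hypothetical rather than existing OFBiz code: it assumes the store-this-field decision is pulled out into a small needsStore helper and pins its behaviour down with JUnit 4; the first test method is exactly the check that fails if the condition is swapped for an unnegated equality helper.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class FieldChangeCheckTest {

    // Hypothetical extraction of the condition from GenericDelegator.storeAll:
    // a field should be written only when its new value differs from the stored one.
    private static boolean needsStore(Object oldValue, Object fieldValue) {
        return (fieldValue == null && oldValue != null)
                || (fieldValue != null && !fieldValue.equals(oldValue));
    }

    @Test
    public void unchangedFieldsAreNotStored() {
        assertFalse(needsStore("A", "A"));
        assertFalse(needsStore(null, null));
    }

    @Test
    public void changedFieldsAreStored() {
        assertTrue(needsStore("A", "B"));
        assertTrue(needsStore("A", null));
        assertTrue(needsStore(null, "B"));
    }
}
```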
+1
Cheers,
Ruppert
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com
o:801.649.6594 f:801.649.6595

On Apr 5, 2010, at 9:43 PM, Bob Morley wrote:

> I am definitely not a fisheye expert, but I would hope with its JIRA
> integration that something could be automated that would drive reviews
> targeted at the committers. Again having a code-review goal (published)
> would be something the project could strive towards.
>
> It would be pretty awesome to have Ofbiz - 2+ million lines of code, 100%
> code reviewed, 90% unit-test code coverage.
In reply to this post by Bob Morley
On 5/04/2010, at 9:43 PM, Bob Morley wrote:
>
> Is it reasonable for a non-committer to review commits? To be honest, I never
> even considered that I should be doing that.

Reasonable? It would be fantastic, we have 166 subscribers and very little feedback.

> However, two other things occurred to me ...
>
> Got the distinct feeling that this points to a not very good practice when
> it comes to unit testers.

Buildbot has been reporting failures since yesterday: http://ci.apache.org/waterfall?show=ofbiz-trunk
The problem wasn't missed, I just think Adam was a little peeved that no one noticed his simple mistake in such a small commit.

> I definitely understand the pain of attempting to
> force a "must have tester" policy but having a project code-coverage goal
> might be something to strive towards (with published metrics). Moreover, I
> would guess in this case a unit-tester would have likely caught the inverted
> condition check (if it was compiling).
>
> I am definitely not a fisheye expert, but I would hope with its JIRA
> integration that something could be automated that would drive reviews
> targeted at the committers. Again having a code-review goal (published)
> would be something the project could strive towards.
>
> It would be pretty awesome to have Ofbiz - 2+ million lines of code, 100%
> code reviewed, 90% unit-test code coverage.

This isn't a dig at you, but talk is cheap :-)
Write some tests and I'll commit them.

Regards
Scott
In reply to this post by Bob Morley
--- On Mon, 4/5/10, Bob Morley <[hidden email]> wrote:
> Is it reasonable for a non-committer to review
> commits? To be honest, I never
> even considered that I should be doing that.

As an OFBiz user, it seems to me you would have a vested interest in reviewing commits.

-Adrian
In reply to this post by Scott Gray-2
Thanks for that url; very interesting indeed. From what I could tell the set of unit tests that execute are littered with failures (or they intentionally cause a lot of stack traces due to exception).

OK Mr. Gray, you have committed to committing! Now you have asked for it! Full review too, I am not above planting some stuff to see if you catch it! ;)
In reply to this post by Adrian Crum-2
Agreed. Our fundamental problem (and this is definitely our problem) is that we branched off pretty badly and as a result, have not merged Ofbiz trunk in a LONG time. So while we do have a vested interest in these commits and the longevity of Ofbiz in general, they are longer term than they should be.
In reply to this post by Bob Morley
On 5/04/2010, at 10:34 PM, Bob Morley wrote:
> Scott Gray-2 wrote:
>>
>> Buildbot has been reporting failures since yesterday:
>> http://ci.apache.org/waterfall?show=ofbiz-trunk
>> The problem wasn't missed, I just think Adam was a little peeved that no
>> one noticed his simple mistake in such a small commit.
>>
>> This isn't a dig at you, but talk is cheap :-)
>> Write some tests and I'll commit them.
>>
>
> Thanks for that url; very interesting indeed. From what I could tell the
> set of unit tests that execute are littered with failures (or they
> intentionally cause a lot of stack traces due to exception).
> OK Mr. Gray, you have committed to committing! Now you have asked for it!

Sure did, bring it on :-)

> Full review too, I am not above planting some stuff to see if you catch it!
> ;)

I'm certainly not infallible, but I won't commit anything that I don't fully understand either.
In reply to this post by Scott Gray-2
Scott Gray wrote:
> On 5/04/2010, at 9:43 PM, Bob Morley wrote:
>
>> Is it reasonable for a non-committer to review commits? To be honest, I never
>> even considered that I should be doing that.
>
> Reasonable? It would be fantastic, we have 166 subscribers and very little feedback.

More eyes, the better. Tell us where we are wrong, tell us what could be done better.

>> However, two other things occurred to me ...
>>
>> Got the distinct feeling that this points to a not very good practice when
>> it comes to unit testers.

This wasn't a problem of not having tests. If I had run the test cases, even if I had done a full compile, I would have found this particular error before I ever committed it. And for that, I can't apologize enough.

I've been adding lots of unit tests to base. You'll find a bunch of my handiwork in classes that have the SourceMonitored annotation.

> Buildbot has been reporting failures since yesterday: http://ci.apache.org/waterfall?show=ofbiz-trunk
> The problem wasn't missed, I just think Adam was a little peeved that no one noticed his simple mistake in such a small commit.

I happily admit to causing a fuckup. That wasn't the point I was making, as you surmised. But, if no one is reviewing commits, and such a simple, obvious error wasn't detected by someone else, then how can we even detect the more complex problems?

I understand we are all human, and that we are all busy. I've been quite good at finding and tracking down buildbot problems before (git bisect, woo!). I'm just annoyed that no one else pointed the finger at me, and that it sat for a day.

>> I definitely understand the pain of attempting to
>> force a "must have tester" policy but having a project code-coverage goal
>> might be something to strive towards (with published metrics). Moreover, I
>> would guess in this case a unit-tester would have likely caught the inverted
>> condition check (if it was compiling).

Install cobertura.jar into framework/base/lib, compile, and you will get metrics. It's just that cobertura is gpl, and there are no other good tools that are currently being maintained that are also license compatible. I will *not* spend any of my time working at integrating a non-free coverage library. Period. Don't even bother asking me. I have written the cobertura integration as a pluggable system.

>> I am definitely not a fisheye expert, but I would hope with its JIRA
>> integration that something could be automated that would drive reviews
>> targeted at the committers. Again having a code-review goal (published)
>> would be something the project could strive towards.
>>
>> It would be pretty awesome to have Ofbiz - 2+ million lines of code, 100%
>> code reviewed, 90% unit-test code coverage.
>
> This isn't a dig at you, but talk is cheap :-)
> Write some tests and I'll commit them.

Or I.
In reply to this post by Bob Morley
Bob Morley wrote:
> Thanks for that url; very interesting indeed. From what I could tell the
> set of unit tests that execute are littered with failures (or they
> intentionally cause a lot of stack traces due to exception).

I've been trying to reduce the number of duplicated errors logged. OfBiz is famous of catching an error, logging it, then rethrowing it. Repeat ad-infinitum.
On 5/04/2010, at 10:54 PM, Adam Heath wrote:
> Bob Morley wrote:
>> Thanks for that url; very interesting indeed. From what I could tell the
>> set of unit tests that execute are littered with failures (or they
>> intentionally cause a lot of stack traces due to exception).
>
> I've been trying to reduce the number of duplicated errors logged.
> OfBiz is famous of catching an error, logging it, then rethrowing it.
> Repeat ad-infinitum.

OfBiz is famous of catching an error, logging it, then rethrowing it.
Repeat ad-infinitum.
--- On Mon, 4/5/10, Scott Gray <[hidden email]> wrote:
> On 5/04/2010, at 10:54 PM, Adam Heath wrote:
>
>> Bob Morley wrote:
>>> Thanks for that url; very interesting indeed. From what I could tell the
>>> set of unit tests that execute are littered with failures (or they
>>> intentionally cause a lot of stack traces due to exception).
>>
>> I've been trying to reduce the number of duplicated errors logged.
>> OfBiz is famous of catching an error, logging it, then rethrowing it.
>> Repeat ad-infinitum.
>
> OfBiz is famous of catching an error, logging it, then rethrowing it.
> Repeat ad-infinitum.

OfBiz is famous of catching an error, logging it, then rethrowing it. Repeat ad-infinitum.
In reply to this post by Bob Morley
Bob Morley wrote:
>
> Adrian Crum-2 wrote:
>> As an OFBiz user, it seems to me you would have a vested interest in
>> reviewing commits.
>
> Agreed. Our fundamental problem (and this is definitely our problem) is
> that we branched off pretty badly and as a result, have not merged Ofbiz
> trunk in a LONG time. So while we do have a vested interest in these
> commits and the longevity of Ofbiz in general, they are longer term than
> they should be.

We recently switched to using git to manage our debian package of ofbiz. Then, after 6 months, I did an upgrade to a newer snapshot. It took me just half a day to do that.

I'm currently maintaining a single version of webslinger, that installs into 595296, 811564, and 902021. I am also maintaining all those ofbiz versions. The latter 2 are maintained in git.

It's been my policy to only put things in the package that first get added to trunk. This makes cherry-picking simpler, and, when I eventually upgrade, I can throw away all the old versions, and just go to trunk. I can also push the local branch into our internal git, then do a checkout on production if I need to do some special fixes for production.
In reply to this post by Adrian Crum-2
Adrian Crum wrote:
>>> OfBiz is famous of catching an error, logging it, then rethrowing it.
>>> Repeat ad-infinitum.
>>
>> OfBiz is famous of catching an error, logging it, then rethrowing it.
>> Repeat ad-infinitum.
>
> OfBiz is famous of catching an error, logging it, then rethrowing it.
> Repeat ad-infinitum.

Ok, I've fixed this bug.

s/of/for/
In reply to this post by Scott Gray-2
Ahh yes; I spotted one simple problem --

    [java] Unable to load test suite class : org.ofbiz.order.test.CustRequestTest
    [java] Exception: java.lang.ClassNotFoundException
    [java] Message: org.ofbiz.order.test.CustRequestTest

Looks like if we have a bad testdef configured, we do not bubble this up and it gets lost. This should probably be failing the build ...

Most of the noise seems to come, as you guys indicated, from negative test cases coupled with the fact that we are logging "info". Here is a pattern from the bottom of a tester --

    assertNotNull("Foreign key referential integrity is not observed for create (INSERT)", caught);
    Debug.logInfo(caught.toString(), module);

The second is explicit rollback at the end of a test method, which under the covers (if info is turned on) will create an exception and logError with it --

    TransactionUtil.rollback(transBegin, null, null);

I will fix the first problem when I get into the office tomorrow (as it is not really pressing). The second thing though, wanna consider changing the log4j threshold on the run-tests target to warn? This would cut down on the noise (but not eliminate it).
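A possible shape for the first fix, assuming nothing about the real OFBiz test runner beyond what the log excerpt shows (the class and method below are illustrative only): a testdef that names a missing suite class should abort the run instead of being reduced to a log message.

```java
// Hypothetical sketch, not the actual OFBiz test-runner code: the idea is that a
// testdef pointing at a missing class should fail the run rather than be logged and skipped.
public final class TestSuiteLoader {
    private TestSuiteLoader() {}

    public static Class<?> loadSuiteClass(String className) {
        try {
            return Class.forName(className);
        } catch (ClassNotFoundException e) {
            // Fail loudly instead of swallowing the error, so the build goes red.
            throw new IllegalStateException("Unable to load test suite class: " + className, e);
        }
    }
}
```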
Bob Morley wrote:
>
> Scott Gray-2 wrote:
>> On 5/04/2010, at 10:54 PM, Adam Heath wrote:
>>
>>> Bob Morley wrote:
>>>> Thanks for that url; very interesting indeed. From what I could tell the
>>>> set of unit tests that execute are littered with failures (or they
>>>> intentionally cause a lot of stack traces due to exception).
>>> I've been trying to reduce the number of duplicated errors logged.
>>> OfBiz is famous of catching an error, logging it, then rethrowing it.
>>> Repeat ad-infinitum.
>> OfBiz is famous of catching an error, logging it, then rethrowing it.
>> Repeat ad-infinitum.
>
> Ahh yes; I spotted one simple problem --
>
>     [java] Unable to load test suite class : org.ofbiz.order.test.CustRequestTest
>     [java] Exception: java.lang.ClassNotFoundException
>     [java] Message: org.ofbiz.order.test.CustRequestTest
>
> Looks like if we have a bad testdef configured, we do not bubble this up and
> it gets lost. This should probably be failing the build ...
>
> Most of the noise seems to come, as you guys indicated, from negative test
> cases coupled with the fact that we are logging "info". Here is a pattern
> from the bottom of a tester --
>
>     assertNotNull("Foreign key referential integrity is not observed for create (INSERT)", caught);
>     Debug.logInfo(caught.toString(), module);
>
> The second is explicit rollback at the end of a test method, which under the
> covers (if info is turned on) will create an exception and logError with it --
>
>     TransactionUtil.rollback(transBegin, null, null);
>
> I will fix the first problem when I get into the office tomorrow (as it is
> not really pressing). The second thing though, wanna consider changing the
> log4j threshold on the run-tests target to warn? This would cut down on the
> noise (but not eliminate it).

I just fixed the first.

The others should have their logging fixed; don't change the log level for run-tests. If low-level code logs an error, and rethrows, then just stop logging the error.
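As an illustration of the pattern being discussed (the class and method names below are made up, and java.util.logging stands in for OFBiz's Debug so the snippet is self-contained): every layer that catches, logs, and rethrows adds another copy of the same stack trace to the output, while letting the exception propagate leaves logging to the one layer that actually handles the failure.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {
    private static final Logger log = Logger.getLogger(OrderService.class.getName());

    // The criticized pattern: catch, log, rethrow. Every layer that does this adds
    // another copy of the same stack trace to the log for a single failure.
    public String loadOrderNoisy(String orderId) throws Exception {
        try {
            return doLoad(orderId);
        } catch (Exception e) {
            log.log(Level.SEVERE, "loadOrder failed", e); // logged here...
            throw e;                                      // ...and again by any caller doing the same
        }
    }

    // The alternative: let the exception propagate and log it once, at the layer
    // that actually handles it.
    public String loadOrderQuiet(String orderId) throws Exception {
        return doLoad(orderId);
    }

    private String doLoad(String orderId) throws Exception {
        throw new Exception("lookup failed for " + orderId);
    }
}
```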