Hi,
Let me first acknowledge that I made a mistake from the beginning by going with the latest from trunk rather than 4.0, which was the stable release. I have not tried 4.0 again to see the differences between 4.0 and the latest. Besides, I have a few relatively minor changes that I have made in my local OFBiz copy.

Now the problem... I have been updating to the latest since then to get fixes for the things broken by new checkins, and I am going in circles; new patches seem to break a few other things, and so on.

Question is: how carefully are checkins being made/accepted? (Just out of curiosity.) And does anyone know of a version, still close to the latest, that is relatively stable?

Thanks
Ritesh Trivedi wrote:
> Question is - How carefully are checkins being made/accepted? (just out of
> curiosity) and does anyone know of a version - still close to latest that is
> relatively stable?

As is always the case, please report the problems you have; there is no way we can debug the issues you are having unless you tell us.
In reply to this post by Ritesh Trivedi
Short answer: trunk is not stable. That is the case even when only bug fixes are going in, and not new features; some new features are being added in sections.

1) You can't mix and match 4.0 and trunk. 4.0 uses Java 1.4.2; trunk uses 1.5+.
2) You can go over the commit ML and look at each commit to see what has changed. Yes, the committers are supposed to check each change. However, with the size of OFBiz, that review is usually focused on the area the patch affects, not all of OFBiz. Some committers seem to add code and then do testing.

When you find a bug, test on the demo server; if you can replicate it there, put in a JIRA.

http://www.apache.org/dev/committers.html
From: "BJ Freeman" <[hidden email]>
> Some committers seem to add code then do testing.
> when you find a bug test on the demo server, if you can replicate it
> there put in a jira.

This is a very unusual case. We call it CTR (Commit Then Review) mode, and it's not much used in OFBiz. Sometimes, though, some bugs slip in; we are humans. This is why reporting bugs is so important. With a patch, it is much appreciated ;o)

Jacques
Is there a minor release on the horizon? Or is there a release map somewhere which shows the roadmap, projected milestones, dates, etc.?
There is a process called "Vendor Branch Management" that you can read about in the Subversion book. If you want to maintain your own stable version and occasionally merge with revisions from trunk you can use this process.
It is a pretty good process in my opinion, but also very resource intensive. And you would still need to do your own QA and functional testing in order to determine which revisions are worth merging.
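The vendor-branch workflow described above can be sketched as a sequence of Subversion commands. This is a dry-run sketch only: the repository URL, branch paths, and revision numbers below are all hypothetical placeholders, and the `RUN="echo"` guard makes the script print the commands instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of a Subversion vendor-branch workflow.
# REPO and all paths/revisions below are hypothetical; adjust to your layout.
REPO="https://svn.example.com/repo"
RUN="echo"   # set RUN="" to actually execute the svn commands

# 1. Keep a pristine copy of upstream (the "vendor branch") and tag each drop.
$RUN svn checkout "$REPO/vendor/ofbiz/current" vendor-current
$RUN svn copy "$REPO/vendor/ofbiz/current" "$REPO/vendor/ofbiz/r719210" \
     -m "Tag upstream drop at trunk r719210"

# 2. Merge the difference between two upstream drops into your customized copy,
#    resolving conflicts with your local changes before committing.
$RUN svn merge "$REPO/vendor/ofbiz/r712000" "$REPO/vendor/ofbiz/r719210" my-ofbiz
$RUN svn commit my-ofbiz -m "Merge upstream changes r712000:r719210"
```

The QA step the author mentions would happen between the merge and the commit, on the `my-ofbiz` working copy.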
In reply to this post by Jacques Le Roux
I see this in the commit logs continuously.
I would think that if someone did thorough testing before committing, this would not happen. No biggie to me, since I have not brought it up till now; it is just a way to explain bugs, and to check the commits to see whether something is fixed before submitting a JIRA.
In reply to this post by BJ Freeman
Couldn't we use the principles of Test-Driven Development as an approach to get and keep the trunk "stable"? I plead for more and better use of unit tests, and for not committing code that does not pass the tests. It also makes it easier for reviewers to do what they have to do: just review, not test.

Introducing TDD in my company greatly reduced the number of bugs and let us create new releases with very little effort. I understand that a controlled environment is different from a community-driven project, so I'm curious what your opinions are.

-Jeroen
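The test-first loop being proposed can be sketched in plain Java (OFBiz's own language). The `calculateOrderTotal` method and its test are entirely hypothetical, and a tiny `main` stands in for a real JUnit runner; the point is only the ordering: the test exists and fails before the production code is written.

```java
import java.math.BigDecimal;

public class TddSketch {
    // Production code: written only after testOrderTotal() below existed and failed.
    static BigDecimal calculateOrderTotal(BigDecimal subtotal, BigDecimal taxRate) {
        // The null-handling test drove this guard into the implementation.
        if (subtotal == null || taxRate == null) {
            throw new IllegalArgumentException("subtotal and taxRate are required");
        }
        return subtotal.add(subtotal.multiply(taxRate));
    }

    // The "test first": these assertions define the behavior before it is implemented.
    static void testOrderTotal() {
        BigDecimal total = calculateOrderTotal(new BigDecimal("100.00"), new BigDecimal("0.10"));
        if (total.compareTo(new BigDecimal("110.00")) != 0) {
            throw new AssertionError("expected 110.00, got " + total);
        }
        try {
            calculateOrderTotal(null, BigDecimal.ZERO);
            throw new AssertionError("expected IllegalArgumentException for null subtotal");
        } catch (IllegalArgumentException expected) {
            // pass
        }
    }

    public static void main(String[] args) {
        // A committer would run the suite (in OFBiz, via "ant run-tests") before committing.
        testOrderTotal();
        System.out.println("all tests passed");
    }
}
```

"Do not commit code that does not pass the tests" then reduces to: `main` (or the real test runner) must finish cleanly before `svn commit`.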
There has been an effort to put in unit tests; the only thing lacking to make it complete is manpower.
BJ Freeman wrote:
> there has been an effort to put in test units.
> the only thing lacking, in making it complete is manpower.

And fixing the existing tests that are broken. :|
On Oct 29, 2008, at 11:54 AM, Adam Heath wrote:

> And fixing the existing tests that are broken. :|

This is an area where it would be REALLY GREAT to have more effort go into the project. Yep, great enough to capitalize "REALLY" and "GREAT".

Who has worked on the unit tests that are in place? I'll admit I haven't much, except on the toolset and some of the framework unit tests, and helping some of the Hotwax Media people who wrote many of the tests that now exist, especially the ones in the various applications.

Is there anyone interested in working on this stuff? If there are enough people who want to actively work on it, we can set up some coordination resources (i.e. Jira tasks, Confluence pages, etc.). If there are only 2-3, then coordination through the mailing list would be better, and more visible to others possibly interested.

-David
Here at Nereide, we are ready to write Selenium tests (it's a task we have planned to do, but which is always postponed...). So, if it's OK with you and you are interested in that, we are going to make it real!

--
- Erwan -
In reply to this post by David E Jones
I have been slowly creating tests, more from a user-input perspective than testing code. I think I can complete one that will work on the trunk, as an example and for review. Target: Jan '09.
In reply to this post by David E Jones
I have some spare time in the next 3 months and can code some test cases.

Shi Yusen / Beijing Langhua Ltd.
Shi Yusen wrote:
> I have some spare time in next 3 months and can code some test cases.

David was talking about fixing the *existing* tests, making them pass, or fixing the code that has broken them.
Thanks Adam!
Surprising and interesting. :)

What's the latest trunk version that I can build successfully? I'll try to do some test runs later.
In reply to this post by David E Jones
We're currently working on unit testing for our custom (OFBiz) application and I'm willing to work on this.
-Jeroen
In reply to this post by Shi Yusen
From memory, the main reason the current unit tests are failing is that they were mistakenly only tested against the component they were built for, and not when running the entire set of tests. This means that the demo data used in the tests gets reused across tests, and as that data changes, it causes subsequent tests to fail because they expected the data to be in its original state. In most cases, simply adding more test data and directing the later tests to use that instead should solve most of the problems.

Regards
Scott
In reply to this post by Erwan de FERRIERES-3
Erwan,

It would be great to have some tests that go through the user interface, but we don't yet have tools for this that fit into the automated test system in OFBiz (i.e. so they can run along with other tests, and run automatically).

The goal is for the tests to all work with an "ant run-tests" (or "java -jar ofbiz.jar tests"), to cover as much of OFBiz OOTB as possible, and then also to be easy to customize, or to comment out those that no longer apply after people customize or add on to OFBiz.

If you guys would like to work on getting Selenium tests to work this way, that would be great. Others have looked at this and run into trouble, so the last idea I heard was to use something different that might be more manual for initial test writing, but probably easier to maintain. At Hotwax we've written/recorded a bunch of Selenium tests for clients, but they are difficult to maintain, and as far as we've gone, they also have to be manually run and watched.

-David
In reply to this post by Adam Heath-2
On Oct 29, 2008, at 12:56 PM, Adam Heath wrote:

> David was talking about fixing the *existing* tests, making them pass,
> or fixing the code that has broken them.

Yes, this would be the first priority. Other priorities would include adding more unit tests for the framework and applications, and enhancing the test infrastructure to support automated UI tests (which I just mentioned in another reply). There are LOTS of framework features that still don't have unit tests, though those don't tend to change as much, so I'd still say more complete application unit tests are a higher priority.

BTW, when I say application unit tests I really mean testing services and other logic-level stuff in OFBiz. This is easiest to do by walking through processes in the applications and seeing which services are called at each step, and then calling those services in the tests. More exotic ones could explore outside of that and do more tests on individual services, or test processes that are not yet supported in the UI.

Of course, this is a volunteer-driven effort, and hopefully the "volunteer" efforts will be driven by actual needs that different organizations have and that are generic enough to be contributed back (those tend to be the most useful contributions, i.e. very "grounded" in real-world needs).

-David
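The "walk through a process and call the services at each step" style of test can be sketched as follows. In OFBiz itself such a test would invoke services through the service engine (e.g. `LocalDispatcher.runSync(serviceName, context)`); here a minimal stub `runSync`, with made-up service names and IDs, stands in so the sketch is self-contained.

```java
import java.util.HashMap;
import java.util.Map;

public class ServiceTestSketch {
    // Minimal stand-in for the service engine. An OFBiz test would instead call
    // LocalDispatcher.runSync("serviceName", context); the names here are hypothetical.
    static Map<String, Object> runSync(String serviceName, Map<String, Object> context) {
        Map<String, Object> result = new HashMap<String, Object>();
        if ("createOrder".equals(serviceName)) {
            result.put("responseMessage", "success");
            result.put("orderId", "ORDER_10000");  // hypothetical id returned by the service
        } else if ("cancelOrder".equals(serviceName)) {
            result.put("responseMessage", context.containsKey("orderId") ? "success" : "error");
        }
        return result;
    }

    // Walk a process step by step, calling the same services the UI screens would call,
    // and feeding each step's output into the next step's input.
    static void testOrderLifecycle() {
        Map<String, Object> createCtx = new HashMap<String, Object>();
        createCtx.put("productId", "DEMO_PRODUCT");
        Map<String, Object> created = runSync("createOrder", createCtx);
        if (!"success".equals(created.get("responseMessage"))) throw new AssertionError("create failed");

        Map<String, Object> cancelCtx = new HashMap<String, Object>();
        cancelCtx.put("orderId", created.get("orderId"));
        Map<String, Object> cancelled = runSync("cancelOrder", cancelCtx);
        if (!"success".equals(cancelled.get("responseMessage"))) throw new AssertionError("cancel failed");
    }

    public static void main(String[] args) {
        testOrderLifecycle();
        System.out.println("lifecycle test passed");
    }
}
```

Because the test talks only to the service layer, it exercises the application logic without depending on any UI, which is what makes it runnable under "ant run-tests".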