All of your examples are "developer" examples. We are focused on end-users, so we don't expect them to use vi, grep, or anything like that.
----- Original Message -----
> On 10/12/2010 10:25 AM, Adrian Crum wrote:
> > Actually, a discussion of database versus filesystem storage of
> > content would be worthwhile. So far there has been some hyperbole,
> > but few facts.
>
> How do you edit database content? What is the procedure? Can a
> simple editor be used? By simple, I mean low-level, like vi.

No, you run the UI editor/configuration tool.

> How do you find all items in your content store that contain a certain
> text word? Can grep and find be used?

Can't use grep.

> How do you handle moving changes between a production server, that is
> being directly managed by the client, and multiple developer
> workstations, which then all have to go first to a staging server?
> Each system in this case has its own set of code changes, and
> config+data changes, that then have to be selectively picked for
> staging, before finally being merged with production.
>
> What about revision control? Can you go back in time to see what the
> code+data looked like? Are there separate revision systems, one for
> the database, and another for the content? And what about the code?

In our use case, there is no code. Only a construction of gadgets to
make up pages. The "code" is for the gadgets. Yes, think of Concrete 5,
Joomla, et al.

> For users/systems that aren't capable of using revision control, is
> there a way for them to mount/browse the content store? Think
> nfs/samba here.

Nope.

> Storing everything directly into the filesystem lets you reuse
> existing tools, that have been perfected over countless generations of
> man-years.

If you're a developer.

> > -Adrian
> >
> > On 10/12/2010 7:32 AM, Marc Morin wrote:
> >> With all the other technologies in ofbiz, seems like webslinger
> >> just adds more stuff onto the pile.
> >> I don't want to argue the technical merits of database or file
> >> system persistence for a CMS, but it appears like ofbiz would
> >> benefit from reducing the number of technologies used, and
> >> increasing the amount of re-use of technologies it already has.
> >>
> >> So, for me, that means entity/service/screen/presentment models
> >> are the core technologies. Galvanizing initiatives around those
> >> appears to provide leverage.
> >>
> >> Now don't get me wrong, the "CMS" that is native in ofbiz is
> >> incomplete and needs a lot of work... and for our use case of
> >> providing self-edited web sites and ecommerce sites, that appears
> >> a better starting point. We have done things to add self editing
> >> etc... but we need to put a lot more effort into that to ensure
> >> that there is a real solution.
> >>
> >> my $0.02.
> >>
> >> Marc Morin
> >> Emforium Group Inc.
> >> ALL-IN Software
> >> 519-772-6824 ext 201
> >> [hidden email]
> >>
> >> ----- Original Message -----
> >>> On 10/11/2010 10:07 PM, Nico Toerl wrote:
> >>>> On 10/12/10 01:41, Adam Heath wrote:
> >>>>
> >>>> <snip>
> >>>>> Now, here it comes. The url to the site.
> >>>>> http://ofbizdemo.brainfood.com/.
> >>>>>
> >>>>> Things to note. There are *no* database calls *at all*. It's
> >>>>> all done with files on disk. History browsing is backed by git,
> >>>>> using jgit to read it directly in java. CSS styling is rather
> >>>>> poor. Most unimplemented pages should do something nice
> >>>>> (instead of a big red 'Not Yet Implemented'); at least there
> >>>>> shouldn't be any exceptions on those pages.
> >>>>
> >>>> that sounded real interesting and i thought i have to have a
> >>>> look at this, unfortunately all i got is:
> >>>>
> >>>> HTTP Status 500 -
> >>>>
> >>>> *type* Exception report
> >>>>
> >>>> *message*
> >>>>
> >>>> *description* _The server encountered an internal error () that
> >>>> prevented it from fulfilling this request._
> >>>>
> >>>> *exception*
> >>>>
> >>>> java.lang.NullPointerException
> >>>> WEB_45$INF.Events.System.Request.DetectUserAgent_46$jn.run(DetectUserAgent.jn:166)
> >>>
> >>> Hmm, nice, thanks.
> >>>
> >>> Your user-agent is:
> >>>
> >>> "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9)
> >>> Gecko/20100824 Firefox/3.6.9"
> >>>
> >>> The (x86_64) is what is causing the problem; I hadn't seen this
> >>> type of string in the wild. The regex doesn't like nested ().
> >>> It's fixed now.
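[Editor's note: the nested-parentheses failure described above is a classic limitation of single-level regexes. As a hedged illustration (this is not the actual DetectUserAgent.jn code, and the class and method names here are invented for the example), a simple depth counter extracts the outer platform group where a pattern like `\(([^()]*)\)` cannot:]

```java
// Illustrative sketch only, not webslinger's DetectUserAgent code.
// A depth counter handles user-agent strings with nested parentheses,
// e.g. "(X11; U; Linux i686 (x86_64); en-GB; ...)", which break a
// naive regex that forbids or lazily matches inner parens.
public class UserAgentPlatform {
    /** Returns the contents of the first top-level (...) group, or null. */
    public static String extractParenthesized(String ua) {
        int depth = 0;
        int start = -1;
        for (int i = 0; i < ua.length(); i++) {
            char c = ua.charAt(i);
            if (c == '(') {
                if (depth == 0) {
                    start = i + 1; // remember where the outer group opens
                }
                depth++;
            } else if (c == ')') {
                depth--;
                if (depth == 0) {
                    return ua.substring(start, i); // outer group closed
                }
            }
        }
        return null; // no parentheses, or unbalanced input
    }

    public static void main(String[] args) {
        String ua = "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; "
                + "rv:1.9.2.9) Gecko/20100824 Firefox/3.6.9";
        // prints "X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9"
        System.out.println(extractParenthesized(ua));
    }
}
```

Returning null instead of throwing on malformed input mirrors the defensive handling a user-agent sniffer needs, since the string arrives straight from the client.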
On 10/12/2010 11:50 AM, Marc Morin wrote:
> All of your examples are "developer" examples. We are focused on
> end-users, so we don't expect them to use vi, grep, or anything like
> that.

That's the problem. Don't treat your developers or users differently.
It means you end up writing *more* code, to support different access
patterns. Just write one set of code, and all modifications are done
the same way.

Yes, we have front-end editing. The url (ofbizdemo.brainfood.com)
doesn't have any editing configured or installed, as I am creating new
editing screens for it (it's a new application). However, that editing
just ends up modifying files, like you would normally do from the
command line, and ends up calling git add/remove/commit, just like
you'd do from the command line.
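[Editor's note: the pattern Adam describes (a front-end "save" that just writes a file and runs the same git commands a developer would type) can be sketched in a few lines of Java. This is an illustrative assumption, not webslinger's actual API; `GitBackedStore`, `gitCommand`, and `runGit` are names invented for the example:]

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the approach described above: the editing UI
// modifies plain files in the content directory, then shells out to
// ordinary git. The names here are illustrative, not webslinger's API.
public class GitBackedStore {
    /** Builds the argv for a git invocation, e.g. ["git", "add", path]. */
    public static List<String> gitCommand(String... args) {
        List<String> argv = new ArrayList<>();
        argv.add("git");
        argv.addAll(Arrays.asList(args));
        return argv;
    }

    /** Runs git in the given working directory; requires git on PATH. */
    public static int runGit(File workDir, String... args)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(gitCommand(args));
        pb.directory(workDir);   // the content store is a plain directory
        pb.inheritIO();          // let git's output reach the console
        return pb.start().waitFor();
    }

    public static void main(String[] args) {
        // A front-end "save" would boil down to something like:
        //   runGit(contentDir, "add", "some/page.html");
        //   runGit(contentDir, "commit", "-m", "Edited page via UI");
        System.out.println(gitCommand("add", "some/page.html"));
    }
}
```

The point of the design is that there is no second storage layer: `git add`/`git commit` from the UI and from a developer's shell operate on the exact same repository.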
It seems that many programmers feel it is better to have the user
spend time learning their system than for the programmer to learn the
user's way of doing things, to reduce the learning curve for the user.

Adam Heath sent the following on 10/12/2010 10:21 AM:
> That's the problem. Don't treat your developers or users differently.
> It means you end up writing *more* code, to support different access
> patterns. Just write one set of code, and all modifications are done
> the same way.
On 10/12/2010 04:31 PM, BJ Freeman wrote:
> It seems that many programmers feel it better to have the user spend
> time to learn their system, than the programmer learn their way of
> doing things to reduce the learning curve for the user.

Exactly. The users of webslinger are those creating the backend
events, or the designers writing the html fragments. They use their
own preferred editor. This means those people don't have to learn a
new way to manipulate the backend files. This is a good thing.

Then, with the backend code and template files stored in the
filesystem, the actual content itself is also stored in the
filesystem. Why have a different storage module for the content than
you do for the application?
On 13/10/2010, at 5:23 AM, Adam Heath wrote:
> On 10/12/2010 11:06 AM, Adrian Crum wrote:
>> On 10/12/2010 8:55 AM, Adam Heath wrote:
>>> <snip>
>>>
>>> Storing everything directly into the filesystem lets you reuse
>>> existing tools, that have been perfected over countless
>>> generations of man-years.
>>
>> I believe Jackrabbit has WebDAV and VFS front ends that will
>> accommodate file system tools. Watch the movie:
>>
>> http://www.day.com/day/en/products/crx.html
>
> A front end is the wrong approach. The store itself is still in some
> other system (a database). The raw store needs to be managed by the
> filesystem, so standard tools can move it between locations, do
> backups, etc. Putting in yet another layer just to emulate file
> access is the wrong way.
>
> <brainstorming>
> Let's make a content management system. Yeah! Let's do it! So, we
> need to be able to search for content, and maintain links between
> relationships. Let's write brand new code to do that, and put it in
> the database.
>
> Ok, next, we need to pull the information out of the database, and
> serve it thru an http server. Oh, damn, it's not running fast. Let's
> have a cache that resides someplace faster than the database. Oh, I
> know, memory! Shit, it's using too much memory. Let's put the cache
> in the filesystem. Updates now remove the cache, and have it get
> rebuilt. That means read-only access is faster, but updates then
> have to rebuild tons of stuff.
>
> Hmm. We have a designer request to be able to use photoshop to edit
> images. The server in question is a preview server, is hosted, and
> not on his immediate network. Let's create a new webdav access
> method, to make the content look like a filesystem.
>
> Our system is getting heavily loaded. Let's have a separate database
> server, with multiple web frontends. Cool, that works.
>
> The system is still heavily loaded, we need a super-huge database
> server.
>
> Crap, still falling over. Time to have multiple read-only databases.
> </brainstorming>
>
> or...
>
> <brainstorming>
> Let's store all our content into the filesystem. That way, things
> like ExpanDrive (remote ssh fs access for windows) will work for
> remote hosted machines. Caching isn't a problem anymore, as the raw
> store is in files. Servers have been doing file sharing for decades;
> it's a well known problem. Let's have someone else maintain the file
> sharing code, we'll just use it to support multiple frontends. And,
> ooh, our designers will be able to use the tools they are familiar
> with to manipulate things. And, we won't have the extra code running
> to maintain all the stuff in the multiple databases. Cool, we can
> even use git, with rebase and merge, to do all sorts of fancy
> branching and push/pulling between multiple development scenarios.
> </brainstorming>
>
> If the raw store was in the filesystem in the first place, then all
> this additional layering wouldn't be needed, to make the final
> output end up looking like a filesystem, which is what was being
> replaced all along.

To be honest it makes it a little difficult to take you seriously when
you completely disregard the JCR/Jackrabbit approach without even the
slightest hint of objectivity:

if (!myWay) {
    return highway;
}

The JCR was produced by an expert working group driven largely by Day
Software, which has Roy T. Fielding as their chief scientist. While I
know next to nothing about what constitutes a great CMS
infrastructure, I cannot simply accept that you are right and they are
wrong, especially when you make no attempt whatsoever to paint the
full picture. I mean, are you suggesting that a file system based CMS
has no downsides? Your approach is all pros and theirs all cons?

Regards
Scott
We think it's interesting and handy to manage our web content using
git. It's hard to do that with JackRabbit, especially in its preferred
configuration of a database-backed store. I think that is a pretty
reasoned explanation. I don't see Adam or I casting stones at your CMS
test application, so please consider lightening up. Thanks. :-D

Scott Gray wrote:
> To be honest it makes it a little difficult to take you seriously
> when you completely disregard the JCR/Jackrabbit approach without
> even the slightest hint of objectivity:
> if (!myWay) {
>     return highway;
> }
> The JCR was produced by an expert working group driven largely by
> Day Software, which has Roy T. Fielding as their chief scientist.
> <snip>

--
Ean Schuessler, CTO
[hidden email]
214-720-0700 x 315
Brainfood, Inc.
http://www.brainfood.com
This isn't about casting stones or attempting to belittle webslinger, which I have no doubt is a fantastic piece of work and meets its stated goals brilliantly. This is about debating why it should be included in OFBiz as a tightly integrated CMS and how well webslinger's goals match up with OFBiz's content requirements (whatever they are, I don't pretend to know). Webslinger was included in the framework with little to no discussion and I'm trying to take the opportunity to have that discussion now.
I'm not trying to add FUD to the possibility of webslinger taking a
more active role in OFBiz, I'm just trying to understand what is being
proposed and what the project stands to gain or lose by accepting that
proposal.

Version control with git and the ability to edit content with vi is
great, but what are we giving up in exchange for that? Surely there
must be something lacking in a file system approach if the extremely
vast majority of CMS vendors have shunned it in favor of a database
(or database + file system) approach? I just cannot accept that all of
these vendors simply said "durp durp RDBMS! durp durp". What about
non-hierarchical node linking? Content meta-data? Transaction
management? Referential integrity? Node types?

Regards
Scott

On 13/10/2010, at 11:01 AM, Ean Schuessler wrote:
> We think it's interesting and handy to manage our web content using
> git. It's hard to do that with JackRabbit, especially in its
> preferred configuration of a database-backed store. I think that is
> a pretty reasoned explanation. I don't see Adam or I casting stones
> at your CMS test application, so please consider lightening up.
> Thanks. :-D
> <snip>
On 10/12/2010 3:39 PM, Scott Gray wrote:
> This is about debating why it should be included in OFBiz as a
> tightly integrated CMS and how well webslinger's goals match up with
> OFBiz's content requirements (whatever they are, I don't pretend to
> know).

I thought one of the goals was to replace OFBiz's content repository
with something off-the-shelf. The idea behind using JCR was to avoid
being locked into a specific product. In other words, if OFBiz talks
to JCR, then OFBiz can use any JCR-compliant repository. That's why I
asked Adam if there would be a JCR interface for webslinger.
Webslinger could be one of many JCR-compliant repositories to choose
from.

I believe another thing that comes into play in this discussion is how
people are picturing a CMS being used in OFBiz. I get the impression
Adam pictures it being used for website authoring. On the other hand,
I picture OFBiz retrieving documents from existing corporate
repositories to be served up in web pages. So, an "OFBiz CMS" might
mean different things to different people, and each person's
requirements might drive the decision to use Webslinger or something
else.

-Adrian

> Webslinger was included in the framework with little to no
> discussion and I'm trying to take the opportunity to have that
> discussion now.
> <snip>
In reply to this post by Adam Heath-2
> Then, with the backend code and template files stored in the
> filesystem, the actual content itself is also stored in the
> filesystem. Why have a different storage module for the content than
> you do for the application?

I don't think it is a good idea to store your code and data together.
Data is something you need to back up regularly, while your code is
generally in binary form and easily reproducible, such as by deploying
a war or jar file.
> To be honest it makes it a little difficult to take you seriously
> when you completely disregard the JCR/Jackrabbit approach without
> even the slightest hint of objectivity:
> if (!myWay) {
>     return highway;
> }
> <snip>

Subversion is a good example of using a database to store the contents
(source). Subversion does not use flat files to store the files. It
uses either BDB or FSFS. Although FSFS is a single-file filesystem, it
is not a plain file to be manipulated directly. Generally,
applications using filesystem files add their own header information.
Scott Gray wrote:
> On 13/10/2010, at 5:23 AM, Adam Heath wrote:
>> <snip>
>>
>> If the raw store was in the filesystem in the first place, then all
>> this additional layering wouldn't be needed, to make the final
>> output end up looking like a filesystem, which is what was being
>> replaced all along.
>
> To be honest it makes it a little difficult to take you seriously
> when you completely disregard the JCR/Jackrabbit approach without
> even the slightest hint of objectivity:
> if (!myWay) {
>     return highway;
> }
> The JCR was produced by an expert working group driven largely by
> Day Software, which has Roy T. Fielding as their chief scientist.
> While I know next to nothing about what constitutes a great CMS
> infrastructure, I cannot simply accept that you are right and they
> are wrong, especially when you make no attempt whatsoever to paint
> the full picture. I mean, are you suggesting that a file system
> based CMS has no downsides? Your approach is all pros and theirs
> all cons?
>
> Regards
> Scott

Minor detail, but I think Roy T. Fielding was appointed chief
scientist after the JCR was produced.

Jacques
On 13/10/2010, at 8:00 PM, Jacques Le Roux wrote:
> Scott Gray wrote:
>> On 13/10/2010, at 5:23 AM, Adam Heath wrote:
>> <snip>
>
> Minor detail, but I think Roy T. Fielding was appointed chief
> scientist after the JCR was produced.

Since 2002: http://roy.gbiv.com/vita.html
JSR-170 was released in 2005
JSR-283 in 2009

Regards
Scott
Scott Gray wrote:
> On 13/10/2010, at 8:00 PM, Jacques Le Roux wrote: > >> Scott Gray wrote: >>> On 13/10/2010, at 5:23 AM, Adam Heath wrote: >>>> On 10/12/2010 11:06 AM, Adrian Crum wrote: >>>>> On 10/12/2010 8:55 AM, Adam Heath wrote: >>>>>> On 10/12/2010 10:25 AM, Adrian Crum wrote: >>>>>>> Actually, a discussion of database versus filesystem storage of content >>>>>>> would be worthwhile. So far there has been some hyperbole, but few >>>>>>> facts. >>>>>> How do you edit database content? What is the procedure? Can a simple >>>>>> editor be used? By simple, I mean low-level, like vi. >>>>>> How do you find all items in your content store that contain a certain >>>>>> text word? Can grep and find be used? >>>>>> How do you handle moving changes between a production server, that is >>>>>> being directly managed by the client, and multiple developer >>>>>> workstations, which then all have to go first to a staging server? Each >>>>>> system in this case has its own set of code changes, and config+data >>>>>> changes, that then have to be selectively picked for staging, before >>>>>> finally being merged with production. >>>>>> What about revision control? Can you go back in time to see what the >>>>>> code+data looked like? Are there separate revision systems, one for the >>>>>> database, and another for the content? And what about the code? >>>>>> For users/systems that aren't capable of using revision control, is >>>>>> there a way for them to mount/browse the content store? Think nfs/samba >>>>>> here. >>>>>> Storing everything directly into the filesystem lets you reuse existing >>>>>> tools, that have been perfected over countless generations of man-years. >>>>> I believe Jackrabbit has WebDAV and VFS front ends that will accommodate >>>>> file system tools. Watch the movie: >>>>> http://www.day.com/day/en/products/crx.html >>>> Front end it wrong. It still being the store itself is in some other system(database). 
>>>> The raw store needs to be managed by the filesystem, so standard tools can move it between locations, or do backups, etc. Putting yet another layer just to emulate file access is the wrong way.
>>>>
>>>> <brainstorming>
>>>> Let's make a content management system. Yeah! Let's do it! So, we need to be able to search for content, and maintain links between relationships. Let's write brand new code to do that, and put it in the database. Ok, next, we need to pull the information out of the database, and serve it thru an http server. Oh, damn, it's not running fast. Let's have a cache that resides someplace faster than the database. Oh, I know, memory! Shit, it's using too much memory. Let's put the cache in the filesystem. Updates now remove the cache, and have it get rebuilt. That means read-only access is faster, but updates then have to rebuild tons of stuff. Hmm. We have a designer request to be able to use photoshop to edit images. The server in question is a preview server, is hosted, and not on his immediate network. Let's create a new webdav access method, to make the content look like a filesystem. Our system is getting heavily loaded. Let's have a separate database server, with multiple web frontends. Cool, that works. The system is still heavily loaded, we need a super-huge database server. Crap, still falling over. Time to have multiple read-only databases.
>>>> </brainstorming>
>>>>
>>>> or...
>>>>
>>>> <brainstorming>
>>>> Let's store all our content into the filesystem. That way, things like ExpanDrive (remote ssh fs access for windows) will work for remote hosted machines. Caching isn't a problem anymore, as the raw store is in files. Servers have been doing file sharing for decades, it's a well known problem. Let's have someone else maintain the file sharing code, we'll just use it to support multiple frontends.
>>>> And, ooh, our designers will be able to use the tools they are familiar with to manipulate things. And, we won't have the extra code running to maintain all the stuff in the multiple databases. Cool, we can even use git, with rebase and merge, to do all sorts of fancy branching and push/pulling between multiple development scenarios.
>>>> </brainstorming>
>>>>
>>>> If the raw store was in the filesystem in the first place, then all this additional layering wouldn't be needed, to make the final output end up looking like a filesystem, which is what was being replaced all along.
>>>
>>> To be honest it makes it a little difficult to take you seriously when you completely disregard the JCR/Jackrabbit approach without even the slightest hint of objectivity:
>>>
>>> if (!myWay) {
>>>     return highway;
>>> }
>>>
>>> The JCR was produced by an expert working group driven largely by Day Software, which has Roy T. Fielding as their chief scientist. While I know next to nothing about what constitutes a great CMS infrastructure, I cannot simply accept that you are right and they are wrong, especially when you make no attempt whatsoever to paint the full picture. I mean, are you suggesting that a file system based CMS has no downsides? Your approach is filled with pros and theirs is all cons?
>>>
>>> Regards
>>> Scott
>>
>> Minor detail, but I think Roy T. Fielding was appointed chief scientist after the JCR had been produced

Indeed, sorry Jacques

> Since 2002: http://roy.gbiv.com/vita.html
> JSR-170 was released 2005
> JSR-283 in 2009
>
> Regards
> Scott
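The git-backed workflow argued for above (developer workstations feeding a staging branch, which is then merged into production) can be sketched roughly as follows, assuming the content store is simply a directory under git. All paths, branch names, and file names here are hypothetical, for illustration only:

```shell
# Sketch: a filesystem content store managed with git, where changes
# flow from a developer topic branch through staging into production.
set -e
rm -rf /tmp/content-demo && mkdir -p /tmp/content-demo
cd /tmp/content-demo
git init -q
git config user.email editor@example.com
git config user.name "Content Editor"
git checkout -q -b production

# Seed the production content store with one page.
echo '<h1>Home</h1>' > index.html
git add index.html
git commit -q -m 'initial content'
git branch staging

# A developer edits the page on a topic branch...
git checkout -q -b reword-homepage
echo '<h1>Welcome</h1>' > index.html
git commit -q -am 'reword homepage heading'

# ...the change is picked into staging, then merged to production.
git checkout -q staging
git merge -q --no-edit reword-homepage
git checkout -q production
git merge -q --no-edit staging

grep Welcome index.html
```

Because the store is plain files, grep, backups, and push/pull between machines all come for free, which is exactly the point being made in the brainstorming above.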
In reply to this post by Scott Gray-2
I agree that databases are very, very powerful but they also introduce
fundamental limitations. It depends on your priorities.

For instance, we've found that the processes companies pursue for editing documentation can be every bit as fluid, complex and partitioned as source code. I'd ask you, as a serious thought experiment, to consider what the ramifications of managing OFBiz itself in a Jackrabbit repository would be. Please don't just punt on me and say "oh, well source code is different". That's an argument by dismissal and glosses over real-world situations where you might have a pilot group editing a set of process documentation based on the core corporate standards, folding in changes from "HEAD" as well as developing their own changes in conjunction. I've just personally found that the distributed revision control function is fundamental to managing the kinds of real content that ends up on websites. Maybe you haven't.

Scott Gray wrote:
> This isn't about casting stones or attempting to belittle webslinger, which I have no doubt is a fantastic piece of work and meets its stated goals brilliantly. This is about debating why it should be included in OFBiz as a tightly integrated CMS and how well webslinger's goals match up with OFBiz's content requirements (whatever they are, I don't pretend to know). Webslinger was included in the framework with little to no discussion and I'm trying to take the opportunity to have that discussion now.
>
> I'm not trying to add FUD to the possibility of webslinger taking a more active role in OFBiz, I'm just trying to understand what is being proposed and what the project stands to gain or lose by accepting that proposal.
>
> Version control with git and the ability to edit content with vi is great but what are we giving up in exchange for that? Surely there must be something lacking in a file system approach if the extremely vast majority of CMS vendors have shunned it in favor of a database (or database + file system) approach?
> I just cannot accept that all of these vendors simply said "durp durp RDBMS! durp durp". What about non-hierarchical node linking? Content meta-data? Transaction management? Referential integrity? Node types?

--
Ean Schuessler, CTO
[hidden email]
214-720-0700 x 315
Brainfood, Inc.
http://www.brainfood.com
For me it all comes down to a couple of basic but very important points:
- Webslinger by your own admission takes a vastly different approach from anything else on the market, and you're asking the OFBiz community to take that risk along with you and ignore what everyone else is doing.
- Webslinger has no community behind it and is the product and vision of a single company (and within that probably only a single developer understands it deeply). OFBiz takes a big risk by depending upon it in any meaningful way for bugfixes, support and documentation, both now and in the future. Name me one other major external library in OFBiz that doesn't come from a well established open source community.

I don't pretend for a second to be an expert on the topic of content management but I can see those risks staring me in the face. At the end of the day if the community wants webslinger then they'll get it but blindly ignoring the risks does no one any good.

Regards
Scott

On 14/10/2010, at 12:34 PM, Ean Schuessler wrote:
> I agree that databases are very, very powerful but they also introduce
> fundamental limitations. It depends on your priorities.
>
> For instance, we've found that the processes companies pursue for
> editing documentation can be every bit as fluid, complex and partitioned
> as source code. I'd ask you, as a serious thought experiment, to
> consider what the ramifications of managing OFBiz itself in a Jackrabbit
> repository. Please don't just punt on me and say "oh, well source code
> is different". That's an argument by dismissal and glosses over
> real-world situations where you might have a pilot group editing a set
> of process documentation based on the core corporate standards, folding
> in changes from "HEAD" as well as developing their own changes in
> conjunction. I've just personally found that the distributed revision
> control function is fundamental to managing the kinds of real content
> that ends up on websites. Maybe you haven't.
>
> Scott Gray wrote:
>> This isn't about casting stones or attempting to belittle webslinger, which I have no doubt is a fantastic piece of work and meets its stated goals brilliantly. This is about debating why it should be included in OFBiz as a tightly integrated CMS and how well webslinger's goals match up with OFBiz's content requirements (whatever they are, I don't pretend to know). Webslinger was included in the framework with little to no discussion and I'm trying to take the opportunity to have that discussion now.
>>
>> I'm not trying to add FUD to the possibility of webslinger taking a more active role in OFBiz, I'm just trying to understand what is being proposed and what the project stands to gain or lose by accepting that proposal.
>>
>> Version control with git and the ability to edit content with vi is great but what are we giving up in exchange for that? Surely there must be something lacking in a file system approach if the extremely vast majority of CMS vendors have shunned it in favor of a database (or database + file system) approach? I just cannot accept that all of these vendors simply said "durp durp RDBMS! durp durp". What about non-hierarchical node linking? Content meta-data? Transaction management? Referential integrity? Node types?
>>
> --
> Ean Schuessler, CTO
> [hidden email]
> 214-720-0700 x 315
> Brainfood, Inc.
> http://www.brainfood.com
In reply to this post by Ean Schuessler
In the early nineties I was hired to take the MSDN portion of Microsoft
into a document-library type of design. A user would give a link to a document, and the app would then parse the document for the various MIME types it contained and store the pieces on a file system on the network, as well as provide keyword search and associative links from one document to another. This was a network-wide system that covered many Microsoft offices all over the world. It was more SGML, before HTML and XML became de facto standards.

It was database-centric in that all the network references to the pieces were in the database. Each department would set up its defaults for where its document pieces were stored. A department that was doing development in a particular area could call up all the references already in the database and associate them with its own work.

Still, to me this is the best marriage between the two worlds. I see this, along with using OpenOffice for the actual user interface, as the way to have a robust document system. I also believe the basic model for the OFBiz document container could be enhanced to allow file-type documents in the data resources.

Ean Schuessler sent the following on 10/13/2010 4:34 PM:
> I agree that databases are very, very powerful but they also introduce
> fundamental limitations. It depends on your priorities.
>
> For instance, we've found that the processes companies pursue for
> editing documentation can be every bit as fluid, complex and partitioned
> as source code. I'd ask you, as a serious thought experiment, to
> consider what the ramifications of managing OFBiz itself in a Jackrabbit
> repository. Please don't just punt on me and say "oh, well source code
> is different". That's an argument by dismissal and glosses over
> real-world situations where you might have a pilot group editing a set
> of process documentation based on the core corporate standards, folding
> in changes from "HEAD" as well as developing their own changes in
> conjunction.
> I've just personally found that the distributed revision
> control function is fundamental to managing the kinds of real content
> that ends up on websites. Maybe you haven't.
>
> Scott Gray wrote:
>> This isn't about casting stones or attempting to belittle webslinger, which I have no doubt is a fantastic piece of work and meets its stated goals brilliantly. This is about debating why it should be included in OFBiz as a tightly integrated CMS and how well webslinger's goals match up with OFBiz's content requirements (whatever they are, I don't pretend to know). Webslinger was included in the framework with little to no discussion and I'm trying to take the opportunity to have that discussion now.
>>
>> I'm not trying to add FUD to the possibility of webslinger taking a more active role in OFBiz, I'm just trying to understand what is being proposed and what the project stands to gain or lose by accepting that proposal.
>>
>> Version control with git and the ability to edit content with vi is great but what are we giving up in exchange for that? Surely there must be something lacking in a file system approach if the extremely vast majority of CMS vendors have shunned it in favor of a database (or database + file system) approach? I just cannot accept that all of these vendors simply said "durp durp RDBMS! durp durp". What about non-hierarchical node linking? Content meta-data? Transaction management? Referential integrity? Node types?
>>
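The hybrid design described in this post (document pieces as plain files on a network file system, with the references and keyword index kept in a database) can be sketched roughly like this. The schema, table names, and functions are purely illustrative, not taken from MSDN, OFBiz, or any real system:

```python
# Sketch of a hybrid store: content lives as files on disk, while
# keyword and associative-link metadata live in a database.
import os
import sqlite3
import tempfile

root = tempfile.mkdtemp()          # the department's file store
db = sqlite3.connect(":memory:")   # network references + keywords
db.executescript("""
CREATE TABLE doc(id INTEGER PRIMARY KEY, path TEXT);
CREATE TABLE keyword(doc_id INTEGER, word TEXT);
CREATE TABLE link(from_id INTEGER, to_id INTEGER);
""")

def store(name, text, keywords):
    """Write the piece to the filesystem, then index it in the database."""
    path = os.path.join(root, name)
    with open(path, "w") as f:
        f.write(text)
    doc_id = db.execute("INSERT INTO doc(path) VALUES (?)", (path,)).lastrowid
    db.executemany("INSERT INTO keyword VALUES (?, ?)",
                   [(doc_id, w) for w in keywords])
    return doc_id

def search(word):
    """Keyword search resolves through the database to file paths."""
    rows = db.execute("SELECT d.path FROM doc d JOIN keyword k"
                      " ON k.doc_id = d.id WHERE k.word = ?", (word,))
    return [r[0] for r in rows]

a = store("spec.txt", "widget spec", ["widget", "spec"])
b = store("notes.txt", "meeting notes on widgets", ["widget", "notes"])
db.execute("INSERT INTO link VALUES (?, ?)", (a, b))  # associative link

print(search("widget"))
```

The files themselves remain editable with ordinary desktop tools, while search and cross-document association go through the database, which is the "marriage between the two worlds" the post argues for.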