OEF::Why::Queue |
- computing convenience from windows perspective/viewpoint:
  - base level:
    - gnu -> cygwin
  - user-level:
    - two hotsync-buttons:
      - synchronize files with others:
        RsyncHere|RsyncThere -> rsync -> cygwin -> SSH-client-library
        -> (SSH-daemon -> [cygwin] -> remote-filesystem)
      - synchronize addresses with others:
        Outlook -> OLE -> outlook2ldap -> Net::LDAP[S]
        -> (LDAP-daemon -> (LDAP-daemon-storage-impl.: dbm|proxy|mysql?) -> remote-filesystem)
    - various "init"-scripts:
      - rsync: like the rsync.conf-preparation from ilo.de (it's a .bat)
      - (vs.) other ssh-keygen-automation-scripts
      - windows-settings (explorer, internet explorer)
      - office (ms word, ms excel)
      - unattended.sourceforge.net
  - ideas (for the XyzHere-series):
    - for the Internet Explorer: "mail/post link (including top-page) here"
      (to a preconfigured|selected target)
- use Tom or Pixie for live-object-passivation?
  - the current attempt is heading in the Tom direction
    (Data::Storage::Handler::Tom, OEF::Component::Task)
- have Data::Map load its mappings from (e.g.):
  - a csv-file
  - a ldap-dn
- first components for an "OpenAccess": - schema-editor (based on Tangram::Relational::Schema) - relationship-/association-editor (based on Tangram::Relational::Schema) - im-/export - metadata-editor (based on Data::Transfer::Sync) - task-viewer for Tasks: - deploy-/retreat schema - run im-/export
- integrate Torus (outlook2ldap) with Data::Transfer::Sync
- fully persist native perl objects to be able to a) suspend processing,
  b) transfer the object to another process's scope (memory) and c) resume processing
  - Data::Dumper? no, that's just for data
  - Pixie? no, requires Perl 5.8.0
  - FreezeThaw - converting Perl structures to strings and back?
    hmm, seems to be for data only
  - Storable?
  - Memoize? no, just caches function return values (trades space for time)
    ("Automatically cache results of functions")
  - Attribute::Memoize? no, this is just an attribute-style interface to Memoize
  - TOM?
  - IPC::Cache - "a perl module that implements an object storage space where
    data is persisted across process boundaries"??? ... seems to be for
    data-persistence as well ;(
  - Aspect? hmmm, maybe .....
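  - a minimal sketch of the suspend/transfer/resume cycle with Storable
    (note: by default Storable refuses CODE refs, so it covers data-only
    objects - which is exactly the limitation noted above):

      # freeze an object, move the string across a process boundary, thaw it there
      use Storable qw(freeze thaw);
      my $frozen = freeze($task);     # $task: some blessed perl object
      # ... write $frozen to a file, pipe or socket; read it in another process ...
      my $task2  = thaw($frozen);     # same class, same data - but no CODE refs!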
- OQL on top of Tangram::QueryObject???
- new perl modules:
  - Data::Filter
  - Regexp::Group
  - Getopt::Simple
  - Date::Merge
- take care to declare your Plugin-modules as 'mixins' to BizWorks::Process
- take care to specify BizWorks::Process-modules correctly (including the full
  perl-namespace path in the 'package'-declaration at the top of each file)
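  - a hypothetical plugin skeleton following the two rules above (module name
    and parent class are assumptions):

      package BizWorks::Process::MyPlugin;   # full perl-namespace in the 'package'-declaration
      use base qw( BizWorks::Process );      # declared as mixin/child of BizWorks::Process
      sub run {
          my $self = shift;
          # plugin code goes here
      }
      1;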
~~~ 3
- refactor use of class 'SystemEvent'
  - maybe rename it so the classname already shows its scope (e.g. 'OefEvent')
  - 1. generic inject on schema-level (like already done successfully with guid)
  - auto-select the correct database at wiring-time (addLogDispatchHandler)
- module-refactoring & namespace garbage collection: + refactored mechanism to use sub-modules inside the BizWorks::Process-namespace (anchorless) - refactor BizWorks::Process to be anchorless itself - spread new loading-mechanism over already used scripts and integrate to API to be able to do some kind of ... ... "remote component loading" (use in admin panel!) ("admin panel"=admin-area/page/site/oberfläche/bereich? -- !) - refactor parts of BizWorks to OEF? (OEF::Component, OEF::Process) again, think about looking at CPAN's PApp (more in detail!) here - refactor files in etc/ to be compliant to some - not yet developed (tbd!) - kind of configuration-metabase-structure ---> look at some already investigated modules from CPAN (Meta::XYZ???) - refactor internal structure of files in doc/ to be html-renderable. use pod? (just with the new ones starting from 0.03)
- use the session (backend->frontend) described below to also propagate
  error-messages from backend to frontend
  --> each frontend uses one remote-/backend-session ---> restricted access! (tbd)
  --> each admin(-area) uses one remote-/backend-session ---> full access!
- document the ids (identifiers) used:
  - oid: object id (created by the innards of Tangram)
  - guid: global id (created by a hook on object-insertion inside Tangram::Storage::_insert)
  - serial: hash built from the content of the object (payload) via (e.g.)
    Digest::MD5, identifying it somehow...?
  - sid: session id (created for _each_ object per session inside
    BizWorks::API::Session::_useobject and also stored there - unlike the
    others in this list, which travel along with the object itself)
- about: global identifiers
  - if you're designing a fully distributed (global) system, it's important that
    your objects have some kind of "global identifier" assigned (e.g. generated
    from Data::UUID or similar) when used across possibly disconnected systems.
    --> each disconnection must/may/can be treated as a "redeploy" (an operation
    which invalidates e.g. all oids and/or breaks all relationships (or even more!))
  - so relying on per-deployment identifiers (oids) from one of n systems will
    certainly break your neck :-)
  - how to implement this?
    - implement & use session-based mechanisms! (compare ASP/php sessions)
    - assure you have full session-based communication between _all_ layers ;-)
      .... else communication using identifiers will break at this very layer
    - provide a guid -> <new-id-created-representing-the-master-guid> mapping
      for each session
    - integrate these new mechanisms as a new layer between backend and frontend
      --> this will give you multiple frontends talking to the backend with
      their own identifiers! (and probably much more....)
  - use this stuff to enhance infrastructure at system-setup/redeploy-/take-up/down-level:
    - here we could apply (more or less) generic rules in the process-flow:
      - flush session-metadata (guid->sid mapping) ...
      - ...
    - this gives us:
      - the possibility of having backend-db-redeploys transparent for the rest of the system
      - the possibility to drive "older" frontends (because we already have the
        mapping to the old identifiers) ....
        ... just introduce a "VERSION" to actually measure age!
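  - a minimal sketch of guid-assignment with Data::UUID, plus a hypothetical
    per-session guid->sid mapping as described above (the $session structure
    is an assumption):

      use Data::UUID;
      my $ug   = Data::UUID->new;
      my $guid = $ug->create_str;    # e.g. "4162F712-1DD2-11B2-B17E-C09EFE1DC403"
      # hypothetical session-local mapping: hand out small per-session ids
      $session->{map}->{$guid} = ++$session->{sid_counter};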
- about: data
  verbs: pattern matching, filtering, scanning, searching, reading, writing,
    munging, transforming, translating, migrating, finding, replacing, grep,
    stepping, viewing, reporting, including, feeding, syncing, encapsulating,
    sending, transmitting, receiving, serializing, unserializing, mapping, meta,
    merging, looking up, indexing, sharing, distributing, centralizing, backing up,
    restoring, comparing, diffing, matrix manipulation, math manipulation,
    visualizing, iterating through, traversing, calculating, accumulating,
    freezing, processing, routing, transferring, converting, loading, saving,
    storing, activating, archiving
  nouns: transitioning, set, structure, object, file, database, memory, filesystem
  combined nouns: --> combine the ones above
  adjectives: nested, flat, instantiated, raw, real, test, production, ...
  ----> puzzle all together at random ;-)
- what's that?
  - why Perl? - hate writing too much code?
  - why Tangram and Class::Tangram? - hate writing sql? - hate writing constructors?
  - these are the simplest reasons from a RAD perspective; (OEF) should give you
    much more, please read on... (and elsewhere)
- PApp::Storable 2.04 - ;-) try to interconnect with Tangram 2.04.....
- Data::Filter ... - ... utilizing Regexp::Group
- _don't_ start OEF!!! try to use PApp!!! (+POE, +P5EE)
  ---> write components for these (and maybe try to tie them together in a convenient way)
  ---> like already done with Data::Storage
  ---> to _really_ start building higher level components for/with them
       (for real/bigger/distributed applications)
  ---> "OpenDavid", "OpenAccess", "OpenExchange"
- about: a database: (+rdbms, +odbms)
  - basic actions (rdbms):
    - holding
    - storing and retrieving by identifier
    - querying
  - features (oo-style):
    - orm (object relational mapper: maps oo-object-properties to table-fields)
    - schema (maps oo-classes to oo-objects)
    - relations
    - constraints
    - object inheritance
  - highlevel services:
    - transactions
    - stored procedures, views
    - replication (master -> slave)
- refactoring: use PerlIO::via to get _real_ layers (in this case on a per-file basis)?
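  - a minimal sketch of such a layer (layer name and behaviour are assumptions;
    it just counts lines read through a filehandle):

      package PerlIO::via::LineCount;
      sub PUSHED { my ($class) = @_; my $count = 0; bless \$count, $class }
      sub FILL   { my ($self, $fh) = @_; my $line = <$fh>; $$self++ if defined $line; $line }
      package main;
      open my $in, '<:via(LineCount)', 'data.csv' or die $!;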
~~~ 2
- refactor perl/libs/misc
- refactor perl/libs/libp.pm into Data::Conversion:: and/or Data::
~~~ 1
- Tangram::Storage/Data::UUID:
  - rewrite the _full_ functionality of Data::UUID as a pure-perl version
    Data::UUID::PP or Data::UUID::PurePerl
  + Data::UUID::PurePerl which uses Digest::MD5 with a random payload internally
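  - a hypothetical sketch of that pure-perl version (no MAC address, not
    RFC-compliant - just Digest::MD5 over a random-ish payload, formatted like a UUID):

      package Data::UUID::PurePerl;
      use Digest::MD5 qw(md5_hex);
      sub new { bless {}, shift }
      sub create_str {
          my $hex = md5_hex(join '|', time, $$, rand, {});  # random-ish payload
          return uc join '-', unpack 'A8 A4 A4 A4 A12', $hex;
      }
      1;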
- OEF::Component::Transfer should get commands issued via "tell.pl transfer ...."
- documentation/self-documentation:
  - doc/meta (for self-documenting purposes; place sloccounts and
    cvs-graphs/commitinfo/commitgraph here)
  - doc/tracker (bug-, feature- and issue-tracking-system)
    - interface with cvs (file based use) somehow.....
- cmd.pl/command.pl --> tell.pl
- core-services to be abstracted out as components:
  - lowlevel:
    - object manipulation (query, load, edit, save)
    - data manipulation (transformation, conversion, encoding, encapsulation)
    - communication services (rpc-xml, soap, file-system-based (pipe, socket,
      dir/file-poll/-watch), other rpc-mechanisms, net/raw (tcp, udp, ...))
  - highlevel (utilizing the above components and cross-utilizing themselves):
    - reporting facility
    - logging
    - xyz
    - processing facility (here is the actual code!)
    - routing facility
    (- view generation)
- implement command-abstraction: (see the sketch below)
  - use the router/hop-mechanism
  - declare commands in etc/BizWorks/Commands.pm
    (or insert them into the db and enhance the router/hop-mechanism with "nodes")
  - map them to:
    - a description
    - an argument-container (hash containing passed-in arguments: this should be
      processed via some kind of "request"-object)
  - at the end, a "response"-object should be the result of the command-routing process
  - given this, it should be easy to switch between synchronous/asynchronous
    processing on command-level (speaking "on-the-fly")
  - rewire this to a new script: cmd.pl/command.pl
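  - a hypothetical sketch of etc/BizWorks/Commands.pm following the mapping
    above (all keys and the handler name are assumptions):

      package BizWorks::Commands;
      our %commands = (
          transfer => {
              description => 'run a transfer between declared databases',
              arguments   => [qw( dbkey action )],  # gets wrapped into the "request"-object
              handler     => 'BizWorks::Process::Transfer',
              mode        => 'sync',                # or 'async' - switchable per command
          },
      );
      1;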
- add diff/merge on per-node-basis (inside!) for Data::Transfer::Sync as an alternative to md5-checksum-hashing
- write a small tool to scan source-code for various often-used patterns
  (see the sketch below):
  - 1st run: TODO, HACK, REVIEW
  - 2nd run: lowercase versions of the items listed above
  - report text/code after that (make the context-width (before, after) configurable)
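  - a minimal sketch of that scanner (patterns and context-width configurable):

      #!/usr/bin/perl
      use strict; use warnings;
      my @patterns = qw( TODO HACK REVIEW );  # 2nd run: push lc($_) for each pattern
      my $context  = 2;                       # lines of context before/after a match
      my @lines = <>;
      for my $i (0 .. $#lines) {
          next unless grep { index($lines[$i], $_) >= 0 } @patterns;
          my $from = $i - $context; $from = 0       if $from < 0;
          my $to   = $i + $context; $to   = $#lines if $to > $#lines;
          print "=== match at line ", $i + 1, "\n", @lines[$from .. $to], "\n";
      }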
- backend-sync: detect changes to the underlying file when using DBD::CSV
  while syncing - it will get destroyed otherwise ;)
- central Data::UUID authority (two steps):
  - move all code to a central location (BizWorks::Helper::UUID)
  - central GUID-server/authority (like a ticket-authority)
    (RPC around BizWorks::Helper::UUID, etc.)
- integrate a mechanism to disable new-guid-assignment when calling Tangram::Storage::_insert - purpose: option for db_backup.pl dbkey=backend --action=restore
- introduce generic "advanced.error.propagation"-mechanism for functions in Data::Storage::Handler to let them return more verbose error-messages if any errors actually occour - don't just return and don't pass back verbose strings in return values directly! => pass back response-hash = { errcode => 5, errmsg => '<verbose error message>', ok => 0, } - go oo via Event? - go exception-style via try { ... } catch { ... }?
- ideas to abstract the rotation-mechanism: (feed.pl --action=rotate)
  - rotate.by.flag (what we have now, implemented in a hardwired fashion (backend_import.pl))
  - rotate.by.date (give a date to rotate by ....)
- an abstract/third language to write validation processes in?
  - intention: include this (identical) code at _two_ system-nodes independent
    of the language used (language-binding needed!) (python, javascript?)
- libraries/modules: currency and/or datetime calculation for php and/or perl!?
  - fields of "datetime": date, time, timezone
  - fields of "currency": from -> to, setBase, getByCountry, (setBaseByCountry???)
  - perl:
    - Date::Manip
  - php:
    - PEAR/Date
    - PEAR/Date/TimeZone???
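  - a minimal perl sketch covering the "datetime"-fields with Date::Manip
    (the currency part would still need a separate module or custom code):

      use Date::Manip;
      my $date = ParseDate("2003-01-28 10:38:39");
      print UnixDate($date, "%Y-%m-%d"), "\n";        # date
      print UnixDate($date, "%H:%M:%S"), "\n";        # time
      print UnixDate($date, "%Z"), "\n";              # timezone
      my $delta = DateCalc($date, ParseDate("now"));  # datetime calculation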
- frontend-sync - new concept:
  - sync: two features - not more!
    1. use guids for identification of nodes (wipe out usage of the tangram-oid!!!:
       now the synchronization-layer is independent from tangram)
    2. generic hook for imports/exports: wrap xml import/export around that?
  - maybe: don't do the above, but just _add_ the guid to the field-map of each
    node while syncing (generic "introduce"/"inject"-mechanism!)
- OEF: docu: declared databases can be used similarly to mountpoints known from
  e.g. the unix filesystem: $process->{bizWorks}->{<database-key>}->storageMethod
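  - a hedged usage sketch, assuming a database-key 'backend' declared in
    Config.pm ('getObject' stands in for whatever methods the handler actually exposes):

      my $storage = $process->{bizWorks}->{backend};
      my $object  = $storage->getObject($guid);   # hypothetical storage method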
- OEF: look at pef@sourceforge!!! - pef = perl enterprise framework - contact authors!!!???
- describe how to start writing (new) code for/with OEF:
  - start with a simple .pl-script
  - encapsulate the code inside the very same .pl-file into a special
    compartment to make it accessible from core OEF
    - maybe provide an external tool to do this job automatically?
      (e.g. cat example.pl | oef.pl --refactor=script-compartment --outfile=example_c.pl)
    - include it in Config.pm!?
  - move the code to its own process-namespace (e.g. "BizWorks")
    - maybe provide an external tool to do this job automatically?
      (e.g. cat example.pl | oef.pl --refactor=process --outfile=BizWorks/)
- attempt to start "OpenAccess" as a MS Access clone????? using... - Data:: - OEF:: - Perl::Tk and/or mod_perl/Oak:: WebXyz:: - POE - P5EE - some more code...... ;-)
- OEF::Scripting - binding for scripting via perl/python/javascript/php? (even an emulated/open vbscript???)
- OEF/FAQ.pod:
  - i can't find any getter/setter-methods in the source - where are they?
    -> don't worry - they are there, but hidden away by Tangram / Class::Tangram -
    please read their documentation to get an insight into the mechanisms used
    in the layers using tangram for storage
    - there are/will be some other "real" getter/setter-methods, or OEF will
      provide/wrap other class-generators as adapted/bridged handlers,
      e.g. Tie::SecureHash, Class::Contract, Class::Xyz, .....
      (try to integrate/move the utilization of Class::Tangram into this (new)
      layer as well....)
- introduce a mechanism to provide something like a "dynamic reversed
  mixin-inheritance" (from the view of perl's mixin-module available from CPAN)
  - purpose:
    - don't mixin plugin-subobjects into a common concrete parent-container-object, but ....
    - .... dynamically (via eval) mixin an abstract parent-container-object into
      an arbitrary instantiated plugin-object's namespace
    - this (maybe) will get you resources shared between applications and
      plugins, but namespace-separated methods on plugin-level, and ....
    - .... still lets you have common methods available to all plugins declared
      in the abstract parent-container-object (needed e.g. for 'load/unload' itself....)
      => hierarchical plugin namespace at runtime - sharing resources, common
      methods and (maybe) other preconfigured (helper-)objects
  - another idea/enhancement on top of this: dynamically mixin an
    arbitrary/given package into the _current_ __PACKAGE__
  - another idea/enhancement parallel to this (maybe better/easier to
    use/implement/merge-with-POE?):
    - provide each plugin with a meta-data-storage-container (kinda _SESSION/_STACK)
      which can/will get flushed on destroy, but ...
      ... can also be made persistent on demand/request, or ...
      ... sent to a remote location: would process-migration be possible with this???

        _start_proc($local);
        _pause_proc($local);
        _migrate_proc($local, $remote);
        _migrate_proc_meta($local, $remote);
        _migrate_proc_stack($local, $remote);
        _resume_proc($remote);
        while (my $status = _query_proc($remote)) {
            print $status->{state};
            last if $status->{ready};
            last if $status->{error};
        }
- write code for (automatic) documentation-purposes:
  - perl-filter to (hard) dump all _values_ of sub input- and output-variables
  - perl-filter to (hard) extract sub-names and attributes
  - perl-filter to (guess) all _names_ of a) passed-in (request-)variables
    (arguments, options) and b) passed-back (response-)variables
    ((processing-)results, boolean or arbitrarily encoded status)
  - perl-filter to (hard) extract additional in-/out-metadata
    ... how? already proposed somewhere - search the linkdb
- the guid should be generated in a core module in the Data::-namespace to be
  accessible by both Data::Transfer::Sync and BizWorks::xyz
  -> mkGuid -> Data::Storage::Sync
  - metadata: add ->{isGuidProvider}???
- Data::Transform::Deep:
  - rename 'expand' to 'deep_expand'
  - create a new sub 'merge' (from 'hash2object_traverse_mixin'), use it from 'hash2object'
  - use IterArray and/or IterHash???
- include the main namespace (BizWorks, BizWorks::Process) in Config.pm somehow
  ----> refactor the code to provide a declarative boot/startup
  => SYNOPSIS:

    my $app = OEF::Application->new(
        config     => $config,
        namespaces => {
            'BizWorks' => {
                databases => [qw()],
                packages  => [qw()],
            },
        },
    );
    my $app = OEF::Application->new(package => 'BizWorks');
    my $app = OEF::Application->new(script  => 'feed.pl');
    $app->boot();
    $app->run();
  propagation of config / further boot happens _after_ parsing the
  configuration. e.g. booting BizWorks is just declared in there - it will not
  happen "by default" any more => the door is maybe open now to load plugins
  from/to other namespaces/scopes besides BizWorks
- generic "makeSerial"-function (BizWorks::Process::Common) to make serials from some concrete objects - rule: capitalize first letter of object-/class-name - append digest of object-payload (patched Data::Dumper)
- patch Data::Dumper to respect e.g. Set::Object
- synchronization: don't delete all nodes when running "export": just delete
  the touched ones and indicate this via print "r" if $self->{verbose}
- transparently send tangram-objects via RPC::XML???
- introduce TTL and manual-purge-mechanism for HttpProxy
- introduce OEF, OEF::Process in nfo/ (Open Enterprise Framework)
  - the OEF-namespace:
    - OEF::Process (BizWorks::Process)
    - OEF::Script (xyz.pl, BizWorks::RunMe)
    - OEF::Core (BizWorks, BizWorks::Boot)
    - OEF::Request and OEF::Response? asynchronous?
    - OEF::Application (BizWorks)
    - OEF::Engine?
  - DOEPF? (Distributed Open Enterprise Perl Framework)? hmmm... no -
    what about PHP? Python?
- what about code in BizWorks::Process::Core??? maybe use Data::Storage::Handler::Tangram::sendQuery
- introduce mechanism for developer-machines to detect and propagate schema-changes to e.g. frontend-databases (configure this option/feature in Config.pm)
$Id: Queue.pod,v 1.5 2003/01/28 10:38:39 joko Exp $