Subject: RE: [humanmarkup-comment] Processing Model Considerations
From: Rex Brooks (re...@starbourne.com)
Date: Jul 15, 2002 10:08:41 am
I don't have any major arguments with this right now. I am responding mainly to drop the extra parties I copied on my first post; I copied them chiefly to let them know that issues that could concern them were about to be discussed, leaving the choice of whether or not to follow the thread up to them.
For the most part, I agree. What I am mainly seeking in an initial look at the experiment is answered here, though I may want to take it a little further when I can get back to it. My concern is that we look to the semiotic processor experiment to see whether it can or should specify an order in which certain web processes need to be completed before moving on to local processing; hence my example. It is, as Len notes, more a modeling question than a security question, though I am not sure this particular aspect of the experiment has much to do with how the semiotic processor works locally. Mainly it is a question of computer bookkeeping--making sure that the deck is cleared correctly before launching.
In other words, once we have made sure that the rest of the connections are working and ready to receive whatever inputs our semiotic process may produce, do we need to let the servers know that a semiotic processor is about to be employed? This is what usually happens when the file header is read, and that is where the XML/web-wide implications occur. Do we need to specify some metadata about the processor? Should we leave a placeholder for that?
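To make the placeholder question concrete, here is a purely hypothetical sketch: every element and attribute name below is invented for illustration and is not part of any HumanMarkup draft. It only shows that a downstream process could read a declared staging order from a header before the processor is launched, using Python's standard ElementTree parser:

```python
import xml.etree.ElementTree as ET

# Hypothetical header fragment: all names here are invented
# placeholders, NOT part of any actual HumanMarkup schema.
header = """
<humlProcessingModel>
  <semioticProcessor name="example-processor" version="0.1">
    <requires service="identity-authentication" order="1"/>
    <requires service="connection-preflight" order="2"/>
  </semioticProcessor>
</humlProcessingModel>
"""

root = ET.fromstring(header)
proc = root.find("semioticProcessor")

# Read the declared staging order before launching the processor.
stages = sorted(
    (int(r.get("order")), r.get("service"))
    for r in proc.findall("requires")
)
print(stages)
# [(1, 'identity-authentication'), (2, 'connection-preflight')]
```

The point is only that a metadata placeholder of this shape would let servers discover, at header-read time, what a semiotic processor expects to have happened before it runs.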
I'm assuming that the semiotic processor is going to be a bit of a departure from the purely algorithmic processing to which Len refers here, and, as such, apart from the innards of the processor we are about to delve into, the stage needs to be set--or does it? The "or does it?" is what I'm hoping the experiment can tell us, and there the boundary question may or may not come up.
In any event, I would like to proceed. Let's play with this puppy and see what it does, ok?
At 10:17 AM -0500 7/15/02, Bullard, Claude L (Len) wrote:
I'm not sure I'm addressing the topic at hand, but I want to point out that some problems, such as security vetting, are tangential. I doubt we can do more than aid such systems, and even then only by augmenting existing means. There is no one-size-fits-all approach to security. Markup can help, but won't in and of itself solve this.
The primary problem of markup is pre-parse and pre-system. It is the human activity of symbol grounding: essentially, given a zebra and a giraffe, there is no overlap at the level of naming that would cause us to confuse one for the other. But given two spheres whose only difference is a small variation in size, if each is observed in isolation from the other, it is impossible without prior grounding to make that discrimination. This sort of categorical error occurs daily and may be solvable in learning systems.
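The two-spheres point can be illustrated with a toy numeric sketch (all numbers invented): a single observation with no reference scale is indeterminate, while the same observation against a grounded reference is trivially decidable.

```python
# Toy illustration of the grounding problem: two spheres whose only
# difference is a small variation in size (diameters invented).
SMALL, LARGE = 10.0, 10.4

def classify_in_isolation(diameter):
    """Without prior grounding there is no reference scale, so any
    single observation cannot be discriminated as small or large."""
    return "indeterminate"

def classify_with_grounding(diameter, reference=SMALL):
    """With a grounded reference, even a small difference is decidable."""
    return "large" if diameter > reference else "small"

print(classify_in_isolation(LARGE))    # indeterminate
print(classify_with_grounding(LARGE))  # large
print(classify_with_grounding(SMALL))  # small
```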
We can create a semiotic markup which essentially enables signs to be declared, nested, and categorized. But this is only the beginning: creating a tool for organizing named sign systems, be they concrete, gestural, or whatever. XML parsing only enables us to get these into forms that are processable; it says nothing about the post-parse processes that are to be applied to validate categorical processing. XML Doesn't Care. This isn't an XML problem; it is an XML application problem. XML tells you how to mark it up, not what the names are, why they are marked, or what they "mean".
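The "XML Doesn't Care" point can be seen directly: a parser recovers declaration, nesting, and categorization, but the names themselves are opaque strings to it. The fragment below is a hypothetical sketch (element and attribute names invented, not from any real sign-system vocabulary):

```python
import xml.etree.ElementTree as ET

# Hypothetical sign declarations: names invented for illustration only.
doc = """
<signSystem category="gestural">
  <sign name="wave">
    <sign name="wave-greeting"/>
    <sign name="wave-farewell"/>
  </sign>
</signSystem>
"""

tree = ET.fromstring(doc)

# The parser recovers the structure: nesting and names, in document order.
names = [e.get("name") for e in tree.iter("sign")]
print(names)
# ['wave', 'wave-greeting', 'wave-farewell']

# But the "meaning" is untouched: to the parser these are opaque strings,
# and any categorical validation is a post-parse, application-level job.
```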
Our first problem is to create the semiotic markup language, then to apply that experimentally. The critical test is to discover how well it helps, hurts, or doesn't affect the category learning problem.
There is a lot to be said for the position that human mental processes are holonomic, not algorithmic; but eventually a computer model must be the latter in its basic models, even if the former in its higher models. One can view this as a polarity-type problem: it is not solved, simply managed.
In the problems you pose, this means that the given filters or controls are applied. These may be emergent, in that the filters themselves have to be fabricated given a category which has been recognized but not yet understood. Identity is not an inherent quality outside philosophy; in a system, it is assigned. Assignment based on the uniqueness of a member in a set presupposes that one can perform the identification process.
That is easy until one gets a near-boundary member: identity at some level of nested sets is easy, but for a process that puts that member near a boundary with overlapping members, it is harder. Security systems use role-based assignments of privileges over data and processes, and that works reasonably well given a vetted process owned by a recognized authority. Recursion can occur (the Schrödinger's-cat problem), but that is an identifiable pattern.
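A minimal sketch of the near-boundary case, with invented thresholds: far from the boundary, assignment is easy; near it, with overlapping members, a forced assignment is exactly where local policy has to take over.

```python
# Toy near-boundary category assignment (boundary and margin invented).
BOUNDARY, MARGIN = 50.0, 2.0

def assign_category(value):
    """Assign a member to category A or B, except near the boundary,
    where overlapping members mean the assignment must be deferred
    to a local policy rather than forced."""
    if abs(value - BOUNDARY) <= MARGIN:
        return "near-boundary"  # overlapping members: defer to policy
    return "A" if value < BOUNDARY else "B"

print(assign_category(10.0))  # A  (easy: far from the boundary)
print(assign_category(49.5))  # near-boundary (the hard case)
print(assign_category(90.0))  # B
```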
So yes, this is a process problem in very many ways, and not universally solvable (local policies prevail and one ends up fielding a configurable toolkit), but I assert that the solution is not in semiotic markup itself, although a symbol-grounding system can help implement local policies.
-----Original Message----- From: Rex Brooks [mailto:re...@starbourne.com]
First, no, I'm not even close to done with the Wolfram book. It is clear that it is going to take longer than I thought, and given my own track record with regard to getting back to unfinished business, I decided to stay with the approach of annotating as I go, rather than pushing through a first reading to get the gist and then going back for depth. That said, I can't let other work languish while I'm busy deciding whether this book is going to necessitate a change in my thinking or our work, taken separately or together.
So, as we, in HumanMarkup, get ready to launch into Len's proposed experiment with a semiotic processor, I offer this article:
A question I think we need to ask is: Do we need to specify a processing order within a semiotic context for HumanMarkup in an a priori fashion for any application document using HumanMarkup?
I have long thought that some sort of preflighting of resources for any given application-specific XML operations on the web needs to be addressed even before, or perhaps simultaneously with, parser validation of a document invoking those operations. Our position on this needs to be noodled out before we start thinking about how or whether HumanMarkup-based or -supported application documents SHOULD order parsing of application-document-specific operations.
In practical terms, this means, for example, that a web service requested by an end-user needs to have all connections tested for reliability, security, and availability before the end-user's HumanMarkup-enhanced personal preference information is passed, and that this needs to occur immediately after any single-sign-on identity authentication, which takes place before a connection to a service is confirmed. I mention this in concrete terms so that we know we are talking about clearly concrete issues, and not just a theoretical experiment. So we need to cast the experiment so that it answers these questions, in addition to more purely intra-HumanMarkup concerns.
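The staging order just described can be sketched as follows. Every function here is an invented stub, not a real API; the only thing the sketch commits to is the ordering: authenticate first, preflight the connection second, and pass the preference data only after both succeed.

```python
# Sketch of the staging order: SSO authentication, then connection
# preflight, then (and only then) the preference data is passed.
# All function bodies are invented placeholder stubs.

def authenticate_single_sign_on(user):
    return True  # stub: single-sign-on identity check happens first

def preflight(connection):
    # stub: test reliability, security, and availability of the connection
    return all(connection.get(k) for k in ("reliable", "secure", "available"))

def request_service(user, connection, preferences):
    events = []
    if not authenticate_single_sign_on(user):
        return events + ["auth-failed"]
    events.append("authenticated")
    if not preflight(connection):
        return events + ["preflight-failed"]
    events.append("preflighted")
    # Only now is the HumanMarkup-enhanced preference data passed.
    events.append("preferences-sent")
    return events

conn = {"reliable": True, "secure": True, "available": True}
print(request_service("rex", conn, {"lang": "en"}))
# ['authenticated', 'preflighted', 'preferences-sent']
```

If any preflight check fails, the sequence stops before the preference data leaves the client, which is the concrete ordering constraint the experiment would need to confirm or refute.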