G2 (Blog 2)
Puzzling: How Observations Are Accumulated Into Context
http://jeffjonas.typepad.com/jeff_jonas/2008/11/puzzling-how-observations-are-assembled-into-context.html

If the goal is to make substantially more sense of data, the only way forward is CONTEXT ACCUMULATION.

On a Slightly More Technical Level:
1. When I speak of Persistent Context, this is synonymous with the "work-in-progress puzzle" where the net sum of all previous observations and assertions co-exists.
2. Semantic reconciliation (e.g., identity resolution) is but one of several steps of contextualization processing, albeit one of the most important, as it asserts "same" or "not same" (see the first sketch after this list).
3. Contextualizing observations is entirely dependent on one's ability to extract and classify features from the observations.  Feature extractors fall woefully short today.  I've hinted at what I think will fix this in previous posts; more on this later.
4. Using new observations to correct earlier assertions is an essential property I have been referring to as Sequence Neutrality (see the second sketch after this list).  When systems favor the false negative, sequence neutrality most frequently discovers false negatives, while discoveries of previous false positives are few and far between.
5. Non-training-based, context-accumulating systems with sequence neutrality have this behavior: the puzzle can be assembled in a single pass, without brute force, where the computational cost of placing the last pieces is no greater than that of the first, with no advance knowledge of what the picture looks like and regardless of the order in which the pieces are received.
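
To make the list above concrete, here is a minimal sketch of a persistent context with semantic reconciliation. All of the names here (Observation, Entity, ContextStore, resolve) and the naive feature-overlap matching rule are illustrative assumptions, not G2's actual design: each arriving observation is either attached to an existing entity ("same") or starts a new one ("not same"), and the accumulated entities are the work-in-progress puzzle.

```python
# Illustrative sketch only: names and matching logic are assumptions, not G2's API.
from dataclasses import dataclass, field


@dataclass
class Observation:
    """One incoming record plus whatever features could be extracted from it."""
    source: str
    features: dict  # e.g. {"name": "J. Jonas", "phone": "555-0100"}


@dataclass
class Entity:
    """A node in the persistent context: the net sum of observations asserted 'same'."""
    observations: list = field(default_factory=list)

    def feature_index(self):
        """Merge the features of every attached observation."""
        merged = {}
        for obs in self.observations:
            for key, value in obs.features.items():
                merged.setdefault(key, set()).add(value)
        return merged


class ContextStore:
    """The work-in-progress puzzle: every observation is placed as it arrives."""

    def __init__(self):
        self.entities = []

    def resolve(self, obs):
        """Semantic reconciliation: assert 'same' (attach to an existing entity)
        or 'not same' (start a new entity), based on overlapping feature values."""
        for entity in self.entities:
            known = entity.feature_index()
            if any(value in known.get(key, set())
                   for key, value in obs.features.items()):
                entity.observations.append(obs)   # assert 'same'
                return entity
        entity = Entity(observations=[obs])       # assert 'not same'
        self.entities.append(entity)
        return entity
```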
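
And a second sketch, building on the classes above, of sequence neutrality: a later observation that bridges two entities corrects the earlier "not same" assertion by merging them, and the finished puzzle comes out the same no matter the order in which the pieces arrive. Again, the data and merge rule are made up for illustration.

```python
# Illustrative sketch only; extends the ContextStore classes from the previous block.
import itertools


class SequenceNeutralStore(ContextStore):
    """Context accumulation where later observations can correct earlier assertions."""

    def resolve(self, obs):
        # Find every existing entity the new observation overlaps with.
        matches = [
            entity for entity in self.entities
            if any(value in entity.feature_index().get(key, set())
                   for key, value in obs.features.items())
        ]
        if not matches:
            entity = Entity(observations=[obs])
            self.entities.append(entity)
            return entity
        # The new observation is the glue piece: merge every matched entity,
        # correcting the earlier false negative (they had been held apart).
        target, *rest = matches
        target.observations.append(obs)
        for entity in rest:
            target.observations.extend(entity.observations)
            self.entities.remove(entity)
        return target


if __name__ == "__main__":
    pieces = [
        Observation("dmv",   {"name": "J. Jonas",   "license": "D123"}),
        Observation("hotel", {"name": "Jeff Jonas", "phone": "555-0100"}),
        Observation("call",  {"phone": "555-0100",  "license": "D123"}),  # glue piece
    ]
    # The finished puzzle is identical regardless of arrival order, and placing
    # the last piece costs no more than placing the first.
    for order in itertools.permutations(pieces):
        store = SequenceNeutralStore()
        for obs in order:
            store.resolve(obs)
        assert len(store.entities) == 1
```

The permutation loop at the end is the point: every arrival order yields the same single entity, with the earlier "not same" assertion corrected the moment the glue piece shows up.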
