Humans are proposition-making machines.
When we learn, we connect new beliefs with our existing body of beliefs, as per the coherence theory of epistemology. Thus each datum of knowledge can be said to exist as a connection between two points of belief. The connection between two points can, once made, itself be acted upon as a new point of data. Thus, to use an old example, if I know that this rock is warm (D1), and that the sun is shining (D2), I can connect these two points (here with the purpose of empirical connection; purpose will be important later) to form the conclusion that the sun is making the rock warm (D3). Thus:

D1 + D2 → D3
D3 can be seen as the nexus between D1 and D2. With a little inductive reasoning, D3 becomes the belief "The sun makes objects warm". We can thus act upon our new proposition as if it were a data point itself, which indeed it is. We can use it to make new propositions, in conjunction with other beliefs.
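This mechanism can be sketched as a toy data structure. This is my own speculative illustration, not anything from the argument itself; the names `Belief` and `connect` are invented for the sketch:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Belief:
    """A data point (Dn): either empirical, or derived from other beliefs."""
    content: str
    sources: tuple = ()  # the beliefs this one was derived from, if any


def connect(a: Belief, b: Belief, conclusion: str) -> Belief:
    """Join two data points into a new belief, itself usable as a data point."""
    return Belief(conclusion, sources=(a, b))


d1 = Belief("this rock is warm")
d2 = Belief("the sun is shining")
d3 = connect(d1, d2, "the sun is making the rock warm")

# D3 is now a data point in its own right and can enter new connections.
d4 = connect(d3, Belief("the path is warm"), "the sun makes objects warm")
print(d4.content)       # the sun makes objects warm
print(len(d4.sources))  # 2
```

The design choice worth noting is that a derived belief and an empirical belief have the same type, which is exactly the point of the passage: once made, a connection is just another point.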
This idea is neither new nor unique.
A quick aside: our data points (Dn) can be either internal referents, meaning pre-existing beliefs (formed by the method above), or empirical phenomena procured via sense-data. This empirical data is not privileged (or given), meaning it is not a direct representation of the world-in-itself. When observing something in-the-world (an object or phenomenon) there are two parts: the purposeful act of observation, called sensation in psychological terms, and the data that we actually see (as phenomena presented to consciousness), termed perception. In between these two acts lies interpretation: our mind makes the sensation meaningful to itself, converting it into a perception presented to ourselves. In philosophy-of-mind terms this pure, uninterpreted sensation is called The Given, and as per Sellars' "Myth of the Given" we do not have access to this pure sensation, only to the interpreted perception. Thus each epistemic data point of an empirical nature is pre-digested by our cognitive apparatus. Another problem is the force of will in observation: we see what we choose to see, and not the world itself in its pleroma.
Learning can be said to be grasping: the seeking of data points to connect. Grasping is a willful activity. Learning is the active process of selecting Dn's to process through the epistemological network as a whole.
So, to quickly summarize: learning is the act of building propositions. This is something we all do constantly; it serves to place us in the world, and to make the world meaningful. Given its purely mechanical nature, it becomes absurd to say that one person is better at this internal epistemic action than another, since we all do it constantly. Intellectual judgments between individuals are therefore not a question of process, but of overall network: the quantity and quality of its connectivity. We now turn to the idea of quality, the overall quality of each point. We can define quality as the degree of precision with which each data point (Dn) corresponds to the World.
Intelligence, then, has nothing to do with the ability to make connections, since this is something we all do, but with the ability to select the items most fit for connection, with the goal of coming closest to The Given, that is, of cutting out interpretation bias to the highest extent possible (via cancellation between points?). So intelligence becomes more of a gathering skill, and a statement of functionality.
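The parenthetical "cancellation between points" suggests something like error averaging: if each observation carries independent interpretive noise, pooling many observations cancels much of it out. A toy numerical illustration of that statistical fact, my own construction rather than the author's:

```python
import random

random.seed(0)

TRUE_VALUE = 20.0  # the (inaccessible) state of the world


def observe() -> float:
    """One interpreted perception: the world plus independent noise."""
    return TRUE_VALUE + random.gauss(0, 2.0)


single = observe()
pooled = sum(observe() for _ in range(10_000)) / 10_000

# With independent noise, the pooled estimate sits far closer to the
# true value than a typical single observation does.
print(abs(single - TRUE_VALUE), abs(pooled - TRUE_VALUE))
```

The caveat, in the essay's own terms, is that averaging only cancels *independent* noise; a shared interpretive bias across all observers would survive any amount of pooling.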
This raises a question of framework.
I guess this can go back to my previous studies on the inherent subjectivity of epistemic systems. The fundamental 'laws of thought', as exemplified by the forms of logic, are true as forms, or frameworks in which true statements about the world can be produced. Thus, in the form of modus ponens:

If P, then Q.
P.
Therefore, Q.
If P is true, and the conditional is valid, then the conclusion Q must be true. This is inherent in human thought; modus ponens is universally valid as form. The problem is the selection of our Ps and Qs, which we can call a problem of data selection bias. Thus any proposition is only as good as its components, even if the form is valid. This goes down to the pure connection level (combining representations, in the language of Foucault). Thus, if we make the statement that:

A.
B.
Therefore, C.
Our C is only going to be as good as our A and B, which are chosen, not derived. So C can be erroneous, even if it follows necessarily from A and B.
So we can see that selection matters to our results, even when the data is processed through a valid form.
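The garbage-in, garbage-out point can be made concrete with a toy implementation of modus ponens. This is my own illustration: the form faithfully transmits whatever the chosen premises assert, so a badly selected premise yields an erroneous conclusion through a perfectly valid inference.

```python
def modus_ponens(p: bool, p_implies_q: bool) -> bool:
    """A valid form: granted P and (P -> Q), derive Q.

    The derivation is truth-preserving, but it preserves whatever
    truth the *chosen* inputs actually have.
    """
    assert p and p_implies_q, "premises must be granted before inferring"
    return True


# The world, which the reasoner does not consult when selecting premises:
world = {
    "the sun is shining": True,
    "sunshine warms rocks": True,
    "the moon is shining": True,
    "moonlight warms rocks": False,
}

# Good selection: the conclusion tracks the world.
warm = modus_ponens(world["the sun is shining"], world["sunshine warms rocks"])

# Bad selection: a false conditional is simply *asserted* as true
# (chosen, not derived). The form processes it without complaint.
asserted = True
warm_at_night = modus_ponens(world["the moon is shining"], asserted)

print(warm, warm_at_night)  # True True: the form cannot tell them apart
```

The form never fails here; only the selection does, which is the essay's point about data selection bias.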
None of this is new, either. We now need to discover how exactly data is chosen for processing, since choice (or the will to learn) is the most important aspect of learning.
So the open questions in this are:
-How does data selectivity work?
-What cues (in the world) would lead us to see what data is most fit?
-How can one minimize the error from interpretation if all of our data is non-privileged?
-What drives the grasping of the Will to Learn, and thus what is our selection bias?
And more esoterically:
-Is there something hidden or implied in the connection itself?
-And could this possible implication lead to absurdity via infinite regress?
Humans are proposition-making machines.