See Estimate improvement value for background. Individuals and companies construct explicit lists of values as part of forming their identity; societies' sets of values form more organically. We use our values to decide which major projects to pursue.
See in particular:
And the quote:
> Intrinsic and instrumental goods are not mutually exclusive categories. Some objects are both good in themselves, and also good for getting other objects that are good. “Understanding science” may be such a good, being both worthwhile in and of itself, and as a means of achieving other goods.
Do you agree with this statement? That is, is the purpose of science only to serve other human interests, or is learning good in itself? If it’s not good in itself, then intrinsic and instrumental values are mutually exclusive categories after all.
Should you let yourself “learn” values (see the first paragraph of Learning)? We do learn values from each other, even in adulthood. When someone tries to shame you for focusing on one topic rather than another, they are essentially asking you to shift your values (on the assumption that theirs are better).
Identity, ownership, and investment#
It’s easy to mix up your value estimation functions with your identity; e.g. some people value money or relationships more than others. At the same time, taking on an identity through value estimation functions is one way of “taking ownership” which has historically been an effective way for people to achieve goals (e.g. through ownership of a company).
Prefer goals to values#
It’s hard to say whether our values are derived from our goals, or vice versa. In some ways, what we value in an abstract sense decides what we choose to work on. In other ways, we are what we work on; that is, the projects we freely choose to work on are a better indicator of our true values than our stated values.
We seem to have a natural tendency to prefer major projects to sets of abstract values; e.g. see the article Meaning of life. A generally stated goal might help us replace a more abstract system of values. We often find that people who disagree on the meaning of life (e.g. Christians and atheists) nevertheless share many values that allow them to work together in e.g. a business. Similarly, people with different life goals may still share a large set of subgoals (projects).
Values tend to be more abstract than goals and therefore often less testable. Our definition of values as being about the future set of projects we may work on avoids this issue, at the cost of considering many possible worlds that may never materialize (a more costly probabilistic value estimation function). Being more testable also helps us retrospect: did we use what we learned in a particular area? Did we fail to work on the projects we expected to because our values changed, because we discovered hidden costs or better ideas, or because we are not in control of our career direction? Even so, it’s still possible to compare an abstract set of values to the projects we ended up working on and look for a general match.
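The “probabilistic value estimation function” above can be sketched as a simple expected-value calculation over possible future projects. This is a minimal illustration, not a prescription; the project names, probabilities, and values below are all hypothetical:

```python
# Sketch: score a topic worth learning by its expected value across
# possible future projects, weighted by how likely each project is
# to materialize. All names and numbers are hypothetical.

# project name -> (probability it materializes, value of the topic to it)
possible_projects = {
    "self-driving cars": (0.2, 9.0),
    "robot maid": (0.05, 7.0),
    "internal tooling": (0.9, 3.0),
}

def expected_value(projects):
    """Expected value of learning a topic, summed over possible worlds."""
    return sum(p * v for p, v in projects.values())

print(round(expected_value(possible_projects), 2))  # 4.85
```

The cost the text mentions shows up here as the need to enumerate and estimate many worlds that may never materialize.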
A set of personal learning values (research interests) essentially defines the set of specific projects you would want to pursue. But you may not yet have the words to describe what you need to achieve the major life goals you are most interested in, preventing you from writing them in a list of research interests. Does it matter what words we use in the end, if they work? At the same time, you need to continually push yourself to understand the words of others that have proved valuable (and to communicate, e.g. to pass interviews).
In many cases (especially in statistics) you are learning synonyms for words you already know. This has major implications for whether you organize notes at all, and whether you rely on others’ notes or your own. If you don’t organize your own notes, you don’t know what you don’t know; that is, you don’t know where you believe the large unknowns (value) are, even if you don’t yet have words associated with that value (the words used by people in the field you want to get into). It also has implications for whether you do all the questions in a book, or only those that are more interesting to you.
You’re interested in the words other humans use when you feel they have valuable models from which you can discover the answer faster than via your own note taking, data collection, and exploration. You should expect it to be harder to remake discoveries on your own than to stand on the shoulders of others. If you use your notes only to ask the right questions, you can hopefully find an answer without rediscovering it yourself.
Wikipedia often defines a topic in a particular way (e.g. a Likelihood function) that may need to be redefined in some contexts to do more useful work. You can’t rely on Wikipedia for all definitions, not because it is necessarily wrong, but because what counts as “right” may depend on your goals.
It’s tempting to dedicate yourself or a team to a major project, such as self-driving cars or a robot maid. As with any project, we can only estimate the cost upfront. When goals get too large, we hedge our bets and instead choose a set of values (driving our learning) that will help us achieve what are likely the lowest-hanging fruit in the set of major projects we are interested in.
Said another way, our values are a way of driving learning (i.e. mental refactoring) to incrementally reach an important goal we may not even know exists yet. Everyone is invested in their own mental networks, and hedging your bets is generally speaking the safer option with respect to them.
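Choosing the “lowest-hanging fruit” among major projects can be sketched as ranking candidates by estimated value per unit of estimated cost, rather than committing everything to the single largest goal. Again, the project names and numbers are hypothetical, and both estimates are exactly the upfront guesses the text warns about:

```python
# Sketch: hedge bets by ranking candidate projects on estimated
# value per unit of estimated cost. Names and numbers are hypothetical.

# (name, estimated value, estimated cost)
candidates = [
    ("self-driving cars", 100.0, 1000.0),
    ("robot maid", 80.0, 900.0),
    ("lane-keeping demo", 10.0, 20.0),
]

def lowest_hanging_fruit(projects):
    """Return projects sorted by value/cost ratio, best first."""
    return sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

print(lowest_hanging_fruit(candidates)[0][0])  # lane-keeping demo
```

Under this ranking, a small project with a high value-to-cost ratio beats a grand goal whose cost estimate dwarfs its likely payoff.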
Many TODOs can go stale when the projects you care about most change, such as when you switch employers. That is, you need to start from the top again and re-analyze everything against the new set of projects of interest. If your values are relatively stable, then what you’ve learned in other contexts should transfer to the new projects.