

CULTURE



 

DIRECT DEMOCRACY

 

          Ancient Athens is called the birthplace of democracy, and the influence of its legendary government casts a long shadow.  Free, native-born men who owned property were known as citizens, and they were allowed to join the assembly on the hilltop called the Pnyx.  There they proposed laws and actions, spoke about issues in turn, and cast their votes to run their city-state.  The number of men willing to attend did not always meet the quorum, so rules were made to force people to show up.  The story is that two slaves would carry a rope dripping with red dye through the agora, and any eligible man whose cloak was touched by the dye had to head straight to the Pnyx or else face punishment.  This system of direct democracy was not perfect – it was not inclusive, it existed in a very unequal, slave-owning society, and participation was apparently lacking.  Flawed as it was, the Athenian ideal of direct democracy stood as a model through the ages.  People suffering under the dictatorial rule of kings and emperors could read about a long-ago time when the opinions of ordinary citizens counted for just as much as the opinions of the elites: one person, one vote.  This dream of equality continued to kindle the Western imagination across the millennia.


          When modern republics formed, they looked back to Greece and Rome for inspiration.  From Rome they took the model of representative government with three branches, their powers balanced against one another and supposedly functioning according to a constitution.  From Greece they took the democratic aspect of their systems.  Every eligible voter cast his vote, and these were tallied equally: one person, one vote, independent of wealth.  These votes were not for policies or laws; they were votes for politicians to represent them in government and make the laws and policies on their behalf, supposedly reflecting the will of the people.  Since the population of these republics was too large to come together regularly in one place to discuss issues and make decisions, it was assumed that Athenian-style direct democracy was impossible.


          Over the last two centuries, a number of important improvements have been made to our “democratic republics.”  Property requirements to vote were eliminated.  Voting rights were expanded to include women.  Universal suffrage allowed people of all races to participate.  The Progressive Era created avenues for citizens to directly affect legislation and office-holders through initiative, referendum, and recall.  In Australia, voter participation has been over 90% for the last century because voting is compulsory, with fines imposed for not voting.  Some might complain that this sort of rule takes away a person’s freedom, but should we really have the right not to participate in our own governance?  What kind of democracy is it if people don’t vote?


          Voter participation remains a problem.  Without automatic registration, political games ensue as parties attempt to disenfranchise those likely to vote for the opposition.  Polling places and voting requirements are being adjusted to make it harder for certain people to vote.  Even when there is no barrier to voter registration or the polls, people do not feel involved or motivated enough to properly maintain a citizen-run government.  In most countries that hold elections, somewhere between 25% and 50% of the eligible electorate fails to cast a vote in any given election.  In close elections with a small turnout, we may in some cases be allowing a quarter of eligible voters to decide things for the rest of us.  One elegant and effective solution would be for regional governing authorities to issue co-op charters with the written understanding that every member 18 and over who is not incapacitated will vote in every poll.  Each member who joins a co-op would need to sign a promise to do this.  The manner in which each co-op enforced this would be up to its members.  Voters might also be required to prove basic understanding of issues every five years or so, or to take some educational classes, all according to the wishes of the majority of co-op members.


          The solutions to our plethora of problems must be many-sided and nuanced, but one big part of the answer can be found by looking back to ancient Athens, where it all started.  Our assumption since the 1700s has been that direct democracies can’t work because our modern populations are too large.  This might have been true 250 years ago, when people had to physically gather to discuss issues and vote, but today, with modern computer technology, direct democracy is entirely possible.  If all citizens 18 or older were automatically registered to vote and required to be involved, online forums could be created that allowed everyone to learn about issues, propose legislation, discuss and debate, make amendments, and finally cast their votes.  Systems could be created to ensure that votes were not altered or hacked, so voter fraud would be next to impossible.  Voter participation would be near 100%.  There would be no more sense of alienation among the electorate.  There would be no more political gridlock and no more government inaction despite the will of the people for immediate action.  If cultural expectations made it clear that everyone needed to be knowledgeable about current issues, and if educational systems and community practices evolved to facilitate this, then the actions of the government and the will of the informed electorate would be one and the same.  In a virtual sense, we can now all meet on the Pnyx to make our voices heard.  Of course we can’t be everywhere at all times.  We can’t know everything and follow every aspect of government workings.  Systems would need to be devised to create committees (either with volunteers, on a rotational basis, or according to whatever protocols the majority determined to be best) for different projects or types of legislation as well as a fair justice system. 


       Technologically-enabled direct democracy is the only way for us to move forward and realize the dream of an effective, responsive, and just government.  In such a system, the people themselves would be the government.  They would no longer vote for others to represent them; they would primarily vote for policies and laws, and only secondarily select the right people to carry out these policies and enforce these laws (not politicians; technicians, engineers, law enforcement officers, court officials, etc.).  Computer technology and the internet are capable of doing far more than the tasks for which we currently employ them; they can revolutionize government and allow us to combine the promise of ancient Athenian democracy with the norms of a multiracial, multicultural, modern society.  We can break out of our current cycle of apathy, corruption, stagnation, gridlock, inefficiency, voter disenfranchisement, and general inability to deal with the problems that face us.  This can only happen if we redesign our systems of government to empower the people themselves to run the show. 


 

A WORLD BUILT ON INCREDIBLE EVENTS


    The suggestion that we will soon enter a Golden Age sounds too incredible to believe.  As a historian, I want to point out that incredible events have made up our world from start to finish.


    Never mind the odds that existed against the formation of the universe, stars, planets, the protective magnetic field around the earth, and the development of life on our planet.  Never mind the amazing merger of mitochondria with early cells that made complex life – and eventually plants and animals – possible.  Never mind the unbelievable growth of pre-human cranial capacity.  Let’s just look at a few of the events in recorded history that would have seemed like fantasy prior to their occurrence.


    In ancient Greece, when the massive Persian Empire invaded, any outside observer would have written off the little Greek city-states.  The Battle of Marathon and the Battle of Thermopylae irritated and humiliated the Persians, and then the Battle of Salamis damaged their navy to the point that resupplying their forces in Greece became difficult.  Finally, the Greeks banded together and decimated the Persians in the fantastic victory at Plataea.  Then, a century and a half later, Alexander the Great had the gall to turn on the Persian Empire.  After winning two battles against the Persians, he faced an enemy army perhaps five times larger than his own puny force at Gaugamela.  In a battle that defies logic to this day, he won against all odds, and Greek culture was extended to Afghanistan and the Indus River Valley.  Because of this cultural mixing, Buddhist statues in Tibet, Vietnam, China, Japan, and Korea are all based on the realistic Greek style of art.


    A hundred years later in China, a look at the map of the time would have led one to believe that the large state of Chu would probably take over the other kingdoms and unify the Hua people.  If not Chu, the victor would probably be Qi or Wei.  Against all odds, the small, backward, semi-barbaric state of Qin conquered all the rest.  The short-lived Qin Dynasty collapsed into chaotic rebellions, and in the end, the two main rebel leaders were Xiang Yu, a young man from a Chu military family, and Liu Bang, a commoner in his 40s with a force one-tenth the size of Xiang Yu’s.  Against long odds, Liu Bang wore down Xiang Yu’s forces over years of fighting, and this man, the least likely ruler you could imagine, established the Han Dynasty, creating the foundation for all the Chinese dynasties to follow.  To this day, the people of China call themselves “Han people” after the name he invented for his dynasty, and Liu Bang is known as the "Great Ancestor."  Chinese writing was codified at this time and is still known as "Han" characters.  Confucianism, which was almost destroyed under the Qin, became the state philosophy during the Han Dynasty, and it has influenced all of East Asia to this day.


    If we went back to early Italy and looked at a map, we would expect the Etruscans to dominate the peninsula.  Rome barely existed.  Then, this little city conquered its neighboring city-states and forged alliances of a new sort, offering most of the rights of citizenship to its one-time enemies.  Rome was then both a city and a republic spanning many cities.  Through sheer force of will, this republic, and later empire, expanded to dominate the area from England to Israel, and from the Black Sea to Morocco.  If we looked at the religion of the empire, we would pity the poor, persecuted Christians, but never expect that their faith would become the main religion of the empire.  Through persistence and slow indoctrination, this small sect turned the tide and became the primary religion of the whole region, and later the main religion of the whole world.


    If anyone had looked at the map at that time and attempted to predict where the world’s next major religion would come from, the best guesses would have been heavily-populated areas like China, India, or the Mediterranean.  Yet, lo and behold, about a hundred years after the fall of Rome, in the sparsely-populated desert peninsula of Arabia, Muhammad (peace be upon him) claimed to have received a revelation from the angel Gabriel.  The religion of Islam exploded onto the scene in the most unexpected way, taking both the exhausted Persian Empire and the Byzantine Empire completely by surprise.  Within a few hundred years, this new religion extended from the Atlantic Ocean to the Indian Ocean, and later all the way to the western Pacific.


    After the fall of Rome, the Middle Ages were a time of interesting contradictions.  Europeans were backward in many ways but were thirsty for knowledge.  They discovered, of all unlikely things, the lost writings of Aristotle (translated via Arabic into Latin) and began to objectively question everything, including the Church.  Europeans were technologically far behind the Chinese and the Islamic World, yet they were in love with contraptions – so much so that by the 1300s, long after the Chinese had dismantled their own giant clock (the first of its kind in the world, made in the 1000s) because they didn't know what to do with it, each town in Europe clamored for its own clock in the town square.  The simple metal spring, ironically invented by the Romans (the least inventive people of all), was transformed into a coil and used to store and release energy slowly like a battery.  When used to power a clock, this opened up huge potential for miniaturization, and soon European engineers were making the most finely-tuned precision devices in the world.  The Renaissance was another unlikely surprise that no one could have predicted or imagined.  It created the idea of progress, and, along with the Aristotelian view of the world, led to the Age of Reason and the development of science.


    The backward British Isles of the early 1200s were the last place in the world one would have expected to create a democratic parliament, but because of indignation at the king's excessive taxation, the Magna Carta was signed in 1215.  The ideas of English liberty expanded until they were turned against the mother country by the colonists in the American Revolution.  When the shot heard 'round the world was fired on Lexington Green, it must have seemed a foregone conclusion that the Americans would be crushed by the mighty British army.  Like Alexander the Great, George Washington had that blend of magic and luck needed to survive and win against all odds.  From the Revolutionary War onward, the story of the world has largely been the story of expanding American influence.  Once again, the speck on the horizon that no one believes will materialize has become the foundation upon which all that follows is built.


    Ordinarily, the safest bet is to expect that tomorrow will be pretty much the same as today, but if history teaches us anything, it is to expect the unexpected.  Sometimes drastic, sudden changes do take place.  According to numerous prophecies, we are living in one of those times right now.  The first step is believing that positive change is possible.  Don’t fall into the trap of thinking that we cannot solve our problems and create a Golden Age.  We can do whatever we put our minds to.  Once the idea exists, it is only a matter of taking action to make the dream a reality.  As it says in Acts 4:11, the stone which was rejected by the builders will be made into the cornerstone.  The people of the future will thank us for it.
 



META-PRIVATE AND MINI-PUBLIC SPACES

 

        The culture wars rage over public spaces.  What books should be in our public libraries?  What should be the policies for our public bathrooms and sporting events?  What should the curriculum be in public schools?  How should public accommodations laws be interpreted when religious views come into question?  The battle-lines of these arguments are based on our current conception of what is public and what is private.  As it is, the only private spaces are your own house, your car (to a lesser degree), and your own mind (so long as you keep your mouth shut in public).  Everything else in the world, from the sidewalk line onward, is pretty much public space.  But what if there were an entirely new paradigm for separating public and private, so that we had many new layers in between what was purely private and what was fully public?


        If we redesigned our socio-economic system to create democratically-run cooperative communities, and people joined these communities voluntarily according to shared beliefs, we would be living among people who agreed with us in most ways.  Let’s look at some examples of the cultural norms that would meet with the approval of different groups.  In Pennsylvania, the Amish are conservative, pacifist Christians who refuse to use modern technology.  In Canada, Hutterite communities are conservative Christian communes.  They are a bit more open-minded than the Amish in the use of technology, but are not willing to accept any non-traditional culture, especially where gender roles are concerned.  Some church communities are more liberal, and endorse same-sex weddings.  People who belong to the Humanist Society might form a community that was atheist and focused on scientific and artistic accomplishments.  This is only the tip of the iceberg.  The full spectrum of human cultural opinions is vast: it includes all religions and all aspects of life from diet to education.  If you put conservative Hutterites and progressive Humanists in one community and asked them to agree on dress codes, cultural norms, and school curricula, you would be asking the impossible.  This is partly why the Hutterites form their own communities.  Understandably, they want their own space.  This may seem strange to most of us, but the truth is that we should all take a cue from them (with a twist).


        While I strongly urge people not to create insular groups like the Hutterites, but instead to form intentional cooperative communities where members of all races and religions are represented, it is also true that people need a mechanism to group themselves with others who share similar standards.  The humanist family who wants to provide a non-judgmental environment where their children are free to determine their own identity should be allowed to do so, but not at the expense of the Amish family next door that does not want their neighbor’s liberal views to be normalized in their household.  Both have a right to live as they see fit.  Until people are 18, their parents have the legal authority to determine their upbringing, and they should be allowed to form the best community and culture possible with like-minded people.  It would be relatively easy to find people who agreed with you in terms of cultural attitude but who came from a broad selection of religious backgrounds – Christian, Muslim, Hindu, Buddhist, Sikh, etc.  You could find people from all these religious groups who were either (A) vegetarian, anti-firearm, and LGBTQ-friendly, (B) meat-eating, pro-gun, and in favor of traditional gender roles, or any permutation of positions on these three issues, or any other option.  The same range of choices could exist for community policies about technology use, social media use, cell phone use in public, frequency of shared meals, dress code, acceptable language in public, amount of community service work required, shared traditions, expectations about stating one's pronouns (or not), and bylaws of the community in general.  We do not have to live in places where some people can swear loudly in front of kids or wear a shirt with a vulgar word written across it (unless we choose to live in such a place); we can make shared covenants with our neighbors and hold each other accountable.
For all choices in lifestyle, there can be a menu of options, created by members themselves, and in cases where no community exists that fits your preferences, it might be time to try and start a new one.


        These co-op communities might or might not be fenced off and gated, but the important thing about them is that they would provide a new kind of zone between our old definitions of public and private.  When outside your home but within the boundaries of your co-op, you would be in public, yes, but only within the community of your peers and friends (plus guests).  The rules for behavior would be those agreed upon by all members of the community.  Everyone would be on a first-name basis with everyone else.  If you saw something that upset you in your community, you would not just turn away, and neither would you immediately call the police.  You would bring everyone's attention to the issue, go over the shared covenant, and get the issue resolved democratically.  This is what you could call a meta-private space: not your own house, per se, but shared property that functions according to an agreed-upon set of rules.

 

       If a co-op were part of a network, the network could create institutions to serve only those members.  These could be educational, recreational, medical, research-related, etc., creating even more options for spaces where people can be with others who agree with them about codes for living.  Imagine universities or other institutions of equal size created by and reserved for the use of the group or groups that supported them.  A retreat center could be as spectacular as Hearst Castle, made for use on a rotational basis.  Conference areas could be combinations of hotel, garden, and inspirational architecture.  Roman bath complexes could be re-created.  Projects could be undertaken to build historical re-enactment zones for students.  These would operate according to rules and norms that suited their members; they would not have to try to please everyone's sensibilities.  In more conservative areas, people would not be expected to state their pronouns.  In some of these zones, it would be understood that all the food would be vegetarian and organic.  Some spaces would have more of a rustic, cowboy, hunter's lodge atmosphere.  Some could be like monasteries and others progressive havens.  Others would mix and match preferences in new ways according to the wishes of their supporting communities.  These sorts of zones would be another layer of the onion, what we might call a mini-public space.  You would be with people who agreed with you in general about certain things, but not to the same degree of specificity as the members of your own co-op.  With the option of getting your entire education, employment, and much of your business needs fulfilled within the bubble of these sorts of spaces (if you so chose), many of the conflicts and lawsuits we experience today could be avoided.  Sometimes good fences make good neighbors.


       In larger environments between these community spaces would be truly public spaces, as we know them today.  These would be places where democratically-determined rules had to be followed according to the vote of everyone in the region (much as we follow state or federal laws today), and where people with vastly different attitudes toward life had to get along together.  The main principle here would be to follow rules of civility and decorum, and to treat others' beliefs with the respect that you would like them to show yours, no matter how deep the disagreement may be.  The culture wars we have today would be lessened because everyone would have more spaces in which to live as they saw fit according to their own set of values and beliefs, more often surrounded by people with whom they shared basic principles about codes of conduct and the kind of environment they consider ideal.  Instead of asking people whether they approve of a law – essentially asking what standard they themselves think is appropriate, and then passing laws to force all others to live by that standard (laws which would need to be followed on co-ops as well) – the more appropriate question would be, “Is this issue so crucial to protecting human rights that I must limit other people’s freedom in this particular area by making a restrictive regulation?”  If the answer is no, but we personally feel that something is wrong, we can refrain from making a law against it, but still consider making a co-op rule against it.  Some drugs, for example, are so harmful that their use can be banned everywhere, while others may be allowed on some co-ops, as the residents see fit.  Another example is a dietary or animal-rights-related rule, like the ban on eating meat that once existed in Japan.  A dietary rule should be a personal or group decision, not a nation-wide decision.
If a co-op declines to pass a certain rule that you feel is absolutely necessary, you could consider changing co-ops to find a more suitable place.  No one will ever find a community that fits their preferences 100%, but in this proposed system of designer co-ops, one can come close. 


       We all like the idea of live and let live – right up to the point when our neighbors do something that irritates us.  As the culture wars rage, it seems that we are attempting to legislate morality for each other.  Political theater is becoming the norm, and performances of outrage meant to gain viewers or posture for one’s political base have unfortunately replaced civil debate.  As long as our agreed-upon norms end at the property line, everything in the remaining public sphere must be determined in this blunt, inexact way, driven by forces that cannot lead us to good decisions.  The system is like a watch repair shop whose only tool is a sledgehammer.  As it is, everything is either black or white.  Until we redesign our communities to make them more than just collections of houses filled with people who don't know each other or have anything in common, there can be no nuanced shades of gray.  We need a spectrum of community types that reflects the diversity of the population, and people need the freedom to choose which of these they would like to live in.  We need more space to be ourselves without having to reach a consensus with others who think completely differently, yet live beside us cheek by jowl.  Under such improved circumstances, it will be a lot easier to live and let live.



AIMING FOR EDUTOPIA

 

        When moving to a new place with kids, one has to be aware of the schools in the area.  That’s because, unless you are rich, you will probably be sending your kids to a public school, and these operate according to attendance areas.  You are not asked which school you want to attend; you are told which school your kids will attend.  You might prefer a STEM school, an arts-based school, a history-based school, etc., but these choices are not offered.  As in the old Soviet Union, you get no choices.  Why is it that Americans become so upset over the requirement that they have insurance so they can see a doctor, calling this socialized medicine (even though it leaves them a choice of which doctor to see), but have no qualms about being told they have no choice in which taxpayer-supported school they will attend?  It is all determined by your address.  In Moscow on the Hudson, Robin Williams played an immigrant from the U.S.S.R. who had waited in line for coffee in the old country.  When he saw the selection of coffee in an American supermarket, he swooned in disbelief.  Strangely enough, we are not strangers in a strange land; we are Americans living in America, and when told there is only one flavor of education (which we have to pay for ourselves through taxation), we are okay with that.  Are you okay with that, comrade?

 

        With the exception of those who live in spots where a few charter schools have been set up as a token gesture (or as a way to weaken unions), the vast majority of us have no real choice when it comes to education.  The idea is that the people themselves decide by electing school boards, state superintendents, etc., but the reality is that once the system was set in motion, it became easier to steer an iceberg in the opposite direction than to make serious reforms to the public school system.  Parents have expectations of child care, unions have contracts, millions of employees are in place, thousands of schools are up and running, and so forth.  The system isn’t about to go from having one flavor of ice cream to offering 31 flavors overnight.  But if there is no choice in education, how can the system possibly meet all the needs of its students and their families?  We are trying to accommodate parents with political beliefs all over the spectrum and students with interests and learning styles that run the gamut.  How can we give everyone the best experience possible in a single classroom?

 

        When Europeans created universities in the Middle Ages, there were two main models: the Paris model and the Bologna model.  In Paris, a group of teachers got together and offered a course of study to students.  When the students passed the teachers’ exams at the end of the course, they received a certificate called a degree.  In the Bologna model, groups of students came together and hired teachers to tutor them so they could receive their degrees.  In the Paris model, the teachers made the rules, and students who didn't follow them could be punished or expelled.  In the Bologna model, the students made the rules.  Lifespans were shorter then, and college students were younger – usually in their early teens.  One of the rules the students wrote into the Bologna system was that teachers had to stop teaching the instant the bell rang to end the class – or else be punished.  Today, we have a weird, hybridized system in which the administration of a school (and district) makes its own rules but, since it is funded by the public, remains subservient to the whims of the public.  Professional educators are told what to do and how to do it by people who have never tried teaching and have no degrees.  The public is promised under the Tenth Amendment that they will have the power to create their own schools, and yet this guaranteed ability to decide your region’s education is exercised through such Rube Goldberg contraptions that it has resulted in a one-size-fits-all school system that fits no one perfectly.

 

        If I go to a restaurant and I dislike anything about it, I have the freedom to avoid that place again.  I may never find a restaurant that meets my tastes 100% perfectly, but I can keep looking.  In the end, I have to either settle for the best of all available restaurants, cook at home, or open my own business.  This is the beauty of the Paris system.  It is designed by professionals.  They have real-world experience and believe in what they are doing.  They offer services for a fee, and if you don’t like it, you can go somewhere else.  With competition, there is a wide selection, so the students and their families have plenty of choices.  The teachers are also the administrators, and, like restaurant owners, they must be concerned about content, quality of delivery, and customer satisfaction.  If we tried to guarantee that no diner would be left behind by eliminating supermarkets, requiring that everyone eat at their local corner restaurant, and then having all the diners in that sector vote for the chef they liked best, you can imagine how well that would turn out.  If all restaurants were run by the government’s Department of Food, this would not be a very good solution either, but this is essentially what we do in America, the supposed land of the free.

 

        We all love choices, so why is it that we surrender this freedom of choice when it comes to education?  Why not allow teachers to group together according to the educational system they believe in and let parents choose among them?  Students will opt for fun and ease, but parents will emphasize test score improvement and college admission rates.  These negotiations between parents and students can be settled in the home.  Once a student arrives in class, it would pretty much be (from the teacher’s point of view) my way or the highway.  Students would not cuss the teacher out on Monday and then be back in class on Tuesday.  Students who don’t want to be in a particular school could go elsewhere and make room for someone who does want to be there.  Schools should have the right to refuse service to students who could not follow the rules, and new schools – more focused on discipline, or using unorthodox approaches – could be created for students who couldn’t find any other school that fit.  Where there was no demand, supply would shrink or adjust accordingly, and where there was demand, new supply would rise to meet it.

 

        All of this would work most smoothly in a society based on intentional co-op communities.  In an environment like this, people would have more equality and group assistance, so the system would not devolve into elitism.  Schools would be smaller and better-suited to meet the needs of students and families.  The focus would be on making sure every student got 90-100% of the skills and/or material under their belt before they moved on.  Staff would make certain that student attitude, behavior, and citizenship were excellent, instead of being forced to tolerate egregious abuses because the conveyor belt is moving too fast and there are too many limits on the interventions that can be applied.  Instead of resembling prisons, schools of the future could be a cross between Disneyland and an interactive science museum.  Students would either be fully engaged or else they would move on to another school that fit them better.  Students would no longer be allowed to be disruptive, disrespectful, or just take up space and do no work.  If they learned no more in school than the fact that these basic rules were non-negotiable, it would in many ways be a massive improvement over the current system.  People who grew up with an education like this would probably know more when they finished high school than the average college graduate does today.  Isn’t that the point of an education system: to educate people and improve over time?  Most states have been doing things the same old way for almost 200 years now.  Does it look to you like things are getting better?

​

Julius Caesar

 

POLITICS WITHOUT THE POLITICIANS

​

        You may have noticed that the election (or appointment) of a single person can radically alter policies that affect hundreds of millions of people.  These small, seemingly random choices can mean the difference between life and death for many.  For others, it can mean tremendous changes in lifestyle, often in the most negative sense.  Whether a position of power is held by candidate A or candidate B (who has diametrically opposing views) comes down to voter turnout, media spin, voter suppression, gerrymandering, various dirty tricks, disinformation, and social media algorithms.  It seems we might as well just flip a coin. 

​

        The problem isn’t democracy; it’s a lack of democracy.  It’s true that the people could be better informed and educated, but they do have opinions, and those opinions should be valued.  We have a system wherein we select people to run the government for us, and then we are forced to choose between them in a two-party system where most people feel left out.  We select between the two top candidates based on a combination of their policies and their personalities.  Unfortunately, it is human nature to pick based on image.  We assume that the image reflects the person’s true personality.  In actuality, the image the public sees has usually been crafted by public relations managers, and may be very misleading.  The policies of these politicians are often unstated, unclear, based on connections, formed by telling the base what they want to hear, totally impractical, or totally arbitrary.  The leaders, once elected, are under no obligation to actually follow through with the policies they promised the electorate.  The whole system seems more suitable to the 1800s than the 2000s.

​

       With today’s computer technology, we should vote for policies, not people.  When we have decided in a democratic fashion on the policies we want, we can then hire or appoint people, according to a predetermined system, to enact these chosen policies.  Today, an issue as momentous and morally charged as whether or not America will continue to support a democratic Ukraine against Putin’s invasion may be entirely decided by our choice for president.  Add to that the ability to appoint judges, control the military, propose a budget, wield veto power, etc., and you begin to see how our system is like a giant roulette wheel.  This is far too much power for one person to have.  These matters are too important to be bundled into one popularity contest and decided in one vote.  When we marry someone, we must accept a package deal.  When we craft public policy, economic policy, foreign policy, the political persuasion of judges, etc., it does not have to be done this way.

​

       Why should one person be the standard-bearer for all those who hold a certain political view?  This runs the risk of creating a cult of personality that eclipses both party and belief.  The strengths and weaknesses of the party chief become all-important for the survival of the nation.  Their successes or failures, witticisms or foibles, cleverness or stupidity, wisdom or psychoses all become inflated.  Everything they do or say becomes a battleground for partisan propaganda.  Those who are allied to the emperor cannot bring themselves to point out that the emperor has no clothes, even when he strides down the avenue in his birthday suit.  Objective truths become a subjective set of “alternative facts,” and we reach a twilight zone state where we can agree on nothing and public officials get death threats for merely stating facts and doing their jobs.  Whether it is the sun or the moon in the sky depends on which party you ask and which ruler is in charge at a given time.

​

       Julius Caesar is one of Shakespeare’s greatest plays.  The complex character of Caesar and the dilemma facing Brutus make it a classic, timeless tale.  Caesar is caring and magnanimous, but he is also an egomaniac and a dictator.  At what point does concern for the future of one’s country outweigh personal loyalty?  What will happen if one chooses to act against a potential tyrant?  What will happen if one chooses not to act?  Are the fears of tyranny overblown?  Caesar is a patrician, but he is loved by the masses.  It is ironic that his main supporters are those who have nothing in common with him, and those who fear him the most are members of his own class.  His assassination was the most famous in history.  It was the subject of Marc Antony’s funeral oration, where he tactfully spun the incident not as the just execution of a tyrant, as Brutus claimed, but as the murder of a loving and just leader.  Caesar’s death is controversial even today.  Was he killed because he was a dictator for life, threatening the republic itself, or because he offended the elite by threatening to reapportion wealth and power in favor of the poor?  In 44 B.C., Romans had to take sides.  They had to be either for Caesar (and with Antony and Octavian), or against him (and on the side of Brutus and Cassius).  In the 2020s, we should be able to run our countries in a more nuanced way.  We do not have to be either for Caesar or against him.  From here on, we can do without any more Caesars.

​

 

MEDIA RESPONSIBILITY

 

       Once upon a time in America, there used to be an FCC regulation called the Fairness Doctrine.  If a radio or television station had a story with a slanted viewpoint, a person or political party with an opposing view had to be given equal time to make a counter-argument.  This was repealed in 1987.  The idea was that this doctrine was limiting people’s freedom of speech.  Once upon a time, there used to be an agreed-upon, common-sense understanding that “news” meant a reality-based description of current events, but this understanding has also disappeared.  Certain people and corporations have played fast and loose with this gentleman’s agreement that a news agency must actually contain news – and report it in an unbiased fashion.  Now, the fox has moved into the henhouse and turned it into a propaganda factory with the label “news” still on top of the signboard.  When there is an event that everyone is covering (a fire, a hurricane, an earthquake), these propaganda stations cover it about the same as everyone else.  This occasional correspondence between their reporting and the undeniable reality on the ground lends credence to the lie that they are a reliable reporter of the facts.  Whenever there is a lull in breaking news, or whenever they can slip in some commentary, it is so one-sided that it resembles what viewers in Russia see on their all-Putin, all the time, government-run stations.  It isn’t balanced, and it often isn’t even grounded in reality.

 

       You would think the public would be savvy enough to resist being fooled by this kind of skewed, partisan reporting, but you would be wrong.  As in the movie Anchorman 2, a large audience was sucked in by waving the flag and telling people what they wanted to hear.  People started following certain on-screen personalities and believing everything they said as if they were infallible deities, even though their talking points come straight from the corporate desk above and have only a tangential relationship with any objective reality.  It quickly became clear just how easily people could be brainwashed by the talking heads on their screens to believe just about anything.  In 2020, Fox "news" won a defamation lawsuit because the judge agreed that no "reasonable viewer" would take what the big-name host said seriously.  This may make legal sense, but the real-world problem is that tens of millions of viewers do take what these folks say seriously.  These are grown-ups who should know better but don't, and they can cause quite a ruckus when they are outraged by phony stories meant to outrage them.  Over the last few years, four dangerous lies were driven home on right-wing fake news channels that a frightening number of people have accepted as true.  First, by repeating over and over that there has been a lot of voter fraud (when in fact there has been almost none), citizens were made more accepting of restrictive laws, ostensibly passed to protect the ballot, which in actuality made it harder for targeted segments of the population to vote.  Second, the validity of the 2020 presidential election was repeatedly challenged despite over 60 court rulings that found no evidence in support of these claims.  The net result of this disingenuous haranguing was a violent movement to overthrow the government in order to save the country, as we saw on January 6, 2021.  
Third, their racist nonsense about a “replacement theory” has led crazy white men to shoot people of other races in shopping centers.  Fourth, their lies about and misrepresentation of what they call “Critical Race Theory” have created a rabid reaction against a thing that does not even exist.  As Hitler observed, if you repeat a lie often enough, people will believe it.  Fox “news” mentioned what they called “Critical Race Theory” over 1,900 times in less than three and a half months in 2021.  There actually is a thing called CRT, but the real CRT is the study of how government policy has worked against minorities in the past.  These policies were very real, right through the 1900s, from Woodrow Wilson’s firing of all African-American employees in the federal government, to government redlining policies that prevented people from borrowing money to buy homes in areas where African-Americans lived, to the policies that made it harder for African-American farmers to borrow money.  The study of CRT is pretty much restricted to law schools and postgraduate programs.  No one outside of that world had ever even heard of it until Fox took the title and redefined it, based on their own wild imagination, to try to rile people up.  They say, with no evidence except their own echo, that CRT calls all white people racist and teaches kids to hate their country.  Politicians then followed suit, treating the make-believe thing as if it were real and, like "Professor" Harold Hill in The Music Man, getting people all worked up over nothing.  The parallels with Putin’s Russia are startling.  In Russia, non-profit agencies working to document Stalin’s atrocities have been shut down.  The reasoning for this is part of a nationalistic propaganda narrative that portrays Stalin as a hero who saved Russia from the Nazis.  To maintain a simple, easy-to-understand storyline, any facts that do not fit this narrative must be edited out.  
This means anyone who focuses on negative aspects of Stalin’s rule must be anti-Russian.  The facts that the people in the agencies were all Russians who love their country, and that Stalin was a psycho who ordered millions of innocent Russians to be imprisoned and executed are considered irrelevant in Putin’s eyes.  Similarly, the Fox-invented definition of CRT is the one we mostly use in what passes for our national discourse on their pre-fabricated “issue.”  People make passionate statements about a thing that does not exist, others react to these statements as if the non-existent thing were real, and on we go, with no one ever pausing to make it clear that the entire discussion is based on fantasy and disinformation.  Twenty-eight states have even passed legislation to ban the imaginary version of CRT.  They might as well pass laws to protect themselves from werewolves, vampires, witches, zombies, succubi, and tree sprites gone bad.  It is as if we are all living inside the movie Wag the Dog.

 

       So how can we extricate ourselves from this hole?  How do we create a system where some people report on current events without spin, half-truths, and lies?  One component of a workable solution is the requirement that agencies that want to be licensed as news providers in the future follow very clear and narrow parameters.  This includes refraining from making any statement that is not both objective and immediately verifiable.  If reporting on a workers’ strike, news agencies would not be allowed to comment on the workers’ actions or the company policy, only to report what is happening.  A reporter shouldn't say things like, "The workers are angry.  They are in a frenzy."  The reporter can show us videos of the strike, and ask striking workers, "Can you please tell us what you're doing and why?"  People on the spot could be interviewed, but reporters would no longer be able to cherry-pick the most positive or negative of these interviews to share with viewers, characterize the situation with their own positive or negative adjectives, or sum up the meaning or import of the event with their own spin.  If reporting on a president, you cannot call him a "wannabe dictator" and still act as an official, accredited news agency.  You cannot declare that the Department of Justice, when searching with a warrant for classified documents that an ex-president refuses to return, is “out of control.”  These sorts of statements are not facts.  They are opinions.  They have no place in a news report.  A person or group that reports in this way should not be allowed to use the name “news” to describe what they provide.  CNN and MSNBC, while at least operating in a fact-based universe, are also guilty of slanting the news.  These outlets make no secret of their disdain for Trump with sarcasm, pejoratives, shaking of the head, facial expressions, and headlines about "Trump's Dark Vision."  
Just report the facts without a running commentary of opinion and let us decide for ourselves whether something is dark.  In the future, viewers who acted in conjunction with public regulatory agencies would need to monitor broadcasts carefully to see that no licensed news agency was omitting important details, lying, or skewing their portrayal of events with a political agenda in mind.  News reporting should be neutral and honest, like a scientist recording data.  Reporters are not supposed to tip the scales. 

 

       Free speech is wonderful in theory, but there are always limits.  One clear limit should be in the reporting of the news.  Your own personal political opinion should not be a factor in gathering or reporting the news, and the viewers who come to the well for a drink should not receive a poisonous potion of editorial comment mixed in.  When this sort of biased reporting becomes the norm, the populace becomes so drugged that they can’t tell news from propaganda anymore.  When right and left news sources mix fact and opinion, or just substitute fact with opinion and outright lies, people cannot even agree on the basic objective facts that constitute our reality.  They can no longer have a civil conversation.  We see messages of hate on social media between armed members of paramilitary groups (who ironically think they are saving the country) openly asking each other why they shouldn't just kill all the people who disagree with them.  If one believes the outright, baseless lies that are being substituted for truth, it leads a person to this insane conclusion, and we let loose the four horsemen of the apocalypse.

 

       Can we trust government to fix this problem?  Can we trust corporations?  Can we trust the fox to guard the henhouse?  It seems we can only rely on ourselves to make sure that we wind up with a fair, honest, and informative news system in the future.  This will not be easy to achieve, but it is necessary.  With clear guidelines, a decentralized, fact-based system could be maintained, so long as it relied on strict monitoring, constant verification, and many checks throughout the entire news-gathering and reporting process.  This is hard to picture right now, with giant multinational corporations dominating the infotainment and outrage politics landscape, but it would fit hand-in-glove with a socio-economic structure based on intentional co-op communities that operated according to cooperative economic principles.  An accountable news media is a key component of an informed society and a world that works.  It may seem a long way off right now, but the basic outline is not so difficult to understand, and aligns perfectly with international norms of fairness, openness, and democratic principles.  If we can conceive of an effective, impartial reporting system and agree that it is an absolute necessity to have such a thing, we can, in the not so distant future, make it a reality.  If you allow yourself to be so jaded as to think it will never happen, you will be throwing away your children’s future.  Do you want them to live in a world where billionaire corporate owners can go on brainwashing anyone into believing anything?  If not, take some time to consider what a transparent, unbiased, and responsible system of news gathering and reporting would look like, what laws are needed, and how we, the people, can keep a good system from going bad over time.  This is not something that can be left to others to fix; we all have to be part of the solution.

​

Ron Burgundy
Interracial Kiss

 

IS RACE A STORY OR A CATEGORY?

   

       In the racist South, there used to be a legal principle known as the “one-drop rule."  If it could be proven that a white person had even one ancestor who was black, no matter how far back in time, then their blood was said to be corrupted.  They were not allowed to vote, serve on a jury, or hold office.  No matter how light their skin was, they were categorized as black.  The question of their story – their family background, culture, upbringing, education, life experiences, principles, character, etc. – was not seen as relevant.  The term they used was “miscegenation,” an abominable concept that assumed any mixed-race person was somehow flawed.  All of this stems from the flawed way that people, Americans chief among them, view race in terms of categories.

​

       Writer Sarah Cooper was recently interviewed on NPR.  She described her own experience with the oversimplification of race in America through categorization.  Sarah’s family was from Jamaica.  Her best friend, Stacey, was Jewish.  Stacey had been called a “N-lover” by other kids, and Sarah asked her why.  Stacey said, “Because my best friend is black.”  Hurt and confused, Sarah said, “I thought I was your best friend.”  Stacey said, “You are.”  Sarah, who had never thought of herself as black, processed this and said, “But wait.”  Stacey then informed Sarah that she was black.  Sarah’s parents had said they were Jamaican and had not used the word black to self-identify.  They probably thought the term referred only to people of purely sub-Saharan African descent.  The family history was especially nuanced because one of Sarah’s grandparents was Chinese and another was German.  To them, identity had been a matter of telling the complex story of where they were from and who their ancestors were, which makes a lot more sense than using a single word to describe a relative shade of skin color and having that word supposedly encapsulate your entire racial identity.  She said that in Jamaica, everyone was very mixed, but in America, everyone had to pick one group with which to identify (in her own words: “everyone needs to be: you’re black, you’re white, you’re this, you’re that”).

 

       So what is race?  Is it a question of where you are from, your culture, your self-identification, or the color of your skin?  President Obama was called America’s first black president by one and all with virtually no discussion about his racial identity, but there was an inherent leap of logic in categorizing him like this.  Obama’s mother was European-American and his father was from Kenya.  So why must he be categorized as one or the other?  Was this identification based on his appearance?  The practice of calling people black who have one black parent is normally meant as a politically-correct sign of African-American pride and a positive, progressive way of describing a person, but this all-or-nothing definition has a lot in common with the old one-drop rule.  There is a cultural pressure for people to identify as one or the other.  For a biracial person handed a questionnaire asking them to pick one race or another, it can be confusing.  If a person belongs to two or more categories, why should they be asked to pick a single one?  Why must we categorize race in such black and white terms at all?  It reminds me of the question Hank Hill asked his Laotian neighbor Khan in King of the Hill: “So, are ya Chinese or Japanese?”  Hank was trying to be friendly, but as these were the only two categories in his simple Texan mind, it was the only question he could think of.  Khan gave a brief overview of the location and population of Laos, which went right over Hank’s head.  Hank then repeated the question verbatim, and Khan lost his patience.  This brings up an important point: when people do not understand your story because of limited education and/or experience, do not become frustrated with them.  You just need to explain your full back-story to them (sometimes repeatedly) until they get it.  
Once they finally do comprehend the complexity of your origin story (and possibly others like you), they will be able to pass this understanding on to others.  This is how culture is changed: one mind at a time.

   

       Race is a construct and always has been.  Most Han Chinese have no knowledge of their non-Han origins, but virtually every Chinese person alive today has DNA from Huns, Turks, and Mongols, along with genes from dozens of tribal groups.  The Japanese, who have at times been fanatic about racial purity, all have ancestors from the Korean peninsula, from China, and from the indigenous Ainu.  If we rewind to a time before the rise of Rome, the Latin League was just one group among a patchwork of many in Italy, the most powerful being the Etruscans.  The various groups on the peninsula spoke different languages and saw each other as enemies.  Little did they know that in a few hundred years they would all consider themselves Romans and venerate their mythic origins.  Although the notions of “Rome” and “Romanness” were created out of thin air, the Romans were racist at the beginning of their rise, viewing all non-Romans as inferior.  They mellowed in many ways on the subject of race after they allowed all subjects of the empire to share in citizenship, but they never fully lost their sense of superiority until the final fall of Rome.  It was the mixing of “Romans” with invading “barbarians” that formed the states of the European Middle Ages, where people would later become insanely racist, with ideas (sprung purely from their imagination) about their own purity of blood.  If we rewind back far enough, of course, all humans come from small bands of hominins in Africa a few million years ago.

 

       When people focus on these supposed differences in the wrong way, the outcome is never good.  As an extreme example, take the case of Rwanda.  In Rwanda, the Hutus and Tutsis shared a common language and origin.  Their only difference seemed to be that Hutus farmed and Tutsis herded cattle.  The Kingdom of Rwanda was run by Tutsis from the 1700s onward.  During this period, some Tutsis formed a ruling class and disenfranchised the more numerous Hutus.  Back then it was still understood that the two groups were castes, not races, and that a person could move from one group to the other.  During the Belgian occupation after World War I, however, a rigid classification system was created and the groups were kept separate.  The Belgians supported the Tutsi hegemony over the Hutus in return for Tutsi support of colonial rule.  Hutus understandably resented the arrangement.  These two groups were virtually indistinguishable culturally, physically, and linguistically, yet in 1994, following a civil war, Hutus were driven by hate-filled speeches over the radio into a murderous rage.  Tutsis were called cockroaches by “Radio Machete,” and Hutus were told to exterminate them.  Over 500,000 Tutsis were massacred, mostly with machetes.  The movie Hotel Rwanda is a fact-based drama that portrays the events of the genocide.  The protagonist is a Hutu hotel owner with a Tutsi wife.  If they had been living anywhere outside Rwanda, no one would have been able to find the slightest racial or cultural difference between the two, yet these manufactured differences within Rwanda were the cause of mass slaughter.

 

       The same could be said of couples living in the West in which:

 

       *One partner was Japanese and the other was Ainu.

 

       *One partner was Japanese and the other was Korean-Japanese.

 

       *One partner was Japanese and the other was Burakumin.

 

       *One partner was Vietnamese and the other was Hmong.

 

       *One partner was from the Kshatriya caste in India and the other was Dalit.

 

       *One partner was Croat and the other was Serbian.  

 

       In each case, the differences between people are so slight that outsiders cannot distinguish between them, yet within the microcosm of their own national milieus, these differences have been magnified to justify discrimination, persecution, warfare, and genocide.  We could expand this list to cover more national and religious differences that are used as casus belli, but you get the idea.  These differences are important in that we should seek to understand each other in full and not lump people together in general categories that deny them their unique identities.  These differences are not important in that they should never be used to divide people or incite negative feelings.  The only way forward to world peace is to eliminate all divisions between people.  We need to integrate on a larger scale and in a different way than ever before into true communities.  We need to provide protections against speech that incites racial hatred, discrimination, or even a sense of superiority.  The ludicrous notion of “replacement theory” disseminated by Fox “news” is a prime example of free speech straying into dangerous racism.  There should be sanctuary from racism.  People should see that no race is intrinsically better than any other and realize that racial mixing, which has been going on as long as there have been people on the earth, is a positive thing in every way.  The only thing lost in an interracial union is the narrowness of belonging to a single group.  The gains are immeasurable. 

 

       As Walt Whitman wrote in his poem, Passage to India,

 

       Lo, soul, seest thou not God’s purpose from the first?

       The earth to be spann’d, connected by network,

       The races, neighbors, to marry and be given in marriage,

       The oceans to be cross’d, the distant brought near,

       The lands to be welded together.

 

       In the future, the human race will be a melting pot wherein these artificial differences will disappear.  For our grandchildren, it will be normal for people to be as mixed as Tiger Woods: African, Native American, Chinese, Thai, and European.  There really is only one race: the human race.  All distinctions that have biases attached are nonsense.  We are one species.  Variety is the spice of life, and we need to recognize our differences, but always in a positive, celebratory way.  All the drops of blood that make us who we are genetically and culturally should be researched because they are fascinating, as shown on Dr. Henry Louis Gates, Jr.'s show, Finding Your Roots, not because they are useful for categorizing and dividing.  We don’t have to choose part of our lineage and declare that to be our one true identity.  We need to tell our stories in their fullness without concern for the meaningless classification systems that have existed up to now.  If we ignore these empty abstractions and instead define ourselves as we want to be understood, we can do so independent of previous stereotypes and limitations.  In addition to relating our own stories, we need to be willing to listen to the stories of others as well.  In the end, we will realize that the stories of others belong to us, the listeners, as much as they belong to the storytellers themselves.  Each of us is unique, and at the same time we are all one.  Love is the glue that binds us together.  This is the wisdom that will save us.

​

Einstein and IQ

​

THE I.Q. MISCONCEPTION

 

       “Intelligence Quotient” does not mean what most people seem to think it means.  Your so-called “IQ” does not tell “how smart you are,” as people assume.  What is known as an “IQ level” is a confusing misnomer that has large implications for the way we think of human intelligence.  An “IQ” is really just the score a person gets on an “Intelligence Quotient” test.  This is a test, of which there have been many versions over the years, to roughly gauge a person’s ability to solve problems, reason, and demonstrate understanding of such things as vocabulary, instructions, sequence, relationships, patterns, and rules.  A test like this is useful to a large organization, like the military, which in times of emergency feels the need to rapidly determine which inductees may be able to think on a higher level than others.  Beyond that, a test like this has little or no use.  To be clear, an “IQ” is only a score on a test, which can change based on the amount of knowledge a person has, as well as their preparation for the test.  It is not a fixed number, as many mistakenly assume, telling a person in some authoritative way how “intelligent” or “smart” they are.  There is no way to accurately determine this about an individual and assign them a score that is unchanging and meaningful.  At the heart of the matter is the mistaken notion, which we have all accepted far too easily, that a human being’s brain power can somehow be correctly measured and assigned a number like a bar code on an item in a store.  This is not true and never will be. 

 

       There are countless different kinds of intelligence describing all the skills we could attempt to categorize and assess: mathematical skills, language skills, listening skills, memory (in all its forms), attention to detail, survival skills, street smarts, decoding skills, problem-solving ability, intuition, social skills, ability to read people, ability to play a role or lie convincingly, ability to think ahead, ability to follow through on necessary actions to achieve a goal, etc.  A failure in any one of these areas might cause people to have a lower estimation of our "intelligence," but their lowered estimation may not matter at all.  A person with amazing survival skills is needed when we are lost in the wilderness, not a person with great abilities as an accountant.  When we want advice on our relationships, we need help from people who understand emotions, not people with the best engineering knowledge.  For every purpose, there is a person with a skill, and we all have different skills and weaknesses.  Who is to say which is better or worse?  It all depends on the situation.  As the saying goes, a blind cat is better than a horse, if what you need is to catch mice.

 

       We are people with names and unlimited potential, not numbers.  That is why this seemingly small issue of accepting an "IQ number" is actually a big deal.  Is each young person to be counted as “just another brick in the wall,” as the Pink Floyd song claims public schools treat children, or are we individuals with names and personalities, filled with infinite creativity and power?  It all depends on how we define ourselves.  No one can be adequately summed up by a number, yet we have been conditioned by our culture to do this without question for the last century.  So from now on, when people say that something will lower someone's IQ or shave off IQ points as if an “IQ” is a true measure of cerebral ability, you should call them on their mistake.  Point out that this is nonsense based on a misunderstanding of what “IQ” scores really are, and explain to the speaker that the very notion of a person being defined by a number degrades all human beings.  Describing people according to their IQ as if it is a fixed number is akin to describing them according to their SAT scores.  If you want to get a higher score on an IQ test, just study and take the test again, as with any test.  The persistent use of IQ tests and the practice of labeling people by their scores is part and parcel of our whole dysfunctional system.  As long as we say that so-and-so has an IQ of X, we perpetuate this erroneous train of thought that makes us believe we have predetermined limitations.  Don't buy this insidious lie.  You are not a machine; you are a human being.  Pay no attention to the labels, numbers, or preconceptions that others have about you.  Don't let anything stop you or slow you down.  Get out there and do something great that no one else has ever done – something that no one but you could possibly do. 

​

Travel to Tibet and Nepal

 

TRAVEL WITH A PURPOSE

 

       Like most people, I consider travel to be one of the most enjoyable and rewarding experiences in life.  I have spent endless hours reading about far-off lands and dreaming of visiting them.  I sincerely believe that travel can be one of the most powerful ways to change a person for the better.  That being said, I think it is important that we travel in a mindful manner.  When engaging in non-essential travel, it is healthy to (1) remain thankful for the opportunity and not take such a thing for granted, (2) remain humble about what travel does and does not do for a person, (3) consider the cost to the environment, and (4) always try to travel with a purpose.

 

       Almost everyone loves to travel.  There is a particular kind of excitement inherent in entering the portal of an international airport and exiting at any other point on earth.  Air travel connects the far-flung parts of the planet like a giant internet with teleportation ability.  The great epic journeys of our history – crossing seas, traversing the vast and varied expanses of the Silk Road, exploring continents and cultures by foot like Ibn Battuta – can all be replicated with ease in the space of a single day.  From the moment you step off the plane, you are met with new smells, stimulating sounds, and a whole different feel to the air (a different temperature, humidity, elevation, level of purity, etc.).  All of it is so pleasurable as to be intoxicating.  Travel is truly wonderful, and almost indispensable for maximum growth in the modern world.  I would not be who I am today if not for the time I spent overseas.  As important and beneficial as travel can be, however, I have observed that it has grown so ubiquitous as to become a kind of mindless universal fixation, a burning desire to travel in order to relieve boredom.  Essentially, it is travel for the sake of travel.

 

       One problem with our common conception of travel is that we all too often view it not in terms of growth, learning, carrying out a mission to help others, or meaningful exploration, but with a consumer mindset.  The whole experience of travel is approached in terms of how much satisfaction we get from a trip, as if it were no more than a visit to a restaurant.  Like a newspaper critic, we assess and score all the individual components that make up each travel opportunity the way we critique the various aspects of a dinner: the service, the decor, and every single item on the menu.  At the most simplistic level, we view the whole kaleidoscopic experience of a journey as if we are consuming a bottle of soda.  This is beautiful: I want to look at it.  There is delicious food there: I want to eat it.  The beach is lovely there: I want to laze on it.  We then take these pictures and videos of ourselves in these places doing these things and return home to show others these images, thus demonstrating how wonderful our lives are.  Of course there is nothing wrong with doing these things in these places, and indeed it would be silly not to partake in local cuisine and see local beauty spots while you are there, and there is nothing wrong with taking pictures and videos while traveling.  When we have no other purpose to our travels (or to our existence), however, besides checking off a series of boxes, the entirety of a trip (or of our lives) can unfortunately be reduced to a series of such sensory experiences, each to be rated zero to five stars for its quantity and quality.  The complexity of the review may be sophisticated if we reach the level of a connoisseur, but the attitude behind it is not.  While it may appear cultured, it is an approach mired in selfish, materialistic attitudes toward life.  It is based on a gilded, superficial view, much like that of the wealthy, foppish protagonist brothers in the comedy series Frasier.  
I once saw Chinese tourists who had been traveling in California on different itineraries compare notes, and they did exactly this.  How many hours did you get at Disneyland?  How many hours at Universal Studios?  How many hours at Yosemite?  It was like kids who went to different birthday parties comparing who ate the most cake and candy.  Their competitive, consumerist view of their overseas experience was like my dad’s attitude at the all-you-can-eat buffet when I was a kid: I had to eat at least an amount of food equal to the price of my admission, and if I exceeded that, we had “won” and “gotten our money’s worth.”  Treating travel as if it were the equivalent of consuming a candy bar makes us think and act like the swarm of ever-ravenous monkeys in the Buddha's parable.  They move like locusts from tree to tree devouring fruit.  They are never thoughtful, they never learn or grow, and they are never satisfied.  Their attitude represents the flame of desire that keeps the cycle of samsara spinning.  To see a perfect human representation of this seemingly perpetual state of hormone-driven craziness, just watch some footage of college students during spring break in Florida.

 

       Another problem with pop culture's view of travel is the growing lack of gratitude.  I have heard millennials with a decent income declare haughtily that travel is now considered a minimum measure of success, as if it is in a sense a God-given right, like cell phones and streaming video services.  The attitude is that the world is (or should be) your playground.  Problems like poverty, war, disease, oppression, global warming, pollution, et cetera, are a drag, and should not interfere with your enjoyment of Airbnb, sushi, bubble tea, tapas, margaritas, skiing, waterskiing, parasailing, ziplining, etc.  Any thoughtful and compassionate person must remain aware of the immense suffering that much of the world presently endures and be constantly on a mission to work in some way to alleviate its causes.  We must be thankful for what we have and remember our true purpose in life.  We must not allow ourselves to become snobs with an inflated sense of entitlement; we must maintain a higher vibration and set a better example than imperialist Europeans on tour in their colonies.

 

       I used to assume that travel automatically made people wiser due to their expanded wealth of experiences.  When I was overseas, I met people who showed me that it was in fact possible to travel widely and gain absolutely no wisdom.  For some, it means becoming more and more snobbish and arrogant.  When you have visited more than half the countries on earth, it is unfortunately all too easy to look down on those who have hardly been anywhere and want to tease them for their poverty or lack of worldly experience.  To assume that travel automatically leads to an increase in knowledge and wisdom is a terrible mistake.  If your mindset during your travels is on the same shallow and materialistic level as the above-mentioned troop of monkeys, you have zero chance of developing wisdom.  Your travels will probably just make you more blasé, egotistical, and difficult to please.

 

       A recent ad for Expedia stated: “Do you think any of us will look back on our lives and regret the things we didn't buy or the places we didn’t go?”  It resonated deeply because this is the way most of us think.  When we meet someone, we regularly ask, “Where have you been?” as if this really mattered in and of itself in determining a person’s identity.  Traveling in order to party in Las Vegas is not in any way comparable to traveling in order to meet people and learn from them – learning a culture and language, for example, studying with a professor, or spending time with a spiritual master.  Travel for the most profound of purposes is on so much higher a plane than travel merely to consume pleasant experiences that it deserves an entirely different name (as it stands, the word travel can refer to either a forgettable, mindless lark or an epic odyssey of self-discovery).

 

       We must also consider the environmental cost of travel when deciding whether or not to visit a place.  Until clean technologies become available, there is an added cost/benefit analysis that must be calculated.  Notice how Greta Thunberg admirably chose to cross the Atlantic by sailboat rather than by plane.  At present, we are burning fossil fuels to travel most everywhere we go, and so we must weigh the reality of our carbon footprint when justifying the need to go anywhere, especially halfway around the world.  Flying to the Maldives to stay in a luxury hotel for a few days might not be worth it from an ecological standpoint, and will probably help put the whole island chain underwater.

 

       Yes, sometimes we all need to travel to take a break, get some exercise, or recharge our batteries, but sometimes this is just an avoidance technique, whereby we delay doing the things we know we should do.  I for one often put off meditating and go to the gym or for a hike instead.  When I was in college and knew I should be studying for a big test, I sometimes snuck off to the movie theater rather than act responsibly.  The idea of physical movement is more attractive than stationary focus: it is easier and more immediately gratifying.  It provides the illusion of achieving something simply because we are changing the scenery around ourselves.  If we tell people we went to a beach, a mountain, or any distant and scenic place, it provokes oohs and ahhs.  People are impressed.  We become the center of attention.  They ask detailed questions about what we did, what we saw, what we ate or drank, etc., as if it made any difference.  If, on the other hand, we tell them we read a book or meditated all weekend, people think it is boring and have no follow-up questions.  A spiritually minded person must recognize that meditation is the most important thing to master in this lifetime.  If our travel helps with this, it is worth it to go to the ends of the earth.  If we are merely traveling to avoid learning to control the self, there is no point in it.  Sometimes it is better to stay put and practice.

 

       When we travel, we should have a greater purpose in mind.  We should make it our aim to learn something, to improve ourselves, or to solve a problem, always with the ultimate objective of helping others in the long run.  In the course of doing so, it is better to go to one place and drink deeply than to visit 100 places in a hurry, glancing briefly at the flowers before moving on to the next spot.  If our goal were to brag about how many places we visited or in how many languages we learned to ask for the bathroom, we would aim for quantity over quality, but this would be unwise.  View your trip through a mature lens and plan so as to do good for others directly, or else plan to gain knowledge or increase wisdom to improve the lives of others over time.  Think about how you will take the harvest from your experiences and pay it forward to benefit the next generation.

 

       Don't automatically assume the more traveling you do, the better.  Yes, it seems really cool and fun, and it makes great videos that get people's attention, but none of that is important.  The main thing is to realize that you are put on this earth to work unceasingly for the good of others, not yourself.  This may require us to travel or it might not, depending on your situation.  I remember when I was traveling across Australia my first time overseas and realizing that I was only witnessing a glimpse of life passing by, without having any impact.  I had been studying history at the University of Queensland, and I recalled a saying about the Mongols: "You can conquer the world from horseback, but you can't rule it that way."  As I soaked my sore feet in a fountain in Adelaide, I invented a corollary to this saying: "You can see the world with a backpack, but you can't change it that way."  To change the world requires far more thought, planning, experience, and effort.  We all need to see through the ego-based cultural standards connected to travel and rid the self of its desire to be constantly on the move, enjoying a multitude of external stimuli.  Remember what Milarepa said on the subject.  He told his disciples not to waste their time going on pilgrimage, but to stay where they were and meditate instead.  “The Buddha is complete within you,” he reminded them.  Also remember the truth in the Daoist adage: “The entire world may be known without leaving the house.”  

​

Gun violence and the Second Amendment

 

THE SECOND AMENDMENT TAR PIT

 

       The Second Amendment to the United States Constitution reads: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”  This was written in the late 1780s, right after the American Revolution.  The idea of people owning their own muskets and swords went back to the experience with Cromwell in England, where Parliament found it necessary to fight for the people’s rights in the face of a tyrant who did not honor the social contract.  This right to bear arms was enshrined in the English Bill of Rights in 1689 after the Glorious Revolution.  The British residents of the 13 colonies inherited this right and put it to good use when the distant authorities in London began to deny them the right to negotiate their tax burden through parliamentary representation.  It was the British attempt to seize the colonists’ arms that led to the opening of hostilities on Lexington Green, sparking the revolution.  With this war fresh in their minds, and still keenly aware of their vulnerability to attack by European nations, Congress immediately passed the first ten amendments – the Bill of Rights – including the right to bear arms.

 

       The Second Amendment is problematic for many reasons.  First, it was specifically intended to ensure the existence of a militia, not the type of private ownership of firearms we have today.  Second, at that time, there was neither a standing army nor a military-industrial complex.  In the 1780s, the right to bear arms was a guarantee that the country would have the basis of an army that could assemble for defense as needed.  In a world with a multi-branched military of millions, supersonic planes, nuclear weapons, and a navy that spans the globe, there is little need for such a militia as originally conceived.  Third, the nature of the weapons at our disposal as individuals is completely different.  When the founders wrote the Second Amendment, they were thinking of inaccurate, smooth-bore muskets that took a minute to load, not today’s rapid-fire assault rifles.  A person with an assault rifle can kill dozens in one minute, and if they have enough ammo, fully-automatic upgrades, targets, and a place to hide, they can inflict hundreds of casualties, as occurred at the Mandalay Bay Hotel in Las Vegas in 2017.  If only the Founding Fathers could have seen all the mass murder events in the U.S. over the last 50 years, they definitely would have added some footnotes to this amendment.

 

       There are certainly times and places when it is advisable to be well-armed.  When the United States was facing the Empire of Japan in early 1942, and considering the possibility of Japan controlling the entire Pacific, the fact that Americans were well-armed made it next to impossible for Japan to seriously consider a land invasion.  In addition to resisting foreign incursions, other reasons to be well-armed include the possibility of internal instability: unrest in times of natural disaster, civil war, or the need to resist a dictator like Cromwell who wants to deprive people of their rights.  All of these instances, however, could best be guarded against with a well-regulated militia, not the possession of lethal weapons by everybody regardless of training, coordination with neighbors, or state of mental health.  Switzerland is a good example of a state that has many firearms for national defense (in the form of militias), but which carefully regulates the private ownership of guns.

 

       Since I was born in 1968, over 1.5 million Americans have been killed by gun violence: about double the number of Americans killed in all wars since 1776.  In 2023 alone, an estimated 42,888 people were killed by firearms in the U.S.  That amounts to about 117 deaths per day, or 10.89 deaths per 100,000 people.  Compare that with Japan, which has an annual 0.08 deaths by gun violence per 100,000.  Even in firearm-rich Switzerland, the rate was 0.1 death per 100,000.  Why should people in the 21st century live in fear of being shot like this?  We are no longer living in log forts with British soldiers besieging us from all sides and marauding pirates swarming offshore.  Why should we have to worry about being shot by nutcases in schools, malls, theaters, or street fairs?  Whenever common-sense legislation is proposed to restrict gun ownership by lunatics, the knee-jerk reaction is to invoke the Second Amendment’s “right to bear arms.”  Notice how those who invoke it ignore any mention of the “well-regulated militia.”  We can argue all day about the causes of gun violence and crime in general and get no closer to a solution.  Yes, it is technically true that people kill people, not guns (although, as David Letterman once noted, it's really those darned bullets that do it), and that if all guns were outlawed, then only outlaws would have guns (along with police and soldiers).  It is also true that it is a lot harder to kill people with knives than with guns, and that the presence of a gun in the house actually increases the risk of death by firearm (often by suicide or in a family dispute when tempers flare).  We can take a pacifist attitude or a survivalist attitude, and there will be pros and cons to each.  In the short term, the best course for each family to take vis-à-vis firearms all depends on what happens next.
Whatever decision we make as individuals regarding gun ownership – to have or not to have – can potentially lead to a life-threatening situation.  In terms of the national debate on the gun issue, we are currently as stuck as a saber-toothed tiger in the La Brea Tar Pits.

 

       This impasse will not last forever.  Eventually, most likely sooner than anyone thinks, events that no one can fully foresee will change the entire landscape of the political, social, and economic status quo.  We will have an opportunity to reweave the entire fabric of our socio-economic existence in such a way that our descendants will be able to protect their human rights without having to endure such an insanely dangerous world.  If we put our heads together, we can make war and invasion a thing of the past.  We can create communities that are prosperous, equitable, and safe enough that violent crime is extremely rare.  The best way to prevent crime is to raise children properly and work together with neighbors to create an environment in which all members have a stake in society.  We can create governments that are decentralized and democratic, so that despotism is all but impossible.  There will always be a need for something akin to a military or police force in some form, but this can be low-profile and low-risk in terms of its ability to install a leader by force of arms.  We should not look at the carnage of today and imagine that it must continue into the future.  Creating a world in which people outnumber guns is not a betrayal of the Constitution.  The Founding Fathers wanted the people of North America to be secure in their right to life, liberty, and the pursuit of happiness.  An important prerequisite for this is a continued absence of bullet holes in our bodies.  We cannot expect people to go on accepting the collateral damage we have been tolerating for the last fifty-plus years.  Do you want to see your grandson or granddaughter blown away by someone who should be in a mental hospital?  If not, we need to make major changes pronto.  We cannot expect a better world to take shape by accident; we have to use wisdom and foresight to create a plan right now, and then work tirelessly to make it a reality.

​

       One option to consider is a modified Swiss model based on a community co-op arrangement.  Firearm ownership and training could be not only allowed, but mandated, so long as it was part of a militia system to maintain order in case of emergency.  Safety would need to be a top priority.  Target practice would be done in most cases as part of a community activity.  Instead of the situation we often have, where armed criminals drive through communities and law-abiding citizens huddle inside (usually without guns), each person would be part of a prosperous and proud community (and raised to know that the worst thing they could do is bring dishonor on their community), and each community would speak softly but carry a well-stocked armory.  In the case of a gun-loving community that follows a nutty cult leader (or does some other crazy thing) and becomes a danger to itself and others, the regional militia composed of all surrounding co-ops would be sufficient to keep a lid on the situation.  The idea of a world without guns may sound nice to many of us, but for the time being, it is most realistic to aim for a firearm policy based on minimal risks and responsible community preparedness.  The key to making any such policy work is the success of all our other systems – social, economic, political, and educational – so that hatred, poverty, crime, and war become things of the past.  Looking farther into the future, as our culture changes for the better, hopefully the need to use firearms for self-defense will diminish, and one day our descendants will be able to live their lives without ever having the slightest worry about being shot.

​

Black Mirror

 

NOT JUST ANOTHER 15 MILLION MERITS

 

       Spoiler alert.  In the Black Mirror episode “Fifteen Million Merits,” people live in a future where most are relegated to pedaling exercise bikes all day to generate power.  This activity earns the workers a kind of currency called merits, which they can use to purchase food or television privileges.  All day long, the bike-pedaling populace is glued to screens filled with mindless entertainment.  Workers retire at the end of the day to their cubicles – little rooms where the walls are giant television screens.  What they are able to see or avoid seeing depends on the amount of merits in their account.  Bing is a bike-pedaler with an extraordinary amount of drive.  He accrues 15 million merits and gives them to Abi, a love interest of his, but this backfires and ends in heart-breaking separation.  Bing then redoubles his cycling efforts and raises another 15 million merits, which he uses to purchase a spot as a contestant on a talent show that can give him a whole new life.  He shocks everyone by holding a piece of broken glass to his own throat and launching into an extended rant about the madness of their dystopian society.  The judges are impressed enough to give him a television time slot.  In the final scene, we see Bing doing his bi-weekly rant before the camera, then turning to enjoy a luxury apartment with a view.  He has become part of the system that he despises.

 

       This is one of the most powerful episodes in the series.  It encapsulates a great deal of the current insanity of our daily lives.  Our amazing technology is used by millions around the world, not to learn or to better ourselves, but to watch silly videos of people dancing, engaging in hedonism, or doing stupid things on purpose.  Our minds have created mass media whose main purpose seems to be to destroy our minds.  Our days are spent doing things as pointless as pedaling bikes and getting nowhere, and yet we, the people, are not in charge.  The system that governs our lives was designed by distant authority figures and is not open to discussion.  Each of us does what we can to stay sane in an insane world, and we try in our own small ways to make the experience of life as pleasant as possible.  When one of us is observant enough to make an impressive on-air critique of our culture, it leads to a moment of fame and perhaps some wealth, but makes no impact on the rules of the game.  The person who so ingeniously points out the flaws in the system becomes co-opted by the system.  This comes from the universal need to make money, and is therefore completely understandable, but the result is ironic.  We become accustomed to hearing talk of change, but always in a way that turns it into safe infotainment, not a catalyst for actual change.

 

       When we turn on the TV, what we usually see is 99% insipid entertainment and 1% insightful observation by people who have become a part of the system about which they complain.  What we need now is not another 15 million merits.  This amounts to little more than an extra gallon of water over Niagara Falls.  Fame and fortune can come to people who write books or produce films that make brilliant observations, but if these works do not translate into a revolution in thought and action, what use are they?  What we need now is a practical plan for comprehensive change.  We must think in a brand new way if we are to make the fundamental alterations to our way of life that are needed to save ourselves and our planet.  We cannot keep going along to get along: prioritizing our own financial security (essentially selling out to the powers that be in exchange for money because we see no other alternative), assuming that the world of tomorrow will be just like the world of today.  We need to completely remake our socio-economic system, and with it the entire world.  We need to think of the whole more than our own piece of the pie.  We must dare to dream big.  Anything less will have all the effect of obtaining a better room on the Titanic.

​

The Roman Senate

 

THE GENIUS OF THE ROMAN REPUBLIC

 

       Rome is often depicted as having had the most impressive military and architecture in the ancient world, and by extension, the assumption has grown that Roman government and society were somehow a pinnacle of human achievement.  Nothing could be further from the truth.  Inequality, slavery, and corruption made the Roman Empire a hideous monstrosity that did not deserve to continue.  When it finally collapsed, the main question for historians should not have been, “Why did it collapse?” but rather: “Why didn’t it happen sooner?”  As thoroughly messed-up as the Roman Empire was, if we go back to the earlier days of the Roman Republic, there is actually a lot to admire.  If only the Romans had possessed the courage, morality, and foresight to make their society a place where neither slavery nor gross inequality could exist, perhaps they could have continued indefinitely.  So what exactly were the impressive qualities of the Roman republican system?

 

       The Romans built upon the Greek idea of citizenship.  Ideally, the Romans did not shirk their duty; they upheld it with pride.  The early Romans embodied the spirit of JFK’s famous statement, “Ask not what your country can do for you; ask what you can do for your country.”  When everyone in society has this attitude, and when the benefits of citizenship are conferred on everyone equally, we all become brothers and sisters in the project of civilization.  Every problem is one for us to solve together.  Every benefit of our labor is one we can proudly share.  Unlike the Greeks, who offered citizenship only to people who were born in a particular city-state, the Romans conferred it upon everyone who moved there or belonged to any city within the growing confederation.

 

       The Roman Republic created an alliance of virtually independent cities.  They were united by a larger government composed of representatives from all regions, but the federal authority was not a remote dictatorship that micromanaged each community and removed their freedoms.  For the most part, the government was localized and decentralized.  While the system was neither equal nor fully democratic, democratic principles of self-determination did apply.  The main binding impetus for all the cities in the Roman Republic was defense: when under threat, it was one for all and all for one.  Beyond that, it was mostly up to each city how they ran their own affairs.  The advantages of being included in the republic far outweighed the disadvantages, and although expansion usually came through warfare, citizenship was a coveted commodity.

 

       The Romans of the early republic had a stubborn determination to find workable solutions to their problems.  While they were perfectly willing to fight their neighbors, the early Romans believed that civil negotiations were essential among citizens and between classes.  As much as they loved bloodshed, the idea of using bloodshed to settle domestic disputes was (in the beginning at least) most definitely off the table.  While the division between patricians and plebeians was an unfortunate development based on ridiculous inequality, the Romans at least had the good sense to peacefully negotiate a compromise by which they shared power.  The Romans of this period had a positive, can-do attitude, and they did not take no for an answer.  Their political solutions were sophisticated and flexible.   As long as they maintained their steadfast commitment to the rule of law, they made an otherwise impossible situation work.

 

       These guiding principles are rapidly disappearing in our world today.  We are making many of the same mistakes as the inhabitants of the Roman Empire, as if determined to repeat their self-destruction.  If we take the example of Rome’s rise and fall seriously, our inability to resolve our most pressing problems does not bode well for us.  In the case of the Romans, there was a famous turning point when the land reforms proposed by the Gracchus brothers were met with open violence in the streets.  Rather than peacefully negotiate a new compromise as they had in the Plebeian walk-out of 494 BC, the patricians were so unwilling to give up even a fraction of their wealth that they hired thugs to assassinate the leaders of a protest movement.  This new status quo, along with the Marian reforms that turned the army into a massive force composed of landless poor who would follow their leader anywhere, led to the need for an emperor to ride the tiger's back and prevent civil war.  If we wish to avoid the fate of the Romans, we need to rediscover the patience, willingness to compromise, and self-reliance that we had in the beginning of our own republic, as well as relearn some important lessons from the early Romans.  To greatly simplify the secret of their success, their policy was to turn enemies into friends and convert the whole world into a brotherhood of people who shared all the rights and responsibilities of citizenship.  Yes, the Romans were a hot mess.  Yes, we must remember that in addition to their inequality and slavery, their culture was full of other horrible traits like blood-lust, greed, hunger for power, endless corruption, and willingness to blindly follow a tyrant.  Without turning a blind eye to their shortcomings, we should recognize that in the early days, the Romans displayed a genius that no civilization had shown prior to them.
The earlier Persian Empire had been remarkably tolerant of regional culture and religion among conquered peoples, but it was a thoroughly top-down system.  The weakness of the Persian system lay in the fact that they demanded loyalty to a completely centralized authority, concentrated in the form of a single, fallible human being.  In the beginning, prior to the breakdown in respect for the rule of law, civil wars, and the need for a strongman to keep the peace, the Roman constitution gave citizens a sense of belonging and control.  It was intended to be a more or less bottom-up system where the policies of government were a direct reflection of the will of the people.

 

       Benjamin Franklin predicted the future of the American republic, writing,  “I believe . . . that this is likely to be well-administred for a Course of Years and can only end in Despotism as other Forms have done before it, when the People shall become so corrupted as to need Despotic Government, being incapable of any other.”  This is what happened in ancient Rome, when, as Juvenal wrote, “. . . long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”  We are now at a pivotal juncture where our institutions are on the verge of breaking down.  According to the usual paradigm, this is the point where we descend into authoritarianism.  If we want to avoid this trap, we need to act with wisdom and courage, using all the lessons of the past as our guide.  Our amazing technology can either be used to create a dystopian surveillance state that makes 1984 pale in comparison, or we can harness it to save both ourselves and our planet.  To do so, we need to learn from both the successes and failures of the past.

 

       The mistakes of the Romans are obvious, most notably their unwillingness to stand up for human rights (by ending slavery, for starters) and to stop the unbridled accumulation of wealth in the hands of a few.  At the same time, the brilliance of their early system is important to study and adapt for our present needs.  The Romans left each city within the republic essentially free to run its own affairs as it saw fit, so long as it followed basic republic-wide norms and sent troops when needed for the defense of all.  Moreover, they held firm to the belief that non-violent negotiations among citizens could always lead to a compromise.  In the future, we can adopt the best aspects of their decentralized model and improve upon it through voluntary and peaceful means.  When the people of the world decide they have had enough of war, we can, in very short order, create mixed, democratically-run, self-governing, intentional communities.  These would re-weave the tapestry of all our socio-economic systems, ending the age of warring nation-states and leading us into a golden age of peace and prosperity.

 

Abraham Lincoln

 

LEARNING FROM LINCOLN

​

       The Gettysburg Address took less than two minutes to deliver, but it is one of the most famous speeches in the English language.  Although it is renowned today, it was not recognized as being particularly impressive or important by most people at the time.  Lincoln himself thought the speech was a failure.  The Chicago Times stated that all Americans must be ashamed by the president’s silly, flat, dishwatery utterances.  Its brilliance was appreciated by the main speaker in Gettysburg that day, Professor Edward Everett, who wrote, “Dear Mr. President, I should be glad if I could flatter myself that I came as near to the central idea of the occasion in two hours as you did in two minutes.”  In this speech, Lincoln masterfully combined brevity, eloquence, and insight.

 

       Lincoln had been a lawyer and was loquacious to a fault.  When necessary, he would speak for hours, as he did in his famous debates with Douglas in 1858.  As much as he loved to talk, whenever he was able to express the essence of his thoughts in fewer words, he did so.  If today’s 24-hour news programs, podcasts, and streaming channels had existed at the time of the Civil War, Lincoln would have been invited to do interviews and answer questions about what influenced him, his philosophy, his opinions, what inspired him to write his speech, exactly what he meant with his words, how he wanted people to feel, what demographic he was aiming for, how he felt about the latest poll about his speech, etc., etc., ad infinitum.  Lincoln said exactly what he wanted to say: no more, no less.  It would not have been Lincoln’s style to do the talk show circuit the way authors and politicians do today, shamelessly self-promoting, plugging his latest book, and looking to sell his own brand of stove-pipe hats.  Describing in more drawn-out language the meaning of the words you have already written or spoken is counterproductive.  Doing so essentially teaches people not to read and think for themselves, but to rely on you to summarize (simplify) the message of your own speech, play, song, poem, or book down to a one-liner for them.  Could you imagine Shakespeare going on television interviews and being grilled about what exactly he was implying in every line spoken by every character?  Moreover, Lincoln would have been judged not for his words and their meaning, but for his affectation or facial expression when he was taken by surprise with questions that were random, inappropriate, off-topic, and/or personal.
People would have formed an opinion based on such irrelevancies as how he looked (gangly, gaunt, and exhausted), what his voice sounded like (Lincoln was said to have a reedy voice), his posture (Lincoln would sprawl and slouch), or his dynamism (he was often so quiet and unobtrusive as to go unnoticed).  He was not movie star material, and therefore public estimation of Lincoln would have plummeted if the populace back then had been accustomed to performance artists as we are today, and if they had seen him paraded across all the talk shows the way public figures are now.  The media of the 2020s is interested in generating viewership for money, which means dragging out conversations, gossiping, over-analyzing, and inventing trivial topics of inquiry or outrage, all the while ignoring or minimizing the most important problems and issues we face.  If we liken the news reporting of the present to the examination of a patient with a serious illness, the bulk of the discussion is focused on a graphic depiction of the most sensational symptoms; not an authentic investigation into the underlying causes and a realistic plan for a cure.  It might motivate us to reassess our attitudes if we remember that within the context of this analogy, we are not just passive observers, but the dying patient as well.  It is our own human-invented social, political, and economic systems that are failing, as well as the life-support system of our own planet.  

 

       In the 1800s, people did not judge candidates so much by their age, their appearance, the tone or volume of their voice, their phony, media-generated image, their mannerisms, theatrics, or other unimportant details.  They pored over the words and actions of the men running for office and deeply considered their ideas.  Yes, a literate public paid attention, read the newspaper, and people who ran for office were actually held accountable for the words they spoke and wrote.  Public figures were expected to be knowledgeable and their statements were supposed to be factual and meaningful.  Can you imagine?  Citizens were expected to possess common sense and not be under the spell of brainwashing agents of disinformation or propaganda.  They were expected to pay attention to current events, know their history, and most importantly of all, think for themselves.  When we stop reading and using logic, we are rendering ourselves incapable of functioning as citizens of a democratic society.  We need to turn off the media onslaught of mindless blather.  They feed us drivel because we keep accepting it like hypnotized consumers instead of empowered citizens.  Rather than passively accept whatever corporate media chooses to shovel into your skull, turn off the idiot box and read something meaningful.  We need to stop depending on others to chew our food for us.  If Leonardo da Vinci were here now, he would be begged to explore in endless detail the meaning, intent, and motivation behind the Mona Lisa, but he would no doubt decline to be interviewed.  He would want us to think about it for ourselves.  In Chinese, there is a wonderful idiom (言多必失), which essentially means: when words are abundant, there is sure to be a loss.

​​

Supreme Court and the Three Stooges
Amy Coney Barrett
Kavanaugh and beer
Larry Fine

​

MICRO-PHONIES STOOGE TO CONGA

 

       The 1936 Three Stooges short, “Disorder in the Court” is a classic.  The mayhem includes a talking parrot that escapes and must be caught, a man having his toupee shot, a scandalous dance scene, Moe swallowing a whistle and becoming a human calliope, and half the jury getting knocked unconscious with a gavel.  The courtroom misbehavior of Moe, Larry, and Curly can be forgiven because they were entertainers and comedians.  The zany antics of this trio, even if they had taken place in a real courtroom, would have been far less destructive to life, liberty, and the rule of law than the recent behavior of the highest justices in the land, from whom we expect slightly better behavior.  If Moe replaced Alito, Curly replaced Barrett (Amy Curly Barrett?), and Larry replaced Thomas, there might be more eye pokes and pie fights, but in the long run we would probably get far more decorum, good intentions, and reasonable decisions for the American people.  Unfortunately, there are now six stooges on the court.  We would need to bring back Shemp and Curly Joe to help out, but that would still leave us in need of one more replacement (Benny Hill, perhaps?).

​

       To simplify things and avoid the most incendiary topics, this segment is not going to be about the Supreme Court’s jaw-dropping interference in politics or their scandals involving corruption, gifts, criminal lies to Congress, or strong personal biases that should disqualify a justice from serving.  Let’s limit the scope of this rant to (1) the basic problems surrounding the institution itself, (2) some of their more egregious rulings, and (3) a plan for a better way to run our legal system.

 

       The size and structure of the Supreme Court are set by Congress, which is composed of partisan politicians.  A majority party in control of Congress could add more justices and allow a president to pack the courts, or play games with the appointment procedure.  Just look at how Merrick Garland’s nomination by Obama was ignored for months, ostensibly because it was an election year, but then Amy Coney Barrett’s nomination was rammed through right before the 2020 election.  How is this remotely constitutional?  Most justices to the Supreme Court are nominated based on lists created by extremist think tanks like the Federalist Society, not to create a representative balance, but to dominate the opposition and reshape the political landscape along the most extremist lines.  This is why Barrett’s name immediately came to mind: because she is like a real-life character from The Handmaid’s Tale.  Due to the use of this litmus test, the court has been completely transformed.  Under the guise of interpreting the law, the court has been redefining the role of government based on the radical ideologies of a small minority.  Justices to the Supreme Court are selected by the President, which is far too much power to be invested in a single person.  Supreme Court justices serve for life, which is another huge problem, and the current game is to appoint the youngest, most radical justices possible, so they can push one party’s agenda and control the nation’s culture for as long as possible.  Supreme Court justices can choose to retire when the President belongs to the same political party they do, ensuring the type of successor they will get on the court.  Moreover, the Supreme Court can select which cases to accept and which to decline, when and how to do this, and what to say or not say about their reasoning or lack of reason.  They can take their own sweet time when it suits them to slow down a process.
They can reframe the question posed by a case to deliberately make it sound more complex than it really is, and then send it back to a lower court to untie the knot that they themselves created.  There is no accountability, oversight, or input from the people of the nation who are forced to live with the outlandish rulings of an institution gone bad.  The way the framers envisioned it, the Supreme Court was supposed to be a council of wise elders who were beyond reproach.  At this point, it has basically lost all credibility with the American people.  The majority of current Supreme Court members are knuckleheads (as Moe would say), and the foul odor of their ill-gotten appointments, lies, lack of conscience, and blatant, holier-than-thou judicial activism has made the institution an embarrassing anachronism from the 1700s.  When it suits them, they adhere to a strict constitutional interpretation, but when that fails to obtain the results they want, they conjure up any other bogus argument that achieves their objectives, no matter how incompatible such an argument may be with the Constitution or the intent of the framers.

 

       Let’s review some of the Supreme Court's most glaring disasters:

 

       Dred Scott v. Sandford, 1857 - When asked whether or not Dred Scott and his wife should be declared free because their owner had temporarily resided with them in a free state, the court went above and beyond to show its allegiance to slavery.  Chief Justice Taney, a slave owner from the South, wrote a decision that turned the Constitution on its head.  The court ruled not only that Dred Scott, as an enslaved person, lacked any standing to sue in court, but that slavery was protected everywhere by the Fifth Amendment, which stated that an owner could not be deprived of their property except through due process.  This meant that no law could make slavery illegal anywhere, effectively turning all free states into slave states.  Had cotton grown in the North, and had everyone accepted this ruling, slavery would have instantly expanded and might never have ended.

 

       The Civil Rights Cases, 1883 - The Supreme Court found the Civil Rights Act of 1875 to be unconstitutional.  In an incredible feat of mental gymnastics, the court decided that the equal protection clause of the Fourteenth Amendment did not apply to private individuals, as all sane people believed, but only to the actions or laws of state governments.  This meant the federal government would allow racial discrimination to continue unabated.  The people who were meant to be protected by the Fourteenth Amendment were now oppressed by it.

 

       Santa Clara County v. Southern Pacific Railroad, 1886  - The court twisted the Fourteenth Amendment’s equal protection clause, originally meant to protect formerly enslaved African Americans, to grant legal personhood to corporations.  From this point on, businesses have been treated as legal persons (with freedom of speech, freedom to use their money to influence political decisions, et cetera), although they are of course not real people and need no such protection.

 

       Plessy v. Ferguson, 1896 - The Supreme Court determined that African-Americans could be subjected to segregation under a so-called “separate but equal” policy, although there was in fact no equality to be found.

 

       Buck v. Bell, 1927 - The court decided that people deemed mentally incompetent could be forcibly sterilized for the public good.  This ruling encouraged eugenic policies and led to the routine sterilization of mental patients and others the authorities deemed undesirable, Native Americans in particular.

 

       Korematsu v. United States, 1944 - The court found that the internment camps meant to hold Japanese-American citizens during World War Two were legal.

 

       Bush v. Gore,  2000 - In this 5-4 case, the Supreme Court determined the outcome of the presidential election by deliberately depriving thousands of people of their right to have their votes counted on a straightforward ballot.  Notice that Amy Coney Barrett and Brett Kavanaugh were both working on the side of the Bush team.  

 

       Citizens United v. Federal Election Commission, 2010 - The Supreme Court ruled that financial contributions to any political candidate, party, or political action committee were a form of speech, and therefore could not be restricted.  This allows the rich to buy campaign ads and exert an unfair influence over an election.  Rather than debate policies in an equal forum, vast amounts of untraceable “dark money” now flood local elections to smother a candidate with negative ads, brainwashing the voters to turn against them.  Some wit observed with a clever play on words: “If money is speech, then speech isn’t free.”

 

       McDonald v. Chicago, 2010 - Although the Second Amendment clearly states that the reason for the right to keep and bear arms is the necessity of “a well-regulated militia,” the court determined that no city can ban individuals from owning guns regardless of their lack of participation in a militia.  Cities with high crime rates are thus stripped of the ability to make reasonable laws to keep guns off the streets.  Through their own creative interpretation of the Second Amendment, the Supreme Court has rewritten it for us.  Since when was that their job?  Doesn’t that require a Constitutional Convention?

 

       Epic Systems Corp. v. Lewis, 2018 - The court ruled that companies can prohibit workers from filing class action lawsuits and require them to enter into arbitration.  Justice Ruth Bader Ginsburg wrote the dissenting opinion, calling the decision “egregiously wrong.”  She said the ruling would result in “huge under-enforcement of federal and state statutes designed to advance the wellbeing of vulnerable workers” because of the difficulty in pursuing claims individually.  She wrote: “By joining hands in litigation, workers can spread the costs of litigation and reduce the risk of employer retaliation.”

​

       Rucho v. Common Cause, 2019 - The court determined that any challenge to a state district map on the basis of partisan gerrymandering is a political question and therefore non-actionable by the court.  Roberts said that some amount of partisanship was “inevitable,” although a requirement could easily be made to have computers draw lines that made voting outcomes perfectly match the ratio of party affiliations in a state.  Instead of declaring that citizens cannot be deprived of their right to participate fairly in an election, the Supreme Court passed the buck, essentially condoning gerrymandering.

 

        Shinn v. Ramirez, 2022 - The court ruled that a federal court may not consider evidence that would exonerate a person on death row if that evidence was not introduced earlier due to the incompetence of state-appointed lawyers.  It is now considered better for 100 innocent people to be put to death than to allow one guilty person to go free.

 

       New York State Rifle and Pistol Association v. Bruen, 2022 - In this 6-3 case, the Supreme Court determined that it is the Second Amendment right of citizens to carry a pistol in public.  The idea is that the court should evaluate the regulation not in consideration of the public good, but in light of the “historical tradition of firearm regulation.”  Apparently, Supreme Court decisions do not need to take the public good into consideration.

 

       Dobbs v. Jackson Women’s Health Organization, 2022 - To be fair, the lack of congressional legislation granting women the right to decide whether or not to continue a pregnancy, at least in the first trimester, is what made the Supreme Court so instrumental in determining the law regarding abortion rights in the first place.  Since the public did not demand it, and Congress did not deliver it, the opinion of the Supreme Court has been pivotal.  The 1973 Roe v. Wade decision granted women the right to decide what to do with their own bodies for the first three months of pregnancy, after which other laws and restrictions could apply, as determined by state governments.  This landmark ruling said that women had a constitutional right to an abortion in the first trimester.  Some two-thirds of Americans were satisfied with this status quo, but the new conservative supermajority was not.  In their ruling, they said that the right to an abortion was not mentioned in the Constitution, nor was it rooted in “the history and tradition of the Nation.”  The notion of “intimate and personal choices” that are “central to personal dignity and autonomy” was dismissed.  They ignored all history and tradition from 1900 onward, and focused instead on earlier laws that outlawed abortion at any stage.  The court’s decision stated, “Without any grounding in the constitutional text, history, or precedent, Roe imposed on the entire country a detailed set of rules for pregnancy divided into trimesters much like those that one might expect to find in a statute or regulation.”  And what is the result of throwing this question back to the states to decide for themselves?  States are arguing over a fair cutoff date beyond which a baby has the right to be born, and often coming up with a deadline of 15 weeks, which is two weeks longer than the length of a trimester as determined by Roe.
Tens of thousands of women in states that have banned abortion since the ruling have been forced to travel to receive health care, many lives have been put in jeopardy because laws require that a woman’s life must be in danger before an abortion can be performed, minors have been forced to carry rapists’ children to term, women have been subjected to C-sections to remove dead fetuses so doctors can pretend they were attempting to deliver a live baby (so they can protect themselves legally if prosecuted for breaking the law), and all data indicates that there has been a massive wave of personal and economic suffering as a result.

 

       Glacier Northwest v. International Brotherhood of Teamsters Local Union No. 174, 2023 - The court upended decades of union policy by ruling that a company can sue a union for damages caused by a strike.  This puts more power in the hands of companies and allows them to threaten unions with punishing fines in addition to their lost wages for exercising their right to go on strike.

 

       303 Creative v. Elenis, 2023 - In this 6-3 decision, the court found that a web site designer had the right to refuse to make web sites for marriages if the couples were gay, because it went against her religious beliefs.  The truly shocking thing about this case is that the court heard it at all.  In order to bring a lawsuit, a person must have standing, but in this case, the web site designer in question had not been sued and she was not doing any work on anyone’s wedding.  She was merely thinking about beginning such work, and felt constrained by the law, so she preemptively sued the state.  Normally such lawsuits are dismissed for lack of standing, meaning no one has been harmed and there is no grievance for the court to settle.  It seems abundantly clear that this Supreme Court is on a mission to change culture and is cherry-picking which cases to hear, irrespective of whether or not the litigants even have a right to be in court in the first place.  Justices are supposed to determine the outcome of cases as if holding up balanced scales while wearing blindfolds, like the Greek goddess Themis.  They are not supposed to plan out what acts of judicial activism they intend to perform, then find a case that gives them an excuse to make a ruling they had in mind all along.  It seems their fingers are now in every pie.

 

       Alexander v. South Carolina NAACP, 2024 -  The court reversed a lower court’s unanimous decision that the map of South Carolina congressional districts was racially gerrymandered and had to be redrawn.  The court is apparently fine with racist gerrymandering.

 

       Garland v. Cargill, 2024 - In a 6-3 decision, the court decided that the federal government cannot ban bump stocks, devices that effectively turn semi-automatic rifles into machine guns.  The logic behind this is that a bump stock does not literally make a semi-automatic rifle into a machine gun, even though the practical function of the device is to do exactly that.  In a country with such severe gun problems as we currently have, where tens of thousands of people die every year from the illegal use of firearms, it is only common sense to ban bump stocks.  To split hairs over the definition of an object based on its construction and not its use or lethality is the kind of abstruse, pedantic exercise that makes sense to dwellers in the clouds, not real people of flesh and blood who have families that need to be protected.  The next time a mass shooting is performed with a bump stock-equipped gun, we will know who to thank.

 

       Department of State v. Muñoz, 2024 - The court decided in a 6-3 decision that citizens who marry foreign spouses do not have the right to sue the government, even for so much as an explanation, when their marriage partners are denied entry into the country.  The majority opinion made it sound as if allowing such official requests for an explanation would be a complete capitulation by the U.S. Government to allow narco-terrorists to enter our country at will through fake or unwise marriages, when no such issue was ever under debate. 

 

       Snyder v. United States, 2024 - In another 6-3 decision, the court decided that prohibitions on rewards and bribery for government officials do not apply if the “gratuity” (we are no longer calling it a bribe, kickback, or payoff) is given after the favor is performed.  In the case in question, the mayor of Portage, Indiana awarded a contract worth millions of dollars to a trucking company and later showed up demanding $15,000 for the favor.  They agreed on a price of $13,000.  When charged with accepting bribes, the mayor unconvincingly claimed he had taken the money as a “consulting fee.”  The mayor was found guilty.  The Supreme Court reversed this decision and threw out Snyder's conviction.  In the majority opinion, Kavanaugh wrote that this $13,000 transfer was basically a tip, no different from a gift card, a lunch, a plaque, a book, a framed photo, etc.  He wrote that payments must be accepted with “a corrupt state of mind” in order to count as illegal bribery.  This will be a difficult thing to prove in court.  As if the government wasn’t corrupt enough already, now all civil servants have the right to demand and accept “tips” of as-yet-unspecified maximum amounts for doing their jobs, so long as the quid pro quo arrangement is not explicitly stated in writing or on a recording.  The only other rule is that an official cannot legally accept the payments until after services have been rendered.  Yes, paying a public official like a mayor or governor (or president?) a sum of $13,000 for a job well done is no longer frowned upon.  It is basically the same as tipping in a restaurant.  This will definitely not lead to officials auctioning off their services to the entities that promise to kick back the biggest rewards, right?
Is it remotely possible that this pro-gift stance of the conservatives on the court has something to do with Justice Thomas accepting over four million dollars (and Alito taking over $100,000) in gifts from billionaires with interest in the outcomes of cases before the Supreme Court?  In her blistering minority opinion, Justice Ketanji Brown Jackson laid bare what a glaring error the court has made with this decision.  She writes that such an “absurd and atextual reading of the statute” that calls payments of up to $13,000 to officials “gratuities” instead of “rewards” is an interpretation that “only today’s Court could love.”  Ouch!  Speaking as the voice of reason, she adds, “Officials who use their public positions for private gain threaten the integrity of our most important institutions. Greed makes governments at every level less responsive, less efficient, and less trustworthy from the perspective of the communities they serve.”  What used to be common sense is now a dissenting opinion: a voice in the wilderness.  If we look into the mirror of history, corruption also played a major contributing role in the final collapse of the Roman Empire, whose death spiral looks increasingly like a direct reflection of our current situation.  Relative lack of corruption among public officials is a major reason why the U.S. has normally outperformed other nations economically, but it appears this may no longer be the case.

​

       Loper Bright Enterprises v. Raimondo, 2024 - In another 6-3 decision, the court overruled the 1984 Chevron decision, which stated that courts should normally defer to federal regulatory agencies in the interpretation of the law.  This grants far more power to corporations and presidentially-appointed federal judges, and takes power away from career professionals at the EPA, FDA, FCC, FAA, SEC, OSHA, and other regulatory agencies.  This is a massive re-allocation of governmental power that will weaken standards for air, water, food, consumer products, transportation safety, and worker safety.  It will open the floodgates to challenges of all regulatory agency policies, to be decided by the whim of each judge, and require clarification by higher courts, including the Supreme Court.  It will turn a relatively stable situation into the Hollywood version of the Old West.  The majority opinion states, "Perhaps most fundamentally, Chevron’s presumption is misguided because agencies have no special competence in resolving statutory ambiguities.  Courts do."  Justice Kagan's dissenting opinion states: “a rule of judicial humility gives way to a rule of judicial hubris. . .  In one fell swoop, the majority today gives itself exclusive power over every open issue — no matter how expertise-driven or policy-laden — involving the meaning of regulatory law.  As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar."  She says that the court's new supermajority “disdains restraint, and grasps for power.”  She is right: this is a brazen power grab that will increase corporate profits but adversely affect the lives of everyone in the country for as long as the ruling stands.

 

       Trump v. United States, 2024 - When it comes to the argument that presidents have complete immunity, the court could have either summarily affirmed the lower court's unanimous decision and allowed trial to proceed, or, if there was some special reason to do so (which there wasn't), they could have heard the case on an expedited basis and ruled in a matter of days.  Instead, they first refused to take the case on an expedited basis, then accepted it later, and then dragged their feet until the very last day of the session to make their ruling.  During oral arguments, Justice Alito, stooping to consider the preposterous notion that a president should have complete immunity and could order the assassination of political opponents with impunity, asked whether or not a president who knew they would be punished after their term in office might just abuse the law even more.  All those working in law enforcement and the judicial system could not believe their ears when they heard this.  By such logic, we might as well abolish all laws so that criminals aren’t pressured to do even worse things because they know they are on their way to jail.  To have a justice on the Supreme Court ask such inane questions is beyond belief.  All reputable legal scholars have pointed out that the court's partisan handling of this case is so transparent that it strains credulity.  On July first, the last day of their session, the court finally ruled that presidents have absolute immunity for official acts but not for unofficial acts, the determination of which is for lower courts to decide.
All they have done is waste six months restating the obvious in complicated legalese that makes it nearly impossible to hold anyone in office accountable for the crimes they commit, since crooked office-holders will find ways to claim that their illegal acts were done in conjunction with their official acts (like communicating with other officials), for which they have absolute immunity.  Defenders of the court say this ruling is necessary to prevent presidents from being dragged into court for collateral damage done as a result of policy decisions, but when has this ever happened in the last 250 years?  Not one single time.  There is really no need for this protection.  If Richard Nixon had enjoyed the protection of this court and this ruling, he never would have been forced from office 50 years ago.  If an investigation had begun, he would have successfully stonewalled by claiming that everything he did was part of an "official act."  Why should the President, whose duty is only to see that the laws are faithfully executed, need to be immune from subverting those same laws?  Nixon said, "Well, when the President does it, that means that it is not illegal," and now the Supreme Court has made Nixon's fantasy into fact.  In her dissent, Justice Sotomayor wrote, "Today’s decision . . . reshapes the institution of the Presidency.  It makes a mockery of the principle, foundational to our Constitution and system of Government, that no man is above the law."  This unbelievable ruling not only dismantles decades of established protocol, it dismantles the whole constitutional balance and begins a new era of authoritarianism.  This ruling shreds the Constitution.  It was supposed to be a ruling for the ages, but it is not worthy of the Stooges.  

​

       The court has damaged its own legitimacy to the point of no return.  A plan that sounded good at the Constitutional Convention back in 1787 now looks as well thought-out as the first draft of "Romeo and Ethel, the Pirate’s Daughter" from the movie Shakespeare in Love.  When Amy Coney Barrett became a Supreme Court Justice and replaced Ruth Bader Ginsburg in a last-minute rigging prior to the 2020 election, she made a point of declaring that she, Gorsuch, and Kavanaugh were not just political hacks.  Subsequent experience has shown that this is exactly what they are, which is not to say that they are any worse than Thomas, Alito, and Roberts.  At the end of 2021, Justice Sotomayor asked the world, “Will this institution survive the stench that this creates in the public perception that the Constitution and its reading are just political acts?”  Chief Justice Roberts recently chided the critics, saying, “Simply because people disagree with opinions, is not a basis for questioning the legitimacy of the court,” but a whopping seven in ten respondents to a June 2024 AP poll said they believed the court was now guided by ideology.  The three in ten who did not agree were apparently the ones who pay no attention to current events.  How are we supposed to take the court as anything but an embarrassment after such transparent partisanship and lack of professionalism?  The conservatives on the court were picked to be chess pieces in the culture wars, pure and simple.  These self-important robed clowns aren’t even making a pretense of following the Constitution anymore; they are just making things up out of whole cloth.  Unelected officials who accept the role of dictator for life as if it is their God-given right and ignore the will of the people run the risk of instigating such sweeping changes that the coveted, privileged positions they now take for granted might disappear entirely. 
We are supposed to have three co-equal branches of government, but when the judicial branch is headed by a group of people who, instead of being neutral arbiters as intended, are hand-picked zealots, how can either the Supreme Court or the Constitution survive much longer?  When the highest referees in the country wear the uniform of one political party and openly accept gifts from the billionaire class, it is pretty much game over.  

 

       The entire way in which we conceive of our legislative and judicial processes needs to change.  Early in its history, the Supreme Court claimed the ability to do “judicial review,” which means the court can find a law unconstitutional, even though this power is not mentioned in the Constitution itself.  The fact that a particular law may not be fully compatible with the Constitution (and there is wide room for varying interpretations of this archaic document with its vague wording) does not mean that the original need for the law no longer exists.  By separating the court from the legislative body that creates laws, we have a situation akin to a company with a factory that produces a cell phone and a separate quality control department that does a safety test years later, after millions of products have been sold, and winds up recalling the model entirely.  The need for the cell phone still exists, and of course people have already built their lives around the apps, plugs, and accessories that fit with the devices they have been using.  This causes a huge amount of needless disruption.  Add to that the fact that the quality control department is full of ideologues who are not interested in real-world problems or the needs or opinions of flesh and blood citizens: the ones who do most of the working and living and dying in the country.  The separation of powers that our Constitution provides was a huge improvement over the Age of Kings, but in the computer age there are far better ways of doing things. It was assumed in the late 1700s that there must always be a separation between the public (the rabble) and the elected or appointed elite (assumed to be gentlemen of higher character) who would fill the three branches of government (and rule over us, supposedly making the far-sighted, proper decisions that the rabble itself was not capable of making).  
With the technology of today, this continued separation between the governed and those who govern is unnecessary and unwise.  Just as we no longer need politicians to lead us as "constitutional monarchs" in the executive branch, we no longer need fallible, corruptible, and often psychotic Supreme Court justices to decide for us what our own laws mean and how they are to be applied.  The notion that a chief or a king needs a council of "wise men" to make good decisions is so outmoded as to be ridiculous.  The present application of this notion – the assignment of low-caliber ideologues who understand neither the core principles of the law nor the Constitution – amounts to saying that the straw-headed Scarecrow had a brain simply because he was handed a diploma by the Wizard of Oz.

 

       Just as a direct democracy would allow us to make laws and policies ourselves by popular vote, we could also modify these laws and policies as problems arise.  When conflicts over the interpretation of laws and policies wind up in court, the final arbiters need not be a panel of appointed judges, but could be a panel of millions of citizens, or hundreds of millions.  The nuts and bolts of how all this would work – how a judge (or referee, or referees) would be arranged and held to account, or how juries would be selected or opened to participation by members of voluntary committees – could all be fine-tuned based on suggestion, discussion, and vote.  If we can use encryption and blockchain technologies to buy cryptocurrencies or do bank transactions with our cell phones, it should be possible to use them (or some other device, with votes tabulated electronically in a verifiable manner) to interpret laws and policies as well.  What we need now is rewriting, editing, and road-testing by citizens who are themselves the lawmakers and adjudicators.  I do not pretend to have the answers for every particular detail that will arise; that would be impossible for any one person anyway, since this must be an ever-evolving system that responds to the needs of the future in real time.  It is said that two heads are better than one, and in this case it will take all of our collective brain-power to develop a representative, effective, and equitable system and to maintain its operation over the long run.  If we all have a voice in deciding the laws and the application of the laws under which we live, we will not always agree with the decision of the majority, but we will be able to live with these decisions more easily.  We will be comforted by the knowledge that the wishes of all were taken into account, and that in the future, policies might change as people change their minds.  
We will no longer be in a situation where six extremists have the power to make a ruling that alienates two-thirds of the citizens of the country (220 million of them), who must then abide by their bad decision.  The fully democratic judicial system of the future will give power to the people.  It will function more logically and effectively than any previous system.  When looking at the past, later generations will have one main question: why did it take us so incredibly long to figure this out?  We must now look in the mirror and ask ourselves, in the immortal words of Curly Howard, "Is everybody dumb?"
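       The verifiable electronic tallying imagined above can be illustrated with a toy sketch.  This is not a real voting system – secure elections would also require voter anonymity, eligibility checks, and resistance to coercion – but it shows the core blockchain idea in miniature: each ballot is chained to a running hash, so any citizen can recompute the whole chain and detect tampering.  All of the function names here (chain_votes, verify_chain, tally) are invented for this illustration.

```python
import hashlib
import json

def chain_votes(votes):
    """Build a tamper-evident hash chain of ballots (a minimal
    blockchain-style ledger).  Each entry's hash covers the ballot
    and the previous entry's hash, so altering any past vote
    invalidates every later hash."""
    chain = []
    prev_hash = "0" * 64  # genesis value for the first link
    for ballot in votes:
        record = {"ballot": ballot, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        chain.append({**record, "hash": digest})
        prev_hash = digest
    return chain

def verify_chain(chain):
    """Recompute every link; returns True only if no vote was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        record = {"ballot": entry["ballot"], "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

def tally(chain):
    """Count the ballots in a verified chain."""
    counts = {}
    for entry in chain:
        counts[entry["ballot"]] = counts.get(entry["ballot"], 0) + 1
    return counts
```

A production system would add digital signatures and a distributed ledger, but the verification principle – that anyone can independently recheck the tally – is the same.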

​

Nathanael Greene

 

NATHANAEL GREENE'S WINNING FORMULA

 

       When faced with a situation that seems hopeless, how do you turn it 180 degrees so that victory becomes inevitable?  Rather than succumbing to despair and allowing ourselves to be defeated, we can always find a way to win in the end if we play our cards carefully.  In the last phase of the Revolutionary War, the British held a position that appeared invincible.  The patriots had just suffered the biggest loss of the war, and it looked as if their cause was becoming hopeless.  Greene found the skillful means to reverse this trajectory and snatch victory from the jaws of defeat.

 

       The British had first attempted to quell the rebellion in New England with a head-on attack, but after their defeat at Saratoga and the subsequent French entry into the war, they could count only on fortified bases in New York and Canada.  The rest of the country was openly contested.  By using their immense navy to prevent trade between the United States and the rest of the world, they could squeeze the life out of the American economy the way a python crushes its prey.  Then the British leaders devised a new strategy: they would invade the sparsely-populated South in strength, rally Loyalists to their cause, build a chain of impregnable forts, then work their way north to stamp out all opposition, essentially rolling up the land like a carpet.  It was beginning to seem to the colonists as if the war would never end.  People in America were seriously questioning their judgment in having challenged the might of British arms.   

 

       The British opened the southern campaign by capturing the American stronghold of Charleston, after which General Cornwallis was left in command of the South.  Some 5,000 American soldiers surrendered without firing a shot.  Congress then sent the victor of Saratoga, General Horatio Gates, south to take command of the Continental forces already serving there under Johann de Kalb.  Gates and De Kalb faced Cornwallis at the Battle of Camden in a set-piece engagement.  Gates seriously overestimated the ability of his militia to withstand the attack of the British regulars, and they wound up fleeing the field.  De Kalb stood his ground with a group of Continentals and fought hand-to-hand until they were bayoneted to death.  Gates fled in humiliation, so stunned that he did not stop riding for days.  After the battle, there were reportedly more colonists in the South in arms to support King George than George Washington.  According to one American officer, the fiasco at Camden, coming on the heels of other recent setbacks, brought on "a dreadful gloom which now overspread the United States."  Into this dismal situation, where all hope appeared to be lost, Washington sent Nathanael Greene.  

 

       Greene began with the assumption that the majority of Americans in the South actually supported independence, and that they would show their true colors if given the opportunity.  Next, he made arrangements with American colonists for their voluntary support, ending the policy of forcibly appropriating supplies.  Once Greene’s men were well fed and adequately clothed, he began to attack his superior foe.  The English forts were spread out and required resupply convoys at regular intervals.  These became Greene’s special targets.  He planned hit-and-run raids, retreating so quickly that the British could not catch him.  This enraged Cornwallis and exhausted his forces in fruitless pursuit.  In the meantime, a Patriot victory in the Battle of King's Mountain made men think twice about enlisting on the side of the English.

 

       Greene then broke a cardinal rule of warfare: he divided his forces in the face of a larger foe.  He dared Cornwallis to chase him north while he sent half of his army west, commanded by the intrepid frontiersman Daniel Morgan.  Morgan was pursued by the ferocious Banastre Tarleton, who ordered his men to execute any rebels who tried to surrender.  Greene’s trust in Morgan paid off when Morgan masterminded the Battle of Cowpens.  Morgan chose a hill with a bend in the Broad River behind his forces, so the militia knew that running away was not an option.  He arranged his snipers in the very front, with lines of militia next.  He assured his militiamen that they only had to fire three times before retiring to a safer spot behind the Continentals.  He hid his cavalry behind the hill, so the British did not even know they were there.  First, Morgan’s snipers took out key elements of the lead cavalry.  The snipers retreated to join the main body of men as the bulk of the British infantry approached.  The militia fired and reloaded twice.  Then, after their third volley, they moved to the rear as ordered, with the British fast approaching.  The line of Continentals slugged it out with the British at a distance, and then an order to re-form the line was misunderstood as an order to retreat.  As the Redcoats saw this movement, they imagined the Americans were on the run.  They rushed forward, only to be devastated by close-range fire as the Continentals turned and let loose a volley at a distance of only a few dozen yards.  At the perfect moment, the American cavalry exploded from behind the hill and encircled the British.  The American lines pursued the British down the hill, and the battle was over in a few minutes.  Most of the British were either killed or taken prisoner: 110 men killed, 200 wounded, and 712 captured.  In the course of this short battle, Cornwallis had lost one quarter of his forces.  
The Americans only suffered 25 dead and 124 wounded.  It was a masterpiece of strategy on the American part, helped a great deal by such incredible luck that the Patriots recognized it as divine providence.  From the British perspective, it was a devil of a whipping.  The date was January 17, 1781.

 

       The victorious Americans hurried to rejoin Greene’s main force on the road north, while a furious Cornwallis hastened to overtake them.  In order to move more quickly, Cornwallis ordered his men to stop and burn all their excess baggage from January 25th through the 27th.  On the 28th he resumed his northern march.  The race was on.  Greene made it to the Dan River first, took his men across, and saw that the last boats full of his men were safely on the northern bank just as Cornwallis’s forces arrived on the morning of February 15.  Cornwallis had destroyed his own provisions and pushed his men on a punishing 18-day forced march for nothing.  With his men weary and demoralized, he ordered them to retrace their steps back to the safety of their forts.  In a move that would make Sun Zi proud, Greene took only the time he needed to build up his forces with new volunteers, their spirits raised by the victory at Cowpens, then re-crossed the river and pursued Cornwallis.

 

       The opposing armies met at the Battle of Guilford Courthouse.  American forces now outnumbered the British, and they fought toe-to-toe for two hours.  The battle was described as long, obstinate, and bloody.  Finally, the American army began to gain the upper hand.  They were on the verge of sweeping the entire field when Cornwallis, to prevent being routed, chose to fire grapeshot from his cannons, essentially turning them into giant shotguns.  He deliberately cut down some of his own men in order to scythe through the American lines.  Out of prudence, General Greene was forced to retreat.  Technically it was a British victory, since they controlled the battlefield when the day was done, but both sides knew that the hunter had become the hunted.  It was a standoff.  The British were at a distinct disadvantage, however, because they were running low on supplies and surrounded by miles of hostile territory.  Cornwallis reasoned that he would fall back on the one trump card the British always possessed: their navy.  Cornwallis moved to the coast and was resupplied at Wilmington.  Then, since he was already so far north, it made sense to send him even further north to assist British forces in Virginia who were fighting Lafayette.  Meanwhile, Greene returned to South Carolina and quickly dismantled the remaining British forces there.  

 

       Greene had already disrupted the English strategy of conquering the colonies by systematically rolling them up from the South.  From the British point of view, one decision had led logically to another, yet by forcing the English to play whack-a-mole, Greene had produced a clear shift in momentum.  They were now only reacting to American actions, not steering events themselves.  After skirmishes with Lafayette in Virginia, Cornwallis was ordered to move to the edge of the Chesapeake and create a naval post that could shelter British warships.  In retrospect, this was the worst possible decision.  Washington and Rochambeau, who had been waiting for just such an opportunity, quickly coordinated with the French Navy and rushed French and American forces to the area.  Cornwallis was surrounded by land and sea.  The siege of Yorktown led to Cornwallis's surrender and the end of the war.  According to legend, the British band played “The World Turned Upside Down” as they laid down their arms at the feet of the Franco-American army.  Above all, it was the indomitable Nathanael Greene who made possible the blunder at Yorktown that led to final American victory.  It was his dogged and cagey campaign style that thoroughly discombobulated Cornwallis and lured him north in the first place.

 

       The take-away lesson from Greene’s story is that if we play our cards right, we can take what appears to be a hopeless situation and flip the script to score a long-lasting victory.  This means starting with a strong foundation of belief, then doing what we can to build up our resources and capabilities.  We have to find the right side of history, appeal to the goodness in people, be realistic about our abilities, look at the big picture, and keep our eyes on the prize.  The most important key to victory was Greene's certainty that he would win.  He fixed his mindset with the firm conviction that no matter what happened, he would be victorious.  He knew that the attrition and loss of will to fight on the British side would lead to their ultimate defeat.  In Greene’s own words, “We fight, get beat, rise, and fight again.”  If we look for the underlying good in all people and agree on the common ground which unites us – a desire for peace, freedom, and justice – we cannot lose.  All we have to do is find the right approach and be unwavering in our efforts.  We must be wise, far-sighted, resilient, and resourceful.  Opportunities to remake the world for the better will always present themselves to those determined to save it.  No matter what happens, never despair: the outlook is usually darkest right before the dawn.

​


 

THE ANXIOUS GENERATION

 

       Jonathan Haidt’s new book, The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness, details the negative impact that the internet and social media have had on young people since smartphones appeared around 2008.  Haidt lays out the evidence for the damage that this new technology has wrought, with data presented in numerous graphs.  He analyzes the various aspects of the problem in detail and finally shares his recommendations and proposed solutions.

 

       More kids today report that they feel unwell than ever before, but some have questioned whether this is merely because there is more self-reporting than there used to be.  Haidt goes out of his way to prove that, by any objective measure, things deteriorated between 2010 and 2020, and that social media and cell phone use is the cause.  First of all, the number of hours kids spend online has risen precipitously.  Along with that, the percent of U.S. teens who reported having at least one major depressive episode per year has increased dramatically.  For girls, the rate increased 145% between 2010 and 2020, going from about 12% to almost 30%.  For boys, the rate increased 163% between 2010 and 2020, going from about 4% to about 12%.  Among college students, anxiety and depression both increased over 100% between 2010 and 2020, with ADHD increasing 72%.  Hospital visits for kids' accidents decreased dramatically, which would normally be a good thing, except that it seems to correspond to kids becoming more sedentary and racking up too much screen time.  On the other hand, emergency room visits for self-harm increased 188% for girls (to about 450 visits per 100,000 people in the total population) over this same period, and 48% for boys.  Suicide rates for boys aged 10-14 increased 91% during this time frame, and for girls of the same age range they increased 167% (although boys have a higher rate overall).  The amount of time 15-24 year-olds spend with friends has dwindled over this period from just under 160 minutes a day to just over 40 minutes a day.  As of 2019, roughly 50% of girls in 8th grade through 12th grade got seven hours of sleep or less, along with roughly 40% of boys.  To counter those who might argue that all of this is due to some other cultural change in America and not caused by these technologies in particular, Haidt shows that these problems are not restricted to the U.S.  
With a series of graphs, he demonstrates that similar trends have taken place in other Western countries where smart phone use is pervasive.
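       Note that the increases quoted above are relative changes in the rates, not percentage-point changes.  A quick calculation shows how the girls' numbers fit together (the 29.4% endpoint is back-computed for this illustration, not a figure taken from the book):

```python
def percent_increase(old_rate, new_rate):
    """Relative percent change from an old rate to a new rate."""
    return (new_rate - old_rate) / old_rate * 100

# Teen girls' major-depression rate: about 12% in 2010,
# rising to almost 30% by 2020.  A 145% rise from 12%
# implies 12 x 2.45 = 29.4 (hence "almost 30%").
print(round(percent_increase(12.0, 29.4)))  # 145
```

In other words, a jump from 12% to almost 30% is an increase of roughly 17 percentage points, but a 145% increase relative to the starting rate.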

 

       Haidt says that in order to be healthy, kids have to go outside regularly and engage in play (involving some level of risk), with little or no parental oversight.  Anxiety and depression, he says, result from the combination of too much freedom in the virtual world of their phones and too little freedom of movement in the real world.  He says that college students are always scanning for dangers now, whereas before 2015, they were scanning for opportunities.  He writes that around this time, the experience of university went from feeling safe to feeling traumatizing, because students had been raised by helicopter parents in silos of safety.  Too many college students now expect a trigger warning before anything serious is discussed.  Haidt shares a letter from a college student describing how a full class of students will sit in silence before the teacher arrives, with everyone on their phone and afraid or unwilling to start a conversation with anyone else.  She writes that Gen Z friendships are shallow and their romantic relationships superficial, all largely governed by social media.     

 

       Haidt calls this change brought on by these technologies “the great rewiring of the brain.”  He lists the harms this rewiring has caused: social deprivation, sleep deprivation, attention fragmentation, and addiction.  He notes that behavioral psychologists discovered that to hold people’s continued attention, you cannot reward a person for an activity every time they perform it; otherwise it becomes boring.  There must be the possibility of a reward, but not a certainty.  This uncertainty makes the experience exciting, like gambling.  The thrill of not knowing when the reward will come leads to rising anticipation, associated with a release of dopamine.  Each reward reinforces the need for more, and soon you are addicted to the behavior.  This lesson is built into not only video games, but also apps and platforms of all kinds, to keep us hooked.  Young people’s brains are still forming, making them far more susceptible to manipulation and addiction.  Not only that, but their neural pathways are being shaped by these experiences, hence their brains are literally being wired differently than those of any generation before them.  Many young people are on their phones all the time.  Everything else seems boring and depressing, and they are essentially going through withdrawal every time they look away from their screens.  Haidt mentions how his daughter, when she was very young, had the presence of mind to ask him to take her tablet away from her because she wanted to stop looking at it but couldn’t.  If only we all had such inner strength to request an intervention.  Most of us are turning into Gollum with his “precious.”
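       The variable-reward mechanism described above can be sketched in a few lines.  This is an illustrative simulation, not code from any real app; the names and the 25% payoff rate are invented here.  It contrasts a slot-machine-style variable-ratio schedule, where the gaps between rewards are unpredictable, with a fixed schedule, where they never vary:

```python
import random

def variable_ratio(p_reward, n_checks, seed=42):
    """Simulate checking a feed n_checks times, where each check
    'pays off' (a like, a funny clip) with probability p_reward --
    the same reward structure as a slot machine."""
    rng = random.Random(seed)
    return [rng.random() < p_reward for _ in range(n_checks)]

def gaps_between_rewards(outcomes):
    """Lengths of the unrewarded stretches before each hit; their
    unpredictability is what sustains anticipation (and dopamine)."""
    gaps, run = [], 0
    for hit in outcomes:
        if hit:
            gaps.append(run)
            run = 0
        else:
            run += 1
    return gaps

# A fixed schedule (a reward on every 4th check) has no surprise:
fixed = [i % 4 == 3 for i in range(100)]
```

On the variable schedule the gap lengths jump around unpredictably, while on the fixed schedule every gap is exactly three checks long – which is precisely why the former is engineered into apps and games, and the latter would quickly bore us.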

 

       He recognizes that this is a “collective action problem” that no one person or family can solve alone.  All right, but what should we do about it?  Haidt begins by mentioning “voluntary coordination” between kids, parents, and schools, “social norms and moralization” within a community, “technological solutions” like lockable pouches, and “laws and rules.”  Okay, that's all very nice, but how far can this get us?  Haidt then offers up his major solutions.  Number one: he suggests that corporations need to be more responsible.  Ha, ha, very funny.  This will happen after magical rainbow unicorns become the CEOs of all the major corporations.  Number two: he suggests raising the age of “internet adulthood” to 16.  Good luck with that.  Number three: facilitate age verification.  Yeah, right.  Number four: encourage phone-free schools.  I hate to break it to him, but schools have been trying that for years; without airport security, it isn’t happening.  One idea that might help is requiring every student to turn in their phone upon arrival each day, but the mechanics and logistics here are daunting.  He also fails to consider that when districts try things like this, parents (understandably) demand that their kids have their phones with them at all times so they can reach out in case of emergency.  Then he lists some things governments can do to help give kids better real-world experiences, including: stop punishing parents for giving children real-world freedom, encourage more play in schools, design public space with children in mind, and expand vocational education, apprenticeships, and youth development programs.  Okay, these might possibly do some good, especially the last two, but unless the accessibility of cell phones and the content they provide is curtailed, this is not going to make much of a dent in the problem.  The last chapter is a list of parenting tips, which, while decent, are mostly preaching to the choir.

 

       Haidt has done an impressive job of proving his case that dramatic negative health effects can be blamed on Zuckerberg and the rest of the nouveau-riche Bond villains.  I don’t think many people doubted this was the case.  He follows in the footsteps of other authors like Johann Hari with Stolen Focus and Anna Lembke with Dopamine Nation in condemning the effects that the “internet/phone in your pocket age” is having on the latest crop of young cyborgs.  I agree with almost everything he has to say.  I think he is right to call what is happening a great rewiring of the brain.  I have seen it all firsthand over the last couple of decades.  I especially love his expression “the race to the bottom of the brain stem.”  I enjoyed his quote from the psychologist William James in 1890, that when we choose not to give our full attention to one thing at a time, we are in a “confused, dazed, scatterbrained state.”  Tell me about it.  Haidt’s conclusions are sound, his evidence is persuasive, and his recommendations are all level-headed.  Haidt mentions the impact these innovations have had on students’ ability to focus and study in school, but I think there are about 100 more books that need to be written on that subject.  The recommendation of all these books would be the same as Haidt’s: put the phones down!  What is missing from the book are solutions that will actually work under our current, real-world circumstances.  Haidt comes up with tame, bland, ineffective solutions for these problems because, like most everyone else, he thinks inside the box and incorrectly assumes that the template for the world of today will be and must be the template we will continue to use tomorrow.  These problems (and others, like global warming) are in the process of swallowing us whole right now.  
If we continue on our current trajectory and only attempt the changes Haidt suggests, they will not work at all, and the next generation will be doomed to live as cattle-like, techno-zombie consumers (not citizens) in an authoritarian, post-apocalyptic wasteland.  Take a good look around if you doubt it.  At present, the circumstances are too stacked against us.  We need to think outside the box and awaken to the fact that we, the people, have the power to change the circumstances in their entirety.

 

       The playing field has to be shifted in order for us to have a chance of winning this struggle.  As it is, for-profit, trillion-dollar companies see our kids (and us) as cash cows.  They have too much power to be brought to heel under the current economic regime, and parents living in separate houses in fragmented communities are far too disunited a group to act in unison to do anything substantial vis-à-vis these mega-corporations.  Meta, Google, TikTok, Apple, Samsung and the like will continue to use their trillions to come up with ways to keep us and our kids addicted to our screens, no matter how much damage it does to our mental or physical health, because this is how they made their trillions in the first place.  They will buy our politicians, oppose us with advertisements, divide us, and continue to do what they do best for as long as possible.  Only when we create intentional communities along the lines of cooperative socio-economics will we be able to develop groupwide norms and rules about the use of such devices and platforms.  Only when everyone in a community is on board with such a covenant can iron-clad practices be put in place that clever kids cannot circumvent.  Only then can a culture of normalcy be built in each community (on a foundation of wisdom and common sense) instead of a culture dictated by the money-grubbing schemes, apps, algorithms, and platforms made by mind-controlling billionaires who want to peddle internet heroin forever.  When kids today meet other kids, their mode of communication is to chat about online events (the virtual world is their real world), so they need to be up-to-the-minute in their knowledge of these events, which means they need to have their faces in their screens at all times.  
When kids in a co-op environment (who have grown up in the real world, not the virtual world of cyberspace) meet other kids from a co-op environment, they will talk about the positive and constructive things they have been doing and learning.  This also involves a new and infinitely more positive “great rewiring” in which kids are taught to love reading and think critically.  They will be resourceful, loving, appreciative, inquisitive, and eager to make the best contribution they can to civilization and society.  The very existence of these devices that allow the temptations of the whole world to fit in your pocket, or these industries that pump out addiction-oriented social media, games, smut, etc., is something that will need to be altered or ended altogether in the future (when we govern ourselves through a fully participatory direct democracy), but first we have to build communities that function in an organized manner according to common principles.  This collective creation of neighborhoods that function according to reason – and have the goal of raising decent, healthy, fully-capable, independent, addiction-free kids (instead of cash cows) to adulthood – won't happen by accident; it must be done according to a clear plan.  It cannot happen when each of us is hiding away in our own silo, not interacting with others.  Neighborhood communities like this will be the result of discussion, compromise, and agreement with others who live in the vicinity, the creation of charters and rules, the movement of some people to another group where they feel more comfortable, and finally, the follow-through to create an environment where people are not junkies enslaved by their phones.  This war for the mind is an integral part of the war for the future in all its aspects.  Our freedom, our civilization, our society, and our government all depend on our ability to think clearly, collect accurate information, discuss options with others, and vote democratically.  
If we cannot regulate technology in order to preserve these basic abilities, the catastrophic consequences cannot be overstated.  If such a dystopian future were truly inevitable, we might as well just put a chip in our brains and hand the controls to Elon Musk.

​

bottom of page