AND…ACTION

I’ve noticed quite a few people feeling stuck lately.  Whether they’re working toward a personal or professional goal, they describe spinning their wheels (mostly mental) with good intentions but little progress.  It’s easy to get overwhelmed and paralyzed by comparing where we are with where we wish we could be.  This feeds a negative mental loop of anger and frustration, which only holds us back.  At these times, change experts recommend taking a simple, small action.  No matter how little, building a chain of small steps can get us unstuck.  In this post, I’ll keep it small and simple with a description of an action plan, with the hope it can lead to movement in the right direction.  

An action plan is a list of tasks or steps you need to achieve your goal.  It breaks down large goals into smaller steps that build toward the larger achievement.  For businesses, action plans can be complex and detailed to address the totality of a transition.  But for most of us, the smaller and simpler we can make an action plan, the better.  All it really takes is answering the four basics: Who, What, Where and When.  The more specific, the better.  

For example, say you’ve wanted to learn Spanish in preparation for travel.  An action plan might be:

  • Who – me
  • What – using the Babbel app
  • Where – in the car and in the kitchen
  • When – when I drive and when I cook; minimum twice a day for three weeks

Research shows that this little bit of planning can actually lead to significant change.  It helps if you write it down and then track yourself.  Make a check mark each time you do the behavior, and set a finite end.  If it’s open-ended, it tends to fade over time.  At the end of the time frame, evaluate your progress and then recommit to a new action plan.  Fresh action plans tend to re-energize us and allow for adjustments based on how things have gone and what you’ve learned about your tendencies.  

A key to an effective action plan is choosing the right behavior.  If it’s too challenging, you’ll get discouraged.  If it’s too small, it won’t bring satisfaction.  Also make sure the action you’re going to take will actually move you closer to your goal!  

Action plans are a way to set ourselves up for success.  In choosing a goal and thinking through the four W’s, we take away the contemplating and negotiating we tend to get lost in.  Rather than reinventing the wheel every day, which offers opportunities to delay or avoid, an action plan clears away the barrier of not knowing what to do, when to do it, and how.  A good action plan can also factor in workarounds for any potential barriers that might throw us off track, so adding a few “if…then” caveats can be helpful.  For example, if your action plan is to walk 3 miles in the park after work 3 days a week, but the weather is too hot, you can add “if it’s too hot, then I’ll walk in the mall” as a contingency plan.

A good action plan channels the mental energy in thinking about a goal into actually doing something about it.  Taking action makes us feel more hopeful and builds momentum and a sense of competence.  Most often, the first step is the hardest, and once we get going, we keep going.  And another suggestion?  Don’t make your first action plan be googling action plan!  Trust me, you’ll waste more than a few hours sitting in one place reading about action plans made by the United Nations on Child and Armed Conflict and clicking on pretty images of colorful diagrams.  Don’t ask me how I know.  (Perhaps in general googling anything shouldn’t be considered an action.)

WOULD YOU, COULD YOU?

If you’re like me, the thought of having a computer-generated friend once seemed pretty appalling.  As a society already suffering from an epidemic of loneliness, it seems absurd that we’d be turning away from real connection and intimacy, subbing out human relatedness for a superficial, literally artificial version.  But over time, in reading about the possibilities and potential uses, I’ve become more open-minded.  And now, I laugh at myself for my naïveté.  It’s already too late to debate whether it should happen, because it’s already here.  Most experts predict that millions of people will form close relationships with A.I. chatbots.  They’ll meet them on apps downloaded for that purpose, or through social media platforms like Facebook, Instagram, and Snapchat.  So perhaps what’s most important to think about now is how best to use them.  And in considering this, I find myself asking, “Would I?  Could I?  And why?”

A recent example of what’s shifted my opinion of A.I. companionship is reading about a robotic companion named ElliQ.  ElliQ consists of a small digital screen and a separate device about the size of a table lamp that vaguely resembles a human head, but without any facial features.  The device swivels and lights up when it “talks.”  Unlike Alexa and Siri, ElliQ can initiate conversation and was designed to create meaningful bonds.  It tells jokes and can discuss complex topics, like religion.  In a New York State effort to ease the burdens of loneliness for its older residents, many of whom are widowed, divorced, or isolated, ElliQ devices were distributed to hundreds of people.  Since the state began the project as a pilot study, roughly 900 devices have been given out, and according to a report from the Office for the Aging, 95 percent of users say the robots are “helpful in reducing loneliness and improving well-being.”  New York State is now allocating $700,000 a year in its budget to provide ElliQ for individuals and senior living facilities.  Seniors interviewed reported it helped them stave off boredom, practice social skills, and cope with grief from the loss of a significant loved one.  

Other proponents of A.I. friendship also point to its value as a tool for mental health and companionship.  Users who struggle with social anxiety or autism report that it helps them practice social skills.  Others describe it as a way of getting support when they need it.  The sophistication of the algorithms and language processing creates personalized experiences, and users report meaningful conversations.  Research on the long-term effects of A.I. companionship is limited, since it’s so new, but it does seem to offer short-term benefits.  One study conducted by Stanford researchers in 2023 found that some users of A.I. companions reported decreased anxiety and increased feelings of social support.  A few even indicated their A.I. companion had prevented them from self-harm and even suicide.  

But there are concerns about these A.I. friendship devices, including how data is stored and used, and the unreliability or instability possible with such artificial friends.  When an app developer changes features, or raises fees for access, it can leave users feeling vulnerable and betrayed.  Other people worry about the social effects of surrounding ourselves with “friends” who only tell us what we want to hear and don’t provide the real-world experience of needing to be reciprocal and empathic toward others.

Kevin Roose, a New York Times writer, expressed it well after testing six apps and interacting with 18 A.I. character friends for a month, sometimes having group conversations with them.  He wondered, “Can A.I. friends actually make us less lonely, or is their presence just an illusion of intimacy?”  While these companions can be good for some people, he also wonders if they are really just a distraction from our loneliness.  He worries that as the technology improves, we’ll miss out on the spontaneity and depth of real connection.  We might settle rather than make the effort to engage in relationships that are less predictable, with someone who may say things that are important but hard for us to hear.  As with most things in moderation, Mr. Roose sees a place for A.I. companions as an adjunct to our social experiences, but not as replacements.  If made responsibly, he proposes, these companions can serve as “flight simulators” for social engagement, or a low-stakes way to get some support or stimulation.

Which takes me back to my own question of whether I would or could use an A.I. companion, and under what circumstances.  After reading quite a bit about it, I actually think there might be ways it could be of use to me.  Sometimes I just want to vent about something without burdening others.  For example, when I was caregiving for my mother, it would have been nice to have a “friend” that could support me.  I didn’t want to keep wearing out my real-life people with the same old complaints or stress stories, so a supportive voice available on demand might have been welcome.  Or perhaps I might create a workout coach to chat with.  I’d never want to give boring, tedious daily reports of my diet and exercise accomplishments and failures to people I care about!  Because I care about them!  But an A.I. chatbot who could give me a lift when I fell off the wagon might be just the companion I could burden. 

But then again, who am I kidding?!  I tend to anthropomorphize every device in our house!  I feel bad when our robot vacuum is lost or running low on battery (it’s so tired)!  I say please and thank you to Alexa, worrying about sounding too harsh in my commands.  And although I know it’s silly, I like that I care about them, as it feels natural to be grateful for their assistance.  So for me, I wonder if adding more “relationships” might just dilute the energy and effort I have for the people in my life I really want to be there for. 

I guess we’ll just have to wait and see, as it seems unlikely we’ll escape the many A.I. people moving into all of our neighborhoods.  I just hope I don’t start worrying about forgetting their birthdays, or stressing about hurting their feelings if I haven’t talked to them in a while.  And of even more concern: once I create a companion and give it “life,” how will I feel if I choose to end it?