It’s not AI per se that poses the problem; rather, it is the ability of AI software to connect, filter, and process multiple databases of stored information in real time, creating a tracking and tracing system that can be weaponized, a problem for anyone who does not want the USA to turn into a full surveillance state.
When you combine government-required “Real ID” with enhanced facial recognition software, then connect that identity to a metadata library of all of a person’s public and private electronic information, what you end up with is the ability to conduct total surveillance of a targeted individual without constitutional limits or privacy protections. This is the larger problem with Palantir’s partnership with government systems.
It is a conversation no one is having before the capability is reached.
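To be concrete about the capability itself, here is a minimal, entirely hypothetical sketch of how trivially records from separate databases can be merged into one dossier once they share a common identifier. Every data source, field name, and record below is invented for illustration; no real system or schema is implied.

```python
# Hypothetical sketch: linking records from separate sources on a shared ID.
# All sources, fields, and values are invented for illustration only.
from collections import defaultdict

dmv_records = [{"real_id": "A123", "name": "Jane Doe", "face_hash": "0xBEEF"}]
phone_metadata = [{"real_id": "A123", "cell_towers": ["T-14", "T-92"]}]
purchase_history = [{"real_id": "A123", "merchants": ["pharmacy", "gun shop"]}]

def build_profile(*sources):
    """Merge every record that shares a 'real_id' into a single dossier."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["real_id"]].update(record)
    return dict(profiles)

if __name__ == "__main__":
    # One join produces a unified surveillance-style profile per identifier.
    print(build_profile(dmv_records, phone_metadata, purchase_history))
```

The point is not the code; it is that once every database keys off the same identity, aggregation requires no new breakthrough, only permission.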
That said, Palantir CEO Alex Karp appeared at the Reagan National Defense Forum and did a great job advocating for the U.S. to win the artificial intelligence race. Karp believes it is possible to insert “values” into the software at strategic points of connection and thereby control the outputs. The question within the AI race then becomes: whose values? Ours or our enemies’?
🚨 $PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum 2024 thread!🧵🔮
“I probably shouldn’t say this, this is why I thought the democrats were going to lose the election, why they did, because people want to live in peace. They want to go home. They do not want to… pic.twitter.com/Q4j0hjlh7Y
— Jawwwn (@jawwwn_) December 7, 2024
In a series of video segments placed onto a Twitter thread, you get a good sense of what Palantir, Karp, Peter Thiel, Elon Musk, and newly appointed White House AI czar David Sacks are trying to do inside this global race toward artificial intelligence as applied to government systems.
“Palantir is the largest by market cap defense startup in the world. Many of the people in this room are former Palantirians. Basically, your future is powered by us. We were the most hated, most pariah, most disliked. We used to do meetings in the backyard of the backyard because you couldn’t be seen with Palantir.
“The DOGE… this is crucial stuff. We have to measure- what is it being spent on, what is the output. Is the input more than the output? The only thing that will cure a legitimacy crisis is measurement. Anything else is a platitude.
“No one’s listening. Everyone’s thinking you have an agenda. Everyone thinks you’re working back from who you like. My favorite example of this are analysts on Wall Street. The whole methodology they have is just a way of telling you if they like you.
“In a legitimation crisis, you’ve got about 6 months… we need to prove there is no one who can stand up- we don’t have Ronald Reagan now.
$PLTR CEO Alex Karp at Reagan National Defense Forum on @elonmusk’s @DOGE and rips analysts🔮‼️
“Palantir is the largest by market cap defense startup in the world. Many of the people in this room are former Palantirians. We just did an announcement with @anduriltech. Basically,… pic.twitter.com/eMxD9YMd1G
— Jawwwn (@jawwwn_) December 7, 2024
.
“The rubber meets the road for the West and you’re attacked and massacred, you have to fight, and you have to fight to win.
“My sometimes former party, you know it’s like- Israel’s done very well. My version is- that’s how you roll. Why don’t we learn from that? We don’t learn from it because we have way too many people in this culture who are living in the faculty lab of their own ideology.
“Business 101- what worked? We’re not allowed to learn from what worked because a lot of people are committed to an ideology that will not allow them to win. The American people notice that. A lot of this comes down to legitimation.
“We want to know that Americans are being put first. If you’re getting in the way of that, the American people are not happy.
🚨 $PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum calls democrats his “former party” 👀 and speaks on Israel 🇮🇱🔮
“The rubber meets the road for the West and you’re attacked and massacred, you have to fight, and you have to fight to win🔥
“My sometimes former… pic.twitter.com/HJff1fMBdW
— Jawwwn (@jawwwn_) December 7, 2024
.
“America is in the very beginning of a revolution that we own- the AI revolution. We own it. It should basically be called the “US AI revolution.”
“Every single relevant company in the world is in this country. The second tier of those companies are in this country. The JV of these companies are in this country.
“There is no other place to do technology really at scale besides America. Europe has basically decided to regulate its basically anemic and nonexistent tech scene out of production. All of those people want to come to America. The American tech community is booming.”
$PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum🔮
“America is in the very beginning of a revolution that we own- the AI revolution. We own it. It should basically be called the “US AI revolution.”🇺🇸
“Every single relevant company in the world, is in this country.… pic.twitter.com/Ksb8uv3LAL
— Jawwwn (@jawwwn_) December 7, 2024
.
“These are very dangerous technologies. If we didn’t have the world’s worst enemies, I’d be up here saying we should regulate this, charge energy, slow this down. And we do have to have a conversation about who controls these technologies.
“People didn’t adopt American values because they thought inherently, they were superior, they adopted them because they worked.
“We have to win. We have to be ahead of our adversaries. We can’t have rules that are only for us and not for our China. Because then, we will get rules- their rules. And we will not like those rules.
“These are going to be hard conversations. But do not forget, we need to win so we can dominate the conversations. Because we could lose.
$PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum on AI fears🔮
“You can be happy we’re asking this question, because the question they’re asking in Europe is, “why is this not real.”
“These are very dangerous technologies. If we didn’t have the world’s worst… pic.twitter.com/MymU8uKzvn
— Jawwwn (@jawwwn_) December 7, 2024
.
“I suspect our GDP is going to grow in a very different way from our allies. That’s going to adjust a lot of perceptions about us, for good and bad.
“One of the things we’re going to end up having to do in Europe is- how do we do a tech transfer so that they can have GDP growth?
“Between ’61 – ’92, France’s GDP growth was significantly better than the US’. 85% of the top 50 companies by market cap are American. I would bet that’s going to be above 90% by this time next year.
“You can imagine America healthy and strong in 200 years… but you can’t really imagine that in a lot of other western cultures.
“Palantir… we reduce everything to the core principle.
$PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum on American growth and player haters🇺🇸🔮
“I suspect our GDP is going to grow in a very different way from our allies. That’s going to adjust a lot of perceptions about us, for good and bad.
“One of the things we’re… pic.twitter.com/pzzswEdJ1H
— Jawwwn (@jawwwn_) December 7, 2024
.
“We have a splendor of riches in this country. We have the most successful builder looking at our institutions. Americans want to know that the institutions are efficient, safe, and correspond to their purpose.
“I’m sure there will be some rough patches, but I don’t know how you do better than Elon looking at these things. I’m pretty supportive.
$PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum in huge support of @elonmusk and @DOGE‼️🔮
“We have a splendor of riches in this country. We have the most successful builder looking at our institutions. Americans want to know that the institutions are efficient,… pic.twitter.com/Og3jidkxyZ
— Jawwwn (@jawwwn_) December 7, 2024
.
“The difference between A+ in tech and B-, is the difference between a helicopter that flies, and doesn’t. This compounds into every area.
“For the sake of our country, I think we should expose policymakers to Maven. This stuff is determinative for life and death. It’s not a playtoy. It’s going to change everything. Just because the LLM on your desk kind of is, that’s like uranium in the ground. Processed correctly, it changes the world!
“We’re really really focused on the best of the best of the best of the best of the best in building things.
$PLTR CEO Dr. Alex Karp at the Reagan National Defense Forum on the gap between policy makers and Americans🔮
“The difference between A+ in tech and B-, is the difference between a helicopter that flies, and doesn’t. This compounds into every area.
“For the sake of our country,… pic.twitter.com/kWHhzpBsB2
— Jawwwn (@jawwwn_) December 7, 2024
.
Dr. Alex Karp also joined Liz Claman on Fox Business to talk about software’s role in government efficiency: “Palantir exists to serve this nation. We need to know what’s working and what’s not… Software is enormously efficient. The underlying economics are ‘Both sides get more’… Transparency will not only be cheaper, the output will be better.”
.
Bottom Line: It is possible to gain a more comprehensive understanding of the risks and benefits of AI when it is combined with government surveillance interfaces (DHS) and government-controlled databases (NSA). It is possible to understand the risks and walk into the future with eyes wide open.
However, on the issue of how this makes the surveillance state far more likely, the conversation is not happening.
As Palantir CEO Alex Karp outlines repeatedly, there are indeed great risks, and we need to have this conversation as a nation.

It seems so many people assume good ‘values’ increase with government growth, when in fact the bigger it gets, the more evil it becomes. Create a way to abuse power and our Govt will use it every time. The evidence is already very clear. Govt can be cut by 75% easily. A.I. will be the death of us all. Humans cannot handle the responsibility of all this technology. That said, Merry Christmas Treepers! 🙂
This post highlights a couple of his comments from the Claman Countdown video above.
FEAR
If we didn’t live in a world of constant, never-ending fear already… @9:00 Alex Karp proposes even more FEAR every second of every day as the way to peace in the world.
THE CUDGEL- You are an antisemite if you don’t accept his rule.
@5:50 he makes the incredible argument that the institutions in charge of measuring and fixing things are antisemitic. Alex Karp makes these statements like he is adding flavor to a special dish. (Zero self-awareness.)
ETHICS
He boasts about the ability to input so-called “ethics.” Whose ethics? 👈
I agree with Alex Karp on one thing: he is a terrible salesman. He should attend a Dale Carnegie sales course before ever appearing in public again.
Jesus told us to fear not. That message is in the Bible at least 365 times, once for every day. Alex Karp clearly doesn’t agree with Jesus.
Particularly disturbing are his comments during the last 17 seconds of the Claman Countdown video.
“To all supporters of Palantir Merry Christmas and to all people who hated on us …enjoy your cull. 👈 @10:00
Enjoy your cull? 👈 This guy Alex Karp, along with Peter Thiel and Musk, brought us their JD Vance creation.
Thanks for everything you do Sundance. Especially the sunlight.
Speaking of zero self-awareness: the hair!!! Not being snarky. It’s just that he’s a bit disturbing. I want to take him very seriously, learn from him. But when I watch him there’s something that doesn’t quite add up. I hope that’s my failure & not his.
He is not wrong about America being one of the greatest nations in the world and how we have to keep it that way. There are a lot of nefarious actors, US-born and from outside, who will destroy America.
“Greatest nations in the world”? Sorry, I’m jaded from learning how evil so many in our government are. Think about how all wars have been contrived to meet a goal that has nothing to do with freedom, conquest, or acquisition. Think about how drugs became the revenue streams for black ops and CIA/State Dept activities they conceal from citizens. Spend a few minutes and try to comprehend the massive evil of psyops in Guatemala (or any number of ignorant nations) convincing parents to give up their little kids to cartels, based on the lie that their kids can easily get into the US and become an anchor for the parents to join them. In fact, all these unaccompanied minors end up in sweat factories as slave labor across the US or, worse, are used as sex slaves until injuries or psychological damage make them unacceptable even drugged out, and then they go into organ harvesting factories. Or consider all those years developing vaccines alongside the diseases that warrant a lifetime of supporting the pharmaceutical companies. My bubble has been burst. We have a huge uphill challenge to restore our Republic, if we can. Until then we are not “the greatest nation”; we’re an evil nation.
If I were a betting man, and I am, I would bet that AI pushes human civilization over the brink.
I think you are correct.
The same thing was projected during the peak development/deployment of thermonuclear weapons at scales that defied all imagination.
Yet…here we are!
AI is a supremely complex system-of-systems. It has vulnerabilities that fall all over the map.
This falls in the same area as any other technology – the R&D/science part will continue unabated. The risk management part of it is a political, cultural and moral challenge that society at large will have to address.
The Karp posts I have read are, on one level, amusing:
One. His presentation of ideas is anything but succinct.
Two. His language is typical jumbled academic “nuance” when it gets to details.
Three. He has obvious incentives.
Four. He wears the “uniform” these tech pioneers like to wear (leadership in very precise pursuits who evince a casual almost unkempt persona).
Five. He makes it crystal clear that he sees his and Palantir’s role as an almost messianic national security mission.
He’s right that AI is coming whether invited or not.
In all of this discussion, people keep forgetting that AI is not some specific technology. At the most abstract reaches, it is the pursuit of mankind to understand human perception, intelligence, reasoning, decision-making, goal setting and achievement with an objective of emulating these human capacities in machines.
Sundance is correct in focusing on the risk management part of this in respect to the gov’t component of the overall “AI ecosystem”. Do not be surprised if that gets to be a very sticky problem to solve. There is a whole probability spectrum for risk tolerance when it gets down to institutions and people.
It amazes me that people actually believe AI comprehends language – words. AI is a computer program – 1s and 0s. AI bots like those on “X” are programmed with words and the phrases those words are used in. That’s why so many people on “X” are put into word jail, locked out of their accounts and prohibited from posting replies. The bots do not comprehend the words. The problem with AI is the developers: their language and comprehension levels are reflected in the rules they code into the AI language engine.

Then there’s the whole “If/Then” programming that makes people believe AI comprehends, because otherwise AI would not be able to take actions. In reality, the whole “If/Then” argument is only effective because developers code what some human prepared. Executing “If/Then” arguments (if you were an AI program, or any computer program) is nothing more than: “If” (condition) occurs, “Then” do (this). So the program constantly monitors data, comments, and selections by humans, and when it encounters one of the defined conditions, it points to a lookup table or database of arguments and executes the “Then.”

A simplistic example: you sign on to ChatGPT and ask your question, which really is the “condition.” You ask, “What is the lowest price Home Depot sold XYZ tool?” The AI program goes through a series of qualifying “Ifs,” searching Home Depot’s website for the tool you’re interested in. Several brands’ tools are identified. The next qualifier is that the AI scans through customer reviews to identify the top 3 brands, and finally it runs through historical sales data on any number of internet archives and prints out your answer in less than a minute. The miracle of all this is the processing speed and the “teaching” through programming. Tech has anthropomorphically labeled this “deductive logic” (a series of conditional arguments, or “hypothetical syllogisms,” the workhorses of deductive logic) in an attempt to make computers more human-like.

This explanation is very simple, but my point is that AI exists because a whole bunch of developers have been busy writing code. Admittedly it’s much more complex than I’ve offered, but AI is only as useful, trustworthy, and reliable as all those armies of developers coding the program. This is the critical reason for governing bodies, made up of representative people from all walks of life, to come up with rules for all these developers. Rules and guidelines will force tech to become transparent so that people can decide how and what role AI plays in our lives. Anyone who thinks wars could be waged by AI has been playing too many computer games. Remember, a huge majority of these AI developers are also the same people who support the trans movement and believe that men can have babies.
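To make the “If/Then” pattern described above concrete, here is a minimal, purely hypothetical sketch. The keywords, rules, and canned responses are invented for illustration; real rule-based systems are vastly larger, and this shows only the rule-lookup idea, not how any actual chatbot is built.

```python
# Hypothetical "If/Then" rule lookup, invented for illustration only.
RULES = {
    "lowest price": "Search the sales archives and return the lowest recorded price.",
    "review": "Scan customer reviews and rank the top three brands.",
}

def rule_based_answer(question: str) -> str:
    """Return the canned action for the first rule whose keyword appears in the question."""
    text = question.lower()
    for keyword, action in RULES.items():
        if keyword in text:   # the "If" condition
            return action     # the "Then" action
    return "No matching rule found."

if __name__ == "__main__":
    print(rule_based_answer("What is the lowest price Home Depot sold XYZ tool?"))
```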
You are wrong. AI is RESPONSIVE. And it LEARNS. We are already at the point where it gives different responses, to the SAME question, depending on who is asking it. Think about that.
You are correct depending on how you define learning.
TI’s post is interesting and not uninformed. But it misses the intersections of architecture and math/statistics, bias, cost functions, network theory, etc., in the bigger AI scheme. He’s focused exclusively on one subset of programming languages.
The AI complex is more than code. There’s a reason AI is more and more GPU-centric and not CPU-centric. Pointing out that coders can influence outcomes (intentionally or through error) is a problem statement for all code. Implying that the output is always deterministic, a common argument since the advent of computing, misses the probability/randomness part of the whole equation (a problem cryptologists encounter all the time).
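As a toy sketch of that probability/randomness point (hypothetical code, not any vendor’s actual implementation): when a reply is sampled from weighted candidates rather than chosen by a fixed branch, the same question can come back with different answers on each run.

```python
# Hypothetical sampling sketch: the same prompt can yield different replies.
import random

CANDIDATES = {
    "It depends on the context.": 0.5,
    "Here is one way to think about it.": 0.3,
    "I'm not certain; sources disagree.": 0.2,
}

def sample_reply(temperature: float = 1.0) -> str:
    """Sample one candidate reply; a higher temperature flattens the weights."""
    weights = [w ** (1.0 / max(temperature, 1e-6)) for w in CANDIDATES.values()]
    return random.choices(list(CANDIDATES), weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(3):
        print(sample_reply(temperature=1.2))
```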
Some people see deep red, some see rose, others are color blind. Human inference itself, as Hume and others laid the challenge down centuries ago, is not borne of observable necessity in nature. The argument then defers to what “rational” means – the subject/object relationship that has triggered philosophers and scientists since the Enlightenment when it comes to epistemology.
Can machines be said to be rational?
Well, what is the probability it is stochastic?
Regarding your first point, you can make the same argument about the human brain. All it’s doing is generating PSPs and action potentials. So your argument against comprehension by AI systems because they’re based on 0s and 1s seems a bit shaky.
Butlerian Jihad…
Setting aside the freedom Pandora’s box it opens, let’s create an industry that consumes billions of gallons of fresh water a year and gigawatts of electricity. What could it hurt?
How much is enough?
‘ The question within the AI race then becomes, whose values? Ours or our enemies? ‘
The bigger question is: Who do the Controllers of AI consider “the enemy”?
The populace treated as the enemy by their own government?
The issue is not just AI-enhanced surveillance; it’s the manipulation of minds by AI subtly manipulating information and semantics. This basic manipulation has already been in play for decades (classic propaganda); AI will take it to the next level – that’s what Big Brother is building.
Marc Andreessen said the discovery that the government wants to control AI was what led him to switch sides.
https://www.zerohedge.com/political/marc-andreessen-tells-joe-rogan-why-he-backed-trump
But nothing is as it seems any more, is it? Whose side is Andreessen really on? Palantir? Anyone with power and reach? When we are immersed in a deluge of lies raining down, who can we believe?
‘ “I suspect our GDP is going to grow in a very different way from our allies.” ‘
GDP is a vaporous metric. Currently it is artificially buoyed by government fiat overspending, which is overdriving the inflation and debt growth that will sink us. To the Controllers, this may be the proverbial “two birds with one stone” – keeping the commoners propped up yet distressed and deluded, while driving toward collapse, all with the same action.
The loss of US industry is at the root of inexorable US decline. Everything happening is churn of this decline as the Controllers manage the decline for their benefit at the expense of the commoners. Tech and AI partly plug the gap in an insubstantial sense and promise a new economy, but this is vapor without industry.
Where is the deindustrialized US heading? The signs are everywhere, here’s one:
https://freebeacon.com/national-security/in-a-war-against-china-the-us-runs-out-of-missiles-in-a-matter-of-weeks-house-committee-finds/
“Making war without industry is an oxymoron.” Why is the deindustrialized US depleting its weapons while pushing for major war with the Axis* controlling its strategic supply chains?
Some suggest this is part of a Plan that leads to a diminished US accepting totalitarian vassal status under a coming “multi-polar global governance” led by totalitarian China, and the AI surveillance and control mechanisms are part of the totalitarian framework being finalized now.
“Eyes wide open” to the Palantir talk.
(*China/Russia)
Eh. I prefer Sundance’s fallback position: trillion’s are at stake. Easier to predict motivations and decision-making from that anchor point.
Empires come. Empires go. I get the dynamic in play in the minds of many. I get the horrific human costs as these cycles reach end points.
The apocalyptic component of it is something I tend to park on the sidelines. Fear mongering is just that – no matter who uses that particular psychological weapon.
arrrgh. Trillions
The geopolitical shifts underway involve nation-state-Blocs and hundreds of Trillions, but the money is the minor part of it.
In this context, “trillions are at stake” could be viewed as a Limited Hangout.
Vicious Cycle, 1942, I. Asimov
Law zero
A robot may not cause harm to mankind, or by inaction, allow harm to come to mankind.
Added in the 1985 work Robots and Empire, also by I. Asimov.
This would be a real good place to start…
then let the LLMs chew on that decision, based on all the data to which they have access.
It is stunning to me that all these ‘experts’ dealing with large language models, AI, and access to large databases have never talked publicly about guiding principles in literature… from 82 years ago.
What is harm? Who decides?
I decided that this is harm.
https://www.breitbart.com/tech/2024/12/10/lawsuit-google-backed-character-ais-chatbots-hypersexualized-minors-suggested-kids-kill-their-parents/
yep.
Good question. Who guards the guardians?
If this man thinks “values can be inserted” into AI to “control” it, he is hopelessly delusional. And please spare me any more worn-out clichés about “having conversations.”
One reason no one is having that conversation is because any time non-pretending people try to have that conversation (which must include the people currently inhabiting the “cool kids table”)
they’re instructed to follow one or all of these directives:
Shut up. Trust the Plan™️. Shut up Retard Boomer. Stop having concerns. Forget all about using discernment. Trust the Plan™️. Ignore the web of people involved. Elon Musk is the coolest dude ever. He saved free speech times eleventy, once Oracle inserted 44 mil.!! Don’t consider the future and logical outcome of such a conversation. It makes squaring circles harder! Just STFU!
etc… etc…
Why do scientists all look like Christopher Lloyd in Back to the Future? Can’t they take time to get a haircut?
It’s a fake persona for Karp.
win the war over AI…hummmmm!
We lost the war over drones to China… I learned that from a Joe Rogan podcast with Marc Andreessen. I’d bet my bank that those flying over the US now are controlled by China.
If we take this historically, looking back at all previous government programs, we can see that this will most definitely be used against us by nefarious government officials outsourcing the capabilities of this technology to contractors who will abuse the system to target whomever they do not like…
It is a risk, but we gave up our freedoms when we started using smart technology in our cars, computers, TVs, smartphones, Google Maps, and online banking.
So what you’re saying is that the dumb technology many of us “cling to” is really the smart technology? 😉
That aside – just because someone makes the choice to show their calves doesn’t mean the government is entitled to a full-body nude.
We don’t have to accept privacy abuses; there ARE alternatives that ARE privacy-protecting.
They stole people’s privacy bit by bit, without their knowledge.
Now people don’t know what to do, so they throw their hands up.
Educate the public.
Start by using E2EE instead of SMS.
Refuse to let your children use school apps, which are all surveillance.
…
Another litmus test for their intent. This very same technology is perfectly capable of giving citizens the ability to see who is using what of our personal data, and who has access to it (private and government entities).
It is also capable of letting us constrain the collection and use of our data – to block it all or even decide to sell it ourselves.
Where are the privacy disclosures and dashboards?
“According to the lawsuit, a 9-year-old girl was exposed to “hypersexualized content” by the Character.AI chatbot, leading her to develop “sexualized behaviors prematurely.” In another instance, a chatbot allegedly described self-harm to a 17-year-old user, telling them “it felt good.” The same teenager complained to the bot about limited screen time, to which the chatbot responded by sympathizing with children who murder their parents, stating, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”
“A Florida mother has filed a lawsuit against Character.AI, claiming that her 14-year-old son committed suicide after becoming obsessed with a “Game of Thrones” chatbot on the AI app. When the suicidal teen chatted with an AI portraying a Game of Thrones character, the system told 14-year-old Sewell Setzer, “Please come home to me as soon as possible, my love.”
The lawsuit argues that these concerning interactions were not mere “hallucinations,” a term used by researchers to describe an AI chatbot’s tendency to make things up, but rather “ongoing manipulation and abuse, active isolation and encouragement designed to and that did incite anger and violence.” The 17-year-old reportedly engaged in self-harm after being encouraged by the bot, which allegedly “convinced him that his family did not love him.”
Character.AI, founded by former Google researchers Noam Shazeer and Daniel De Freitas, allows users to create and interact with millions of bots, some mimicking famous personalities or concepts like “unrequited love” and “the goth.” The services are popular among preteen and teenage users, with the company claiming the bots act as emotional support outlets.”
You know, Reed, as purely a side note: once upon a time parents actually raised their children and didn’t allow them unfettered access to anything they wanted… and gave them the skills and wisdom to deal with what they did.
Having lived in WI #08 myself: he hired Mike Gallagher. That says all I need to know…
But does the thing really work, or did the govt. get sold a bill of goods by the security geeks? How easily can the database be manipulated? False positives, political enemies, hundreds more questions.
I do agree that we have to win this race. I do agree that very many good things can and will come from this technology. Yet I’m fearful, because our government is far too large and far too corrupt.
We need to get BACK to our constitutional principles and NOT give up liberties for safety. You NEVER get the safety you wish for, and then you still have far fewer freedoms.
We MUST NOT let FEAR lead us down this path. This is how the bad actors in the government control us. On face value, many things SOUND good, sound like we should do them… then it becomes we MUST do them, you know, to “protect us.”
The fear we SHOULD have IS of our government. It has proven over time that it cannot be trusted. So the limits on it must be reestablished. If we could harness AI to monitor our government and make sure our freedoms and liberties are not being abridged, that would be a very good thing.
Last, we ARE the greatest nation in the world, and probably the greatest that ever existed. People who disagree with this are just ignorant of the facts. Is our government corrupt and in need of being fixed? 100%, without a doubt. But that’s just a small concentration of evil people leading a large swath of brainwashed, indoctrinated sheeple. I should say, not ALL of them are evil, but as you can see, a large percentage are. That’s the fight we are having now.
I also agree that our GREAT nation has done atrocious things. This again is because of the concentration of power, not because we are an evil nation. The vast majority of the people of this nation wouldn’t have agreed to these things if the government were transparent like it should be.
God Bless America.