From Redfin:
The Typical Home Is Taking Nearly 2 Months to Sell. That’s The Slowest Pace in 5 Years.
Homes are selling at their slowest pace since the start of the pandemic, and fewer homes are turning over, as mortgage rates and home prices remain elevated. This is according to Redfin data as of the four weeks ending January 26:
- The typical U.S. home listing that went under contract sat on the market for 54 days before the seller accepted an offer, the longest span since March 2020 and a week longer than this time last year. At this time in 2022, during the pandemic-driven homebuying boom, the typical home was selling in 35 days.
- There were 5.2 months of supply on the market, the most since February 2019 and up from 4.9 months a year earlier. Months of supply is the length of time it would take for the existing supply of homes to be bought up at the market’s current sales pace; a longer span means homes are sitting on the market longer and signals a buyer’s market.
- Pending home sales were down 9.4% year over year, the biggest decline since September 2023.
Sales are slow because it’s very expensive to buy a home, with mortgage rates sitting near 7% and home prices up 4.8% year over year. The median monthly housing payment is $2,753, just shy of April’s record high. Additionally, extreme weather–including snow and frigid cold in the Midwest, South and Northeast and wildfires in Southern California–is keeping would-be buyers at home.
The market may pick up in the coming weeks as mortgage rates fall–at least slightly–from their early January peak, and new listings tick up. Additionally, Redfin agents expect some buyers to step off the sidelines soon as they get tired of waiting for rates and prices to come down.
“Prospective buyers have been cautious because they’ve seen homes sitting on the market and they’ve heard interest rates and prices may drop. When the market isn’t competitive, some buyers think they should wait for costs to go down,” said Jordan Hammond, a Redfin Premier agent in Raleigh, N.C. “Now it’s pretty clear that sellers aren’t slashing asking prices and mortgage rates aren’t plummeting, so mindsets are shifting. People are starting to believe that if they want or need to move, and they can afford to, they should do it.”
Uno!
NEWS ANALYSIS
From Anguish to Aggression: Trump Goes on Offense After Midair Collision
President Trump at moments of national tragedy has always been more comfortable finding fault than providing comfort or expressing empathy.
After TWA Flight 800 crashed in New York in 1996, President Bill Clinton asked the country “to pull together and work together.”
Five years later, when American Airlines Flight 587 fell out of the sky, President George W. Bush predicted that the “resilient and strong and courageous people” of New York would get through the tragedy.
In 2009, after a Colgan Air plane crashed near Buffalo, President Barack Obama said that “tragic events such as these remind us of the fragility of life.”
And then there was President Trump. In the wake of this week’s midair collision near Washington, Mr. Trump was more than happy to jump to conclusions and pull the country apart rather than together. After declaring it to be an “hour of anguish for our nation,” Mr. Trump just five minutes later let anguish give way to aggression as he blamed diversity policies promoted by Mr. Obama and former President Joseph R. Biden Jr. for the crash, which killed 67 people.
“Trump doesn’t lead with empathy,” said Olivia Troye, who served on the White House Covid task force staff before later publicly criticizing the president’s management of the pandemic. “He exploits tragedy for whatever political grievance he’s peddling at the moment, never offering the comfort or stability a president should.”
Damn
Some late thoughts on Deepseek.
1) Their training costs are wildly understated and unrealistic. What wasn’t obvious the other day is becoming far more obvious now. Their $5m was basically the power/rental cost for GPU hours, and didn’t include any of the human labor to get there. Similar-sized models would probably come in at $10-15m calculated the same way, so sure, a bit more efficient, but not anywhere near an order of magnitude. Deepseek’s headcount is in the hundreds: they have hundreds of researchers cited in their papers, and they likely have hundreds more people doing data work.
2) Their training costs were for the tiny V3 model, not the R1 model everyone is comparing against from a performance perspective. Again, an odd sleight of hand. V3 is based on “distilled” data from R1, so again, lots of synthetic data, and a far smaller model. Even using 10,000 gimped H800s, this is serious hardware. Oh, BTW, now they have another 10,000 A100s they admitted to. And still, no comment on the hardware used to train R1, only V3. By the way, this is NORTH OF $500 million in hardware, not including datacenter, networking, power, building, labor, etc. So maybe close to a billion in total investment? (Back-of-envelope sketch after this list.)
3) It’s going to turn out Deepseek is using a huge datacenter in Singapore. Again, this is only starting to bleed out. Looking at Nvidia’s sales data, a huge portion of sales have been going into Singapore. This is simply Chinese companies operating offshore. It’s going to turn out that Deepseek has well over 10k modern GPUs that were used for training. Some estimates have the number north of 50,000, which is not a stretch.
4) There are two interesting innovations. The first is a huge reliance on synthetic data for AI model training, which likely came from OpenAI. The second is something called multi-head latent attention (MLA) which, overly simplified, is just a more efficient way of managing the matrix multiplication associated with “attention heads”. Everyone sees this, it was published; literally everyone is now adding this “innovation” into their AI models, and a toy sketch of the idea follows this list. By the way, this MLA technique was widely shared publicly by Deepseek more than 6 months ago, for example: https://towardsai.net/p/artificial-intelligence/a-visual-walkthrough-of-deepseeks-multi-head-latent-attention-mla-%EF%B8%8F
5) Inference-time compute is still absolutely massive. I am running the R1 model locally and it’s an absolute PIG. I ran a quantized (compressed, dumbed down) model in approximately 450gb of RAM/VRAM (the memory math is sketched after this list). I just purchased a whole new set of RAM for my AI box, upgrading from 384gb to 768gb, and then layering on another 120gb of VRAM for a total of just shy of 900gb. In modern datacenter GPU terms, this would require an 8x H200 GPU server, which currently runs just under half a million a box. My rig gets me just about 1 token a second, brutally slow. A single 8x H200 server can probably serve only 3-4 users simultaneously. Inference still requires massive compute. If you read this correctly, yes, I did spend $700 on more RAM just to run the full-size Deepseek R1 locally. My local AI machines (including my 2x Supermicro cluster) probably cost me $30k or something now. The cluster requires a 240v 50a circuit to run, BTW. Four 2kW 240v power supplies; they pull damn near 8kW when inferencing.
6) Deepseek R1 is not monumentally better than OpenAI o1. It hallucinates like crazy, and it’s completely biased (to what extent, nobody will ever really know). It also has nowhere near the multilingual capability. I suspect that Deepseek simply doesn’t have the headcount/manpower to manage the data. It’s arguable that if you adjusted the proportion of multilingual data in OpenAI models, you could see similar “performance” improvements in English.
7) The thought that fewer GPUs will be required is complete nonsense. The argument that China is beating the US is complete nonsense. What we should now realize is that it’s incredibly easy to be a fast follower. Innovation absolutely stands on the shoulders of giants here. Without Google Deepmind/Transformers, Huggingface, OpenAI, and Meta’s LLAMA, Deepseek would have nothing. Literally 90-95% of what Deepseek is doing is based on what is “standard” right now.
8) Do not, do not, do not fall for Nvidia’s purported performance improvements of new hardware. They routinely compare very different metrics from generation to generation to show huge performance gains, when in reality these are relatively minor (toy illustration after the list). An 8-year-old P40 GPU is still pretty damn good. Put it side by side with a modern PCI-card GPU and the new card is maybe 50%-100% faster. Eight years worth of improvements, at best twice as fast; this is not a big number.
9) Deepseek has been well known in the AI and open source communities for a while now. Deepseek released their Coder model like 14-18 months ago, and tons of people played with it at the time (tens of thousands, maybe more). So congratulations to the market for finally acknowledging them and the billion they spent to get there. Disregard the nonsensical origin story being concocted that this was a side project of 5 guys at a hedge fund. This is a huge, modern technology company with billions in funding and likely some very shady violations of trade law.
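On point 2, a back-of-envelope for the hardware number. The unit prices here are my rough assumptions, not reported figures:

```python
# Hypothetical unit prices; actual street prices vary and aren't public.
h800s, h800_price = 10_000, 30_000   # ~$30k per H800 (assumed)
a100s, a100_price = 10_000, 15_000   # ~$15k per A100 (assumed)
gpu_total = h800s * h800_price + a100s * a100_price
print(f"GPUs alone: ~${gpu_total / 1e6:,.0f}M")  # ~$450M before networking, DC, power, labor
```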
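On point 4, a toy PyTorch sketch of the latent-attention idea: cache one small latent per token and re-expand it into K/V at attention time, instead of caching full per-head K/V. This is my simplified reading, not DeepSeek’s exact architecture (theirs adds RoPE handling and other details):

```python
import torch
import torch.nn as nn

class ToyLatentAttention(nn.Module):
    """Multi-head attention with a low-rank latent KV cache (simplified MLA)."""
    def __init__(self, d_model=512, n_heads=8, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress each token -> latent
        self.k_up = nn.Linear(d_latent, d_model)     # latent -> keys
        self.v_up = nn.Linear(d_latent, d_model)     # latent -> values
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, D = x.shape
        latent = self.kv_down(x)                     # (B, T, d_latent)
        if latent_cache is not None:                 # extend cache with new tokens
            latent = torch.cat([latent_cache, latent], dim=1)
        S = latent.shape[1]
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        # (causal mask omitted to keep the sketch short)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, D)
        return self.out(y), latent                   # cache the latent, not K/V

# Cache per token: d_latent floats instead of 2 * d_model (K and V) --
# 64 vs. 1024 here, a 16x smaller KV cache for the same attention interface.
```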
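On point 5, the memory math behind the 450gb figure is simple: weights dominate, and bits per weight set the floor. A quick sketch using R1’s ~671B parameter count:

```python
# Rough floor to hold a 671B-parameter model in memory at various quants.
# Weights only; the KV cache and activations come on top of this.
params = 671e9
for name, bits in [("FP16", 16), ("INT8", 8), ("~5-bit", 5), ("4-bit", 4)]:
    print(f"{name:>6}: ~{params * bits / 8 / 1e9:,.0f} GB")
# ~5-bit lands around 420 GB, which is why ~450gb of RAM/VRAM was enough.
```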
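And on point 8, a toy illustration of the metric game, with purely hypothetical numbers: quote the new card’s FP8-with-sparsity throughput against the old card’s dense FP16 figure and a modest gain looks enormous.

```python
# All numbers hypothetical, purely to show how the comparison is framed.
old_fp16_tflops = 100.0
new_fp16_tflops = 130.0                      # the honest apples-to-apples gain
new_fp8_sparse = new_fp16_tflops * 2 * 2     # FP8 doubles, 2:4 sparsity doubles again
print(f"honest: {new_fp16_tflops / old_fp16_tflops:.1f}x, "
      f"marketed: {new_fp8_sparse / old_fp16_tflops:.1f}x")
```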
Should I download the app on my iPhone or use it via Perplexity?
grim says:
January 31, 2025 at 6:52 am
Some late thoughts on Deepseek.
Grim, nice analysis and nice pivot from the daily onslaught of the mentally ill muppets.
Tariffs coming for Canada tomorrow.
Take a knee, Trudeau, and spare the people of the north.
https://www.cbc.ca/news/politics/trump-tariffs-goal-unclear-1.7444985
Mexico is going to be a tough nut to crack to stop the flow of fentanyl…will trump shut the border completely?
“Now it’s pretty clear that sellers aren’t slashing asking prices and mortgage rates aren’t plummeting, so mindsets are shifting. People are starting to believe that if they want or need to move, and they can afford to, they should do it.”
Buy now or be priced out forever.
Mexico is going to be a tough nut to crack to stop the flow of fentanyl…will trump shut the border completely?
165,000 pills were seized in Denver in one day. Multiply that by every corner of the U.S. China has been killing us softly and it took some common sense to realize it.
Is it gonna make eggs great again?
Juice Box says:
January 31, 2025 at 7:25 am
Tariffs coming for Canada tomorrow.
I am of course completely right again. Bigly winning as Trumptards would say.
Thanks Grim for your detailed write-up on Temu AI.
You get what you pay for, in this case GIGO (garbage in, garbage out).
The bottom line, though: the moment the first LLM was released, it became a commodity, as synthetic/distilled training is a lot cheaper.
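For anyone curious what distilled training means mechanically, here’s a toy PyTorch sketch of generic knowledge distillation (not DeepSeek’s actual pipeline): a small student is trained to match a bigger teacher’s output distribution on synthetic inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(64, 16)                         # "synthetic" inputs
    with torch.no_grad():
        target = F.log_softmax(teacher(x), dim=-1)  # teacher's soft labels
    pred = F.log_softmax(student(x), dim=-1)
    loss = F.kl_div(pred, target, log_target=True, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final KL to teacher: {loss.item():.4f}")    # the student mimics, for cheap
```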
I also heard a few more things that made Deepseek cheaper:
1) they coded in a lower level language and could optimize better, although not as interoperable.
2) they used Samsung chips alongside Nvidia, proving that with careful architecture, Nvidia is not the only option.
3) and we don’t need an LLM that will do everything. We could have separate LLMs for medicine, the oil industry, or robotics. We don’t need one for all.
– what I learned from a guy in the AI industry.
My personal experience with ai is dated. I had a paper published in 1995 on how to recognize patterns using neural networks.
Grim is going for an exaflop of AI processing power in his basement……
Of course Elon has the best rig: 100,000 GPUs all connected with a special 800Gb/s switch. SN5600 switches cost like $70,000 each and only have 64 ports. Elon’s rig has a lot of ports and switches. Grok is a Funny AI too……
Save the Robots….
Japan bending the knee to Trump.
Supports a new 800-mile-long LNG pipeline in Alaska.
https://www.reuters.com/markets/commodities/japan-weighs-alaska-lng-pipeline-pledge-win-trumps-favour-2025-01-31/
1) they coded in a lower level language and could optimize better, although not as interoperable.
We’re seeing this everywhere now, though. Spend an hour going through the llama.cpp repo on GitHub. There are still years worth of minor performance tweaks to be made.
https://github.com/ggerganov/llama.cpp/pulls – 300-ish open pull requests, all focused on low-level performance improvements and new functionality. This is an open source project, written in C, for exactly the same reason. Data scientists are sloppy, and code in interpreted languages like Python is slow as shit.
Moving this much data requires completely rethinking the software and hardware paradigm, to the extent that a supercomputer just two or three years old is completely useless for modern transformer-based AI. Useless, garbage. I don’t care if it cost half a billion dollars, it’s useless. It’s the whole reason we’re talking GPUs at all.
Bulgarian developer Georgi Gerganov (the guy who started llama.cpp) should get way more credit than even Deepseek. What he did with GGUF was totally foundational to the explosion in open source AI we’re seeing, and totally foundational to what Deepseek is doing.
How this dude isn’t getting paid $5m a year from OpenAI or Google is beyond me.
Nvidia’s control of the supply side, especially consumer-grade high-VRAM cards, has a huge part of the open source community gearing up to replicate CUDA on AMD hardware.
Literally, because there are tons of used, cheap, AMD GPUs that you can get on eBay.
These guys will get there, they’ll open up new architectures, just because the prices of 4090s are stupid high, 5090s sold out in 2 seconds, and you can still find AMD 32gb HBM2 cards on eBay for $400.
Literally, Nvidia is risking losing market dominance entirely because it’s not making cheaper GPUs available to hobbyists. Ironic that the broad market thinks Deepseek will bring down OpenAI and Nvidia. In reality, a bunch of cheapo hobby developers, pissed off at Nvidia’s gaming cards being hard to get and too expensive, are going to take them down.
I see anyone trying to be the first to do anything with AI as facing a losing proposition.
Once it’s out, it will be reverse engineered or distilled. Unless of course it is totally closed source, like OpenAI – which, even so, had competitors launch within months.
The reasoning steps, where Deepseek navigates using phrases instead of words as tokens, are giving much better results. This may require a big refactoring of American algorithms.
Let’s give credit where it is due. Deepseek beat us. Even if it meant shoving other leaders to the side in an ugly way, like Trump did at a global meet.
Intel are idiots by the way too.
They could easily be winning this race, but their CPU hardware just doesn’t have enough memory bandwidth to compete with GPUs.
This is why Mac pros have gotten so popular. Modern AI is a game of memory bandwidth first, processing power second.
This is why the last gen supercomputers are worthless. They have huge processing power, and shit memory bandwidth. Completely worthless for AI.
It’s why Jensen and others are calling Digits and Mac Minis supercomputers. They have bandwidth that far exceeds even server memory bandwidth.
My dual Xeon rigs pull 200gb/sec. More modern ones pull 400gb/sec.
A 5-year-old Nvidia 3090 gaming graphics card pulls 930gb/sec.
The top Mac Studios can pull 800gb/sec.
This is why 5-year-old gaming cards can hold a candle to $20,000 A100 GPUs. We’re talking 1tb per second vs. 2tb per second. Memory bandwidth is the limiter by FAR. Nvidia’s 5090, released yesterday, is still only 1.8tb/sec, only twice as fast as a dirt cheap 3090 when it comes to AI.
Intel could have been a superstar right now. Redesign CPUs for a huge number of parallel memory channels, pull >1tb/sec using DIMMs. Jesus, we’d be running even more enormous models.
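The back-of-envelope that makes this obvious: every generated token has to stream the model’s weights through memory once, so bandwidth divided by model size caps tokens per second. A sketch, assuming a dense ~40GB quantized model:

```python
# Upper bound on decode speed: tokens/sec <= bandwidth / bytes read per token.
model_gb = 40  # assumed: roughly a 70B model at ~4-bit
rigs = [("dual Xeon", 200), ("modern server", 400), ("Mac Studio", 800),
        ("RTX 3090", 930), ("RTX 5090", 1800)]   # GB/sec, figures from above
for name, gbps in rigs:
    print(f"{name:>13}: <= {gbps / model_gb:5.1f} tokens/sec")
```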
Gary – go fuck yourself you little cretin.
The reasoning steps, where Deepseek navigates using phrases instead of words as tokens, are giving much better results. This may require a big refactoring of American algorithms.
Um, still tokens. Nothing different.
Recognize that these “thinking” steps that o1 and r1 use are the equivalent of piggybacking 2, 3, 4 prompts together sequentially.
Prompt 1: Come up with a plan to do whatever the user asked
Prompt 2: Refine the plan you just generated
Prompt 3: Act on the plan, generate an answer.
Prompt 4: Review the answer
It’s all still tokens, it’s all still basically the exact same architectures. Recognize that you can make any LLM act in a nearly identical way by doing this.
Folks are starting to realize that a core issue of the LLM architecture is that prompt-to-response goes through the entire model in one pass to yield an answer. There is no thinking or reasoning here, it’s one pass. This huge new innovation is just allowing more passes through the model. However, given model sizes, there will be limits to this. How many recursive cycles are necessary? 3? 6? 289? This scales miserably, because each pass through the model increases the input token count, adding more information and instructions into the next prompt for processing.
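The whole trick fits in a few lines against any chat-completion backend. A sketch: `ask` is a placeholder for whatever client you use (OpenAI, a llama.cpp server, etc.), and the toy lambda at the bottom just lets it run without an API key.

```python
def pseudo_reason(question, ask):
    """Fake 'reasoning' by chaining prompts; ask(history) -> str is any LLM call."""
    history = [{"role": "user", "content": f"Come up with a plan to answer: {question}"}]
    for followup in ("Refine the plan you just generated.",
                     "Act on the plan and generate an answer.",
                     "Review the answer and fix any mistakes."):
        history.append({"role": "assistant", "content": ask(history)})
        history.append({"role": "user", "content": followup})
    return ask(history)  # note how the context (and token count) grows each pass

# Toy stand-in so the sketch runs as-is:
print(pseudo_reason("Why is the sky blue?", lambda h: f"[reply to {len(h)} messages]"))
```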
Grim, thanks for the very well written report.
These guys are clearly not a hedge fund. It is definitely a cover. It is most likely funded by multiple sources and was well planned in advance. Their hires are top notch and they are fine engineers.
They did a lot of engineering enhancements that many will follow. AI2, NVIDIA, Cerebras, MSFT, and DellTech all released workflows using R1. Even llama.cpp used R1 to speed up part of its code and credited it on GitHub.
Their tech reports, open source (open weights, mostly) availability, and their academic and business impact should not be underestimated. Last week so many coding AI programs added it; I used it myself and it performed better than others. Copilot/GPT is a joke compared to R1. Overall, inference will rule the business, and competition is fierce in that land. The latest numbers: NVDA NIM at 3,800 tokens/s, Cerebras at 1,500 tokens/s, these using the full 670B model, I believe. Impressive throughput given they deployed the full model.
In my expert domain, US exceptionalism is even bigger. But it will also decay, as new talent is non-existent and China will develop its semi/litho capabilities on its own in the upcoming decade.
>> But it will also decay as the new talent is non-existent
You forgot immigration.
America is/was great because of immigration.
I am running R1 in my basement. Do not for one second think that any of this is difficult to implement, it’s ridiculously easy. Unsloth had R1 model weights quantized and up on Huggingface within hours of them being released; you could run the GGUFs using llama.cpp on pretty much anything with enough memory to hold them.
By the way, tons of companies are saying R1 but deploying V3, especially the V3 fine-tunes of Qwen or Meta’s Llama. These are totally different.
https://huggingface.co/unsloth/DeepSeek-R1-GGUF
Follow a tutorial and you can have Deepseek R1 running on your laptop this afternoon if you wanted to.
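A minimal sketch of that afternoon project, using llama-cpp-python and the Hugging Face hub. The repo is the real one linked above; the filename below is illustrative, so check the repo’s file list for the actual quant you want (and mind the sizes):

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Illustrative filename; pick a real GGUF from the repo's file listing.
path = hf_hub_download(repo_id="unsloth/DeepSeek-R1-GGUF",
                       filename="DeepSeek-R1-Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=2048)  # CPU-only works, just slowly
out = llm("Tell me a joke about memory bandwidth.", max_tokens=128)
print(out["choices"][0]["text"])
```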
It’s a fun model. But I’m sure Meta’s llama4 is going to blow it out of the water, and then someone will blow them out of the water. Rinse, repeat.
I will even go so far as to say illegal immigration is a boon to America.
It maintains a working class that cannot move up easily and makes sure the guys who flip burgers continue to flip burgers and do the work for the higher classes.
Robots are expensive, illegal labor is cheaper.
Enterprise AI (where the real money is) will be far less demanding than AI chat that needs to quickly serve up responses to millions of concurrent users.
There will be lots of CPUs running inference. This is what AMD and Intel are building for now. Intel has MCR DIMMs and other tech on the way geared towards inference workloads.
>> But it will also decay as the new talent is non-existent
>You forgot immigration.
I know, but even the young talent doesn’t want to do hard-core engineering, hard-core science. Talent is moving to AI/CS. The pay is orders of magnitude higher there.
Offshore labor is still far less expensive than AI.
The Philippines is less than half the cost of GPT-4o Realtime, and India is about a fifth.
OpenAI prices need to come down by at least an order of magnitude to be cost effective against offshore labor.
Market returns from Tuesday forward support grim’s analysis, including weakness in NVDA. The other issue is the strength of the hyperscalers’ moats. Grim is suggesting that those moats will persist, because DeepSeek does not crack the need to spend. That is why the broad selloff reversed. Power requirements are also maintained, which is likewise supported by valuations.
Not even talking about fancy models like OpenAI’s “Operator”, agentic models that can look at a screen and click a mouse.
They are far too slow and far too expensive. We need that capability to be 10x more performant at 1/10th the cost of the currently inexpensive GPT-4o.
We’re a LOOONG way off people, a LONG way off.
At this rate, the ROI to replace a lawyer is far better than to replace a call center agent.
RentL0rd says:
January 31, 2025 at 9:05 am
“illegal immigration is a boon to america”
Have you mentioned that to Laken Riley’s and Jocelyn Nungaray’s families?
re: “We’re a LOOONG way off people, a LONG way off.”
Same for AGI……
Sam Altman is, as Elon says, “Swindly Sam.”
Power requirements. Lol. $30 a day is what my rig costs if I let it grind non-stop.
8 kW × 24 hours × $0.15/kWh ≈ $30
I could charge my Tesla 3 times in that same timeframe, with that much power.
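The math, for anyone checking (the ~75 kWh pack size is my assumption):

```python
def daily_cost(kw, hours=24, usd_per_kwh=0.15):
    return kw * hours * usd_per_kwh

print(daily_cost(8))   # 28.8 -> call it $30/day
print(8 * 24 / 75)     # ~2.6 charges of an assumed 75 kWh Tesla pack
```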
Let’s hope we don’t have a Tesla in every driveway, and an AI machine in every house. Our poor grid can’t deliver.
Fire up the coal plants China. It’s not going to be AI that kills people, it’s going to be the pollution.
imho power requirements will come down. The quantized small models can run on iPhones and such. Also, agentic/operator submodels will be specialized, with small memory requirements as well as small power requirements. The power cost is not in the compute but in the memory movement. HBM will help.
I’d say AI will be brought to the edge, and enterprise will remain on hyperscalers, but Apple still has a chance with its better architecture, with good enough bandwidth to run o1/r1-like models on the iPhone offline. Huawei may beat them to it, though.
Small man 9:19,
No, but the parents of the Las Vegas massacre victims and of every school shooting victim know who the bigger threat is. Including Natalie Rupnow’s friends.
For sure.
But right now, Deepseek R1 is not any faster or more performant than Llama 405b running at similar model sizes/quants.
Anyone thinking R1 is running high-end AI on 1/10th of the hardware is misguided. It needs exactly the same hardware.
The Big Myth: How American Business Taught Us to Loathe Government and Love the Free Market (Oreskes & Conway, 2023), by Naomi Oreskes and Erik Conway, is a vital resource for those trying to navigate a world where the government is demonized by many and corporations receive the rights of citizens from our courts. The blame, the authors contend, lies with the seemingly wrong-headed ideology of economic freedom, which seeks to prevent governmental efforts to regulate corporate behaviors at odds with the wellbeing of society as a whole. This ideology also underlies efforts to overturn established public policies that have been addressing these excesses for decades, if not longer. The authors previously collaborated on the bestselling Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (Oreskes & Conway, 2010), a treatise on how corporations conspired to subvert proven science so they could keep selling their products despite evidence of their harmful nature. As the authors state in their acknowledgements, Merchants of Doubt is in essence a “what they did” while The Big Myth is a “why they did it.”
We are decades from high-quality AI on edge devices, unless you are talking about tiny models doing transcription, summarization, grammar suggestions, and generating tiny emoji graphics.
Apple’s reliance on edge AI could be its downfall.
The infrastructure required for one single question to an AI (“Tell me a joke about a rabbi, a priest, and a quantitative analyst walking into a bar”) is exactly the same as for a thousand people asking a thousand questions. Occasional use doesn’t scale down locally.
Small man 9:19,
The same can be said for the many killed in Tesla car fires. Should we deport Elon too? Remember the famous Ford Pinto case; Tesla wasn’t punished for these fires and the lives lost.
Rentlord,
Where did you get a hold of our super secret plan?
Who leaked it? Someone from the Walton clan, the weak Koch brother, some soft-hearted soul?
RentL0rd says:
January 31, 2025 at 9:05 am
I will even go so far as to say illegal immigration is a boon to America.
It maintains a working class that cannot move up easily and makes sure the guys who flip burgers continue to flip burgers and do the work for the higher classes.
Robots are expensive, illegal labor is cheaper
re: “killed in Tesla car fires”
A 2024 Chevrolet Corvette Stingray runs a quarter mile in 11.2 seconds.
A 2010 Bugatti Veyron runs a quarter mile in 9.9 seconds.
A Tesla Model 3 can do a quarter mile in 10.9 seconds.
Why would you let your kid drive a supercar?
Even Star Trek did not put the AI on a tricorder.
I am Landru. You have intruded. Pull out its plug, Mr. Spock!
https://www.youtube.com/watch?v=51Hwf5uK37I
For a Tesla quarter mile comparison vs. The Fast and the Furious:
In the Fast and the Furious, the RX-7 ran a quarter mile in 14.6 seconds.
Brian’s Eclipse could run a quarter mile in 16 seconds.
Vince’s Maxima could run a quarter mile in 14.5 seconds.
Dom’s 1970 Dodge Charger R/T could run a quarter mile in 9 seconds.
Seriously, don’t let your kids drive a Tesla or other electric unless you can lock it in “Chill Mode”, which gives a slower, more gradual increase in speed, limits the top speed to 85mph, and stretches 0-60 mph acceleration from 3.1 seconds to 7 seconds. That is still faster than most cars on the road.
If you pull into a high school you will be surprised to see the number of Teslas with a “Be patient. Driver in training” sticker.
What used to be a hand me down Honda is now a brand new Tesla.
BMW held the crown for the most underused turn signals before Tesla took over.
I don’t and will not own a Tesla, so I have a real question. Is there really no mechanical way to signal left or right?
Tesla’s were always ugly pieces of shit.
My son wants a Talaria, which is a 50 mph dirt bike, a cheap motocross bike a bit better than Temu quality, and it’s $4,000… They are not street legal, yet I see kids riding them everywhere. They are not electric bikes either: those legal electric bikes, with pedals and a chain and sprocket, are limited to a 750-watt motor and a 20 mph top speed, and are legal in NJ. The Talaria dirt bike has a 2000-watt motor and takes off like a bat out of hell.
I have been saying NO for months now. My wife caved and ordered a cheaper version, another Alibaba knockoff. I made her cancel the order.
My son will be asking me every day until he gets his driver’s permit in a year from now.
I remember being his age, some kids got one of those 50 cc dirt bikes. Every single one of them crashed and broke an arm or worse back then.
“Oil and gas companies in the United States are bracing for the possibility that President Trump will thrust their businesses into disarray and will drive up prices at the pump by imposing 25 percent tariffs on goods from Canada and Mexico.
The United States is the world’s largest oil producer, but the country’s refineries are designed to turn a mix of different types of oil into fuels like gasoline and diesel. Roughly 60 percent of the oil that the United States imports comes from Canada, and about 7 percent comes from Mexico. Many refineries are set up to use those imports and cannot easily switch to oil from other places.”
Crashing a bike is the rite of passage for learning a bike. I did too. Luckily insurance wasn’t a thing growing up, and you could sort things out (if you lived) with the other guy.
re: “cannot easily switch to oil from other places” Nah, we had lots of refineries and they were all closed……
Phillips 66 in L.A. is to be closed this year…too many ugly flames coming out of it, and it smells. Cali only has 31 million gasoline-powered cars… Everyone should go electric instead and put solar on their roofs to charge their cars!! Go Green everyone!! Save the planet….
Looks like they want to create a mini-Bell Works down the street from Netflix.
https://www.app.com/story/money/business/main-street/whats-going-there/2025/01/30/bell-works-commvault-building-fort-monmouth-metroburb/78027924007/?trk=feed-detail_main-feed-card_feed-article-content#
Chi – I will only go there if they sell $20 hamburgers and $17 tequila drinks to wash down the bill.
https://www.mabelatbell.com/
Solar takes time to pay off; will it pay off before the house burns in a wildfire? That’s the gamble.
Kids 19 and under have no business having access to that kind of acceleration. I did, and it was a dumb mistake my father made. I won’t do the same with my kid.
Some kid crashed into my father going 100 mph about 25 years ago. Father was in an F-150, saved his life. Kid died on impact. When we went to the lot to fill out some paperwork for the mangled truck, we saw all the most recent wrecks. The guy at the lot said every single one was a kid 17 or 18. Trust me, I watch them all drive out of the lot on a daily basis. They all drive like lunatics (and that’s on the 25 mph road). Just imagine what they are doing on the highways.
where the price of eggs at?
where the price of eggs at?
those darn price gougers
11:41 My power bill was $6 last month. I’ve got 23 solar panels on my roof. They were there when we moved in.
Wow, this White House Press Secretary is cute! I didn’t realize she was so pretty… definitely easy on the eyes.
I pulled up next to a Tesla Plaid once. 1000 Horsepower. Cannot imagine.
Neighbor just got a Lightning F150 and took me for a ride. It was quiet and fast. Build quality seems pretty nice. Plus, if there is ever a power outage here, he can flip the switch on his charger and it’ll power his house.
1:19 Yep. Another Alabama Chi O clone with a cross around her neck.
How did I miss the drone news? They were real after all? Top secret drone program?
“Criticism is escalating over what the Federal Aviation Administration knew all along about those mysterious drones that triggered high anxiety in communities across New Jersey in November and December.
Monmouth County Sheriff Shaun Golden expressed disappointment after the White House revealed on Tuesday, Jan. 28 that many of the drones flown in large numbers over the state were authorized by the FAA.
The White House claimed they were for research and various other reasons.
“They should have told the American people,” Sheriff Golden said. “It’s not fair. And certainly they could have quelled it. Somebody from the FAA certainly had to see all the national reporting that was going on, and could have made the phone call. They didn’t.”
According to Golden, this lack of communication meant that numerous law enforcement agencies were kept in the dark about the drones, and it led to an unnecessary deployment of resources.
The Ocean County Sheriff’s department launched its own drone surveillance.
The National Aerospace Research and Technology Park is situated next to the FAA’s Tech Center in Egg Harbor Township.
The president of the park said he can’t speak specifically to the recent drone activity but says current research at the park involves new approaches to how radar is used.
“The next phase of aviation will involve a degree of autonomous flight,” NARTP president Howard Kyle said. “Radar provides the sort of coverage you need to direct aircraft that might be flying autonomously.”
https://www.nbcphiladelphia.com/news/local/new-jersey-drones-faa-answers/4092737/
Teslas are great getaway cars, I imagine. Except you need to drive away without crashing after a robbery.
https://www.tapinto.net/towns/princeton/sections/police-and-fire/articles/princeton-police-pursue-shoplifting-suspects-two-apprehended-others-still-at-large
I thought Sur-Ron was the hot ticket in electric motocross bikes. Those things are insane, 2000, 3000, 4000 watt motors.
They are faster than the fastest gas dirt bikes ever were. Absolutely not toys.
Check out the Ultra Bee.
I want one, and am afraid to buy it.