Business Insider / Andrew Harnik

New York, NY – It was late November, and a snowstorm was moving across Pennsylvania. Andrew Bostrom, a researcher at the University of Pittsburgh, had just finished writing a paper on the foundations of AI and was driving home from his apartment when a police officer pulled him over for speeding.
“It was very intimidating,” Bostrom told Business Insider.
“He pulled me over and told me, ‘I’m going to check your license plate,’” he said.
Bostrom said he struggled to make sense of the encounter; he knew he had been speeding, but the officer’s questions went well beyond a routine traffic stop.
The officer told him that his license plate was registered to a different address, a claim Bostrom later said was incorrect.
The officer then took possession of Bostrom’s car and told him he could go home; the car ended up parked outside the police station.
He said he was very frustrated.
“I had my camera with me, so I couldn’t just walk out of there and walk back to my car,” he said, adding that he felt “extremely threatened.”
He also said the officer told him if he did not get out of his car, he would “put a gun to my head.”
He said the officer then told him to go to the nearest gas station and buy gas.
Bostrom said he took his phone and went to the gas station.
When he returned, Bostrom said, he saw an officer sitting at the counter with his arms outstretched, pointing his gun at him.
“That’s when I knew that was the end of the road,” Bostrom said.
The next day, he was issued a ticket for speeding, which was eventually reduced to a $30 fine, he said in an interview.
Bostrom and Boodrogs are the latest researchers in the field of machine learning to face backlash after their work was leaked online.
Researchers have faced harassment, death threats, and even physical attacks online for their work.
In December, the researcher Andrew Boodrovics was among two people killed when a gunman opened fire at a party in California, an attack that also injured six others.
Bostrom’s work has also been criticized by some AI researchers, who say the data he is using is too noisy and does not provide enough context.
Bostrom has since resigned from his position at the National Bureau of Economic Research, citing “personal reasons,” according to Business Insider’s investigation.
In March, a computer scientist at Harvard University and a colleague resigned after the university discovered they had received death threats.
The Harvard group was not alone in receiving threats.
A few months ago, a hacker group calling itself Lizard Squad targeted two MIT computer scientists.
The group was later unmasked as a team that had hacked into the Gmail accounts of MIT researchers and others.
The MIT group said it was investigating the claims and had “zero tolerance” for such attacks.
“We are actively working to address this problem and we will continue to do so,” the group said in a statement.
MIT and Harvard declined to comment for this article.
But in June, the University at Buffalo released a statement on its website that said, “We recognize that a number of researchers, including Andrew Bostrom, are concerned about their safety, and we have engaged in discussions with the individuals involved.”
Bostrom and Boodrogs say they are taking precautions to make sure their work does not fall into the wrong hands.
“What I’m doing is trying to make sense of data that’s out there that’s so noisy, that doesn’t give enough context to what it’s saying,” Boodrogs said.
“There are so many variables to it that if you’re trying to interpret the data, you need to make that as clear as possible.”
Bostrom and his co-author, Aaron Boodrogs, have already published their findings in the Journal of Machine Learning Research.
But their research has faced controversy.
A number of studies in recent years have found that many of the algorithms driving search engines, financial markets, and other applications that once relied on human judgment do a poor job of spotting and predicting fraud, according to a review published in May by the Center for Strategic and International Studies.
One study from Stanford University, published in June, found that an algorithm used to help computers predict stock prices failed to identify fraud in more than 10% of cases.
In May, the Institute for Advanced Study in Princeton, New Jersey, published a study that found the algorithm used to predict stocks on Wall Street failed to spot and prevent some of the largest stock market bubbles in history.
The Stanford study was one of the first