-
Google's New AI Text-to-Video Tool Is Fun to Look At. But What Next?
Story by Lisa Lacy • 38m
Google has teased an AI-based video generation tool, but it's not clear when — or if — anyone outside the search giant will be able to kick the tires. It's certainly fun to look at, though.
On Wednesday, Google's research arm released a video highlighting this new text-to-video model, which is called Lumiere.
In a LinkedIn post, team leader Inbar Mosseri said the tool "generates coherent, high-quality videos using simple text prompts" that New Atlas says run up to five seconds. Sample inputs include, "A fluffy baby sloth with an orange knitted hat trying to figure out a laptop" and "An escaped panda eating popcorn in the park."
In the year or so that generative AI has been the hottest technology going, much of the attention has been focused on tools like ChatGPT that produce text answers to prompts, or those like Dall-E that create still images. Video creation from text prompts is arguably the next frontier, so if Lumiere really can "demonstrate state-of-the-art text-to-video generation results" as Google says, we may already be evolving beyond the "grotesque abominations" of the AI-generated images of 2023.
As the video illustrates, Lumiere's capabilities include text-to-video and image-to-video generation, as well as stylized generation — that is, using an image to create videos in a similar style. Other tricks include the ability to fill in any missing visuals within a video clip.
That includes the ability to animate famous paintings, like Van Gogh's Starry Night ("A timelapse oil painting of a starry night with clouds moving") or Da Vinci's Mona Lisa ("A woman looking tired and yawning"). While the Starry Night example works almost flawlessly, Mona Lisa looks far more like she's laughing than yawning.
And while many of the animals — such as "a muskox grazing on beautiful wildflowers" and "a happy elephant wearing a birthday hat walking under the sea" — look realistic, there's something off about some of the dogs. Both a toy poodle riding a skateboard and a golden retriever puppy running in the park are close to passing as real, but their faces — and perhaps their eyes specifically — betray the fact that they're CGI.
Nevertheless, the video editing tools hold a lot of promise. Using a source video and prompts like "made of colorful toy bricks" or "made of flowers," users can purportedly change the style of the subject completely. And with inputs like "wearing a bathrobe," "wearing a party hat" and "wearing rain boots" to add said items to an image of, say, a baby chick, Lumiere may very well make fiddling with videos more accessible to those of us who didn't major in graphic design.
Though the assets shared so far certainly make Lumiere seem like it's user-friendly, the description of how it works isn't. (Google didn't respond to a request for additional comment.)
A project page < https://lumiere-video.github.io/ > describes Lumiere as "a space-time diffusion model," which sounds like something Doc Brown was working on in Back to the Future. Google Research said this means the model learns to generate a video by processing it at multiple space-time scales, which helps create videos that "portray realistic, diverse and coherent motion."
According to Google, this is superior to existing models, which "synthesize distant keyframes followed by temporal super-resolution."
Jason Alan Snyder, global chief technology officer at ad agency Momentum Worldwide, explained it this way: "It's like the difference between watching a puppet show and experiencing a ballet at Lincoln Center."
That's because Lumiere "doesn't just focus on snapshots, it crafts smooth, flowing motion for every frame," he added.
In other words, if you think about the traditional method of making a movie, you'd have to build key scenes and fill in the gaps later.
"Lumiere is different. It sees the whole movie in its mind, understanding how characters move, objects interact and everything changes over time," Snyder said. "It's like drawing the entire flip book simultaneously, ensuring every page flows perfectly."
So this "space-time thinking" helps Lumiere create videos that feel real, which, he added, means no more jumpy transitions or robotic movements. (Except maybe for puppy eyes.)
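Google hasn't published Lumiere's code, but the contrast it draws — synthesizing distant keyframes and filling the gaps with temporal super-resolution versus generating every frame in one pass — can be sketched with a toy one-dimensional "video" in which each frame is just a number tracking an object's position. Everything below (the function names, the quadratic motion, linear interpolation standing in for temporal super-resolution) is an illustrative assumption, not Google's actual method.

```python
def true_motion(t):
    # Ground-truth nonlinear motion: the object's position at frame t.
    return (t / 10.0) ** 2

def keyframe_then_upsample(n_frames, stride=5):
    """Older pipeline: synthesize distant keyframes, then fill the gaps
    with temporal super-resolution (modeled here as linear interpolation)."""
    key_ts = list(range(0, n_frames, stride))
    if key_ts[-1] != n_frames - 1:
        key_ts.append(n_frames - 1)
    keys = {t: true_motion(t) for t in key_ts}
    frames = []
    for t in range(n_frames):
        if t in keys:
            frames.append(keys[t])
        else:
            lo = max(k for k in key_ts if k < t)   # nearest keyframe before t
            hi = min(k for k in key_ts if k > t)   # nearest keyframe after t
            w = (t - lo) / (hi - lo)
            frames.append((1 - w) * keys[lo] + w * keys[hi])
    return frames

def joint_space_time(n_frames):
    """Lumiere-style idea: produce every frame in a single pass, so motion
    is modeled directly rather than reconstructed between keyframes."""
    return [true_motion(t) for t in range(n_frames)]

def max_error(frames):
    # Worst deviation from the true motion across all frames.
    return max(abs(f - true_motion(t)) for t, f in enumerate(frames))
```

Because the motion is nonlinear, the interpolated in-between frames drift from the true trajectory — the toy analogue of the jumpy transitions Snyder describes — while the joint pass tracks it exactly.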
Time will tell.
In the meantime, as fans of Beauty and the Beast will know, Lumiere is French for "light."
Editors' note: CNET is using an AI engine to help create some stories. For more, see this post. < https://www.cnet.com/ai-policy/#ftag=MSF491fea7 >
-
Man cleared in a 1996 Brooklyn killing said for decades he knew who did it. Prosecutors now agree
By Associated Press New York State
PUBLISHED 9:36 PM ET Jan. 18, 2024
NEW YORK (AP) — A man who served 14 years in prison for a deadly 1990s shooting was exonerated Thursday after prosecutors said they now believe the killer was an acquaintance he has implicated for decades.
“I lost 14 years of my life for a crime that I didn’t commit,” Steven Ruffin told a Brooklyn judge after sighing with emotion.
Although Ruffin was paroled in 2010 and has since built a career in sanitation in Georgia, he said that getting his manslaughter conviction dismissed and his name cleared “will help me move on.”
“If you know you're innocent, don’t give up on your case — keep on fighting, because justice will prevail,” Ruffin, 45, said outside court. “That’s all I’ve wanted for 30 years: somebody to listen and really hear what I’m saying and look into the things I was telling them."
Prosecutors said they were exploring whether to charge the man they now believe shot 16-year-old James Deligny on a Brooklyn street during a February 1996 confrontation over some stolen earrings. Brooklyn District Attorney Eric Gonzalez said after court that charges, if any, wouldn't come immediately.
“You have to be able to convict someone beyond a reasonable doubt, and we have to make sure that that evidence is sufficient to do so,” said Gonzalez, who wasn't DA when Ruffin was tried. “You have a lot of factors working against us procedurally, but also factually — unfortunately, this is 30 years ago.”
Ruffin's conviction is the latest of more than three dozen that Brooklyn prosecutors have disavowed after reinvestigations over the last decade.
Over a dozen, including Ruffin's, were connected to retired Detective Louis Scarcella. He was lauded in the 1980s and ‘90s for his case-closing prowess, but defendants have accused him of coercing confessions, engineering dubious witness identifications and other troubling tactics. He has denied any wrongdoing.
Prosecutors said in their report on the Ruffin case that they “did not discover any misconduct by Scarcella" in the matter. A message seeking comment was sent to his attorney.
Prosecutors said the police investigation — and their office's own at the time — “were wholly inadequate” and tunnel-visioned, failing to look into the person they now believe was the gunman.
The mistaken-identity shooting happened as Ruffin and others were looking for a robber who had just snatched earrings from Ruffin’s sister. In fact, Deligny wasn't the robber, authorities say.
Tipsters led police to Ruffin, then a 17-year-old high school student, and the victim's sister identified him in a lineup that a court later deemed flawed. Scarcella wasn't involved in the lineup, but he and another detective questioned Ruffin.
The teen told them, twice, that he saw but wasn't involved in Deligny's shooting, according to police records quoted in prosecutors' report.
Then Scarcella brought the teen's estranged father — a police officer himself — to the precinct. The father later testified that he told his son to “tell the truth,” but Ruffin said his father leaned on him to confess.
And he did confess, saying he fired because he thought Deligny was about to pull something out of his jacket. Ruffin told the detectives they could retrieve the gun from his sister's boyfriend, and they did, prosecutors' report said.
Ruffin quickly recanted to his father, who didn't tell the detectives his son had taken back his confession, according to prosecutors' report. The teen went on to testify at his trial that he didn't shoot Deligny but saw and knew the killer — his sister's boyfriend, the one who'd given police the gun, broken up into parts and stuffed into potatoes.
Jurors at Ruffin's trial heard from the boyfriend, but only about his relationships with the defendant, his sister and others in the case. When the jury was out of the room, the boyfriend invoked his Fifth Amendment right against self-incrimination and declined to answer other questions, including where he'd been on the night of the shooting.
Prosecutors didn't release the boyfriend's name Thursday, and the names of lawyers who have represented him weren't immediately available. He told prosecutors during their recent reinvestigation that he had nothing to do with the shooting and didn't give detectives the gun. He also said he never confessed to anyone, though prosecutors say Ruffin's stepfather, sister and late mother all have said he made admissions to them.
Asked Thursday about the boyfriend, Ruffin's lawyers noted that the prospect of any prosecution now is uncertain.
“We only wish that in 1996, Detective Scarcella and others had performed the investigation they should have and been able to get this right the first time," attorney Garrett Ordower said, noting that Deligny's family may now never have the finality of a conviction in his death.
As for Ruffin, he's focused on his future, including promotion opportunities at his job in Atlanta. His now-voided conviction, he said, “never defined me.”
“This never really spoke of the person I was or the man I was going to become,” he said. “So this, to me, is a great closure of a chapter of my life, but my life is still going up.”
-
An online profile named Myles Daye posted the following:
Prompt: drawing, thick lines, white background, only color is red from paint dripping, featuring Punisher from Marvel with a city as part of his jacket.
Starting to feel bad for artists 'cause these came out amazing.

A computer returned the following.
My reply
I am an artist; don't feel bad. Star Trek's holodeck rests on the same idea as this computer-generated interpretation: certain computers today have enough memory and speed to take literal statements and, using their stored data and algorithms, turn those statements into graphical images. Enjoy it. Anyone who wants to draw still can; nothing has halted anyone's ability to draw straight from their imagination. But those who are not artists now have the computer to make the art they want, and if the computer's result isn't close enough, they can go to artists and have them make detailed changes. For example, if you want the Punisher's cityscape jacket to show a particular skyline with certain buildings in certain places, an artist can adjust it for you. Don't feel bad. Enjoy the modern tools and their affordability. Remember that the computer isn't an artist and isn't imagining anything, and if you want specifics the computer is having trouble reaching, human artists are out there with the skills to adjust the base image.
someone then replied
Please reiterate this to all the AI haters who want to be offended on your behalf. They swear it's an affront to all artists who use "natural" mediums. I'm pretty sure inkers and pencilers felt this way once digital showed up.
My reply
In the US, many workers in manufacturing and mining towns have been waiting for jobs for forty or fifty years. They have been joined by inkers and painters, and in twenty years they will probably be joined by truckers and taxi drivers. You're correct: people online spew a wild, violent negativity, unwilling to understand a person who supports what they oppose, to be patient with the fragility of their own lives, or to set aside their fear. And some people simply like to spread negativity as loudly as possible, especially online, while they are very secure offline. But the negative vibe about computer-made art sits in a long line of labor-group complaints, energized by an ever-growing populace of people who are told everything is for the collective better while they know they themselves are not too big to fail.

I admit I am lucky. I have never been hungry. I have never been told my artistic desires are unworthy or have no value. And I have a loving home where, even when things go against me or don't go conveniently, people offline comfort my fears and worries. Most people in the US don't have that, and thus the vibe. Sad. And the solution, sadly, is not online discourse; it is CEOs and elected officials managing industries better. Why didn't the US government, as digital art arose, pay for artists in college or working at firms to have free digital drawing pads and software, from sea to shining sea, in every state? Why didn't it require firms to pay to make sure their entire labor force, including retired employees, had tablets and software? Move the labor force into the new industry and the labor force will not feel as threatened. And if the job pool is gone, which is usual, the government needs to pay to redirect people and reach out to them. Firms don't have to care as much, but a truly caring government should.