ComfyUI hits $500M valuation as creators seek more control over AI-generated media
ComfyUI, whose tools give creators more control over AI image, video, and audio generation, just raised $30M.
AI Insider
ComfyUI, the node-based AI creative workflow platform, has closed a $30 million funding round at a $500 million valuation, led by Craft Ventures with participation from Pace Capital, Chemistry, and TruArrow. Founded in 2023 as an open-source response to the limitations of early diffusion models, ComfyUI gives creative professionals precise, step-by-step control over AI-generated image, […]
The associate professors of EECS and chemistry, respectively, are honored for exceptional contributions to teaching, research, and service at MIT.
With every passing year, local AI models get smaller, more efficient, and more comparable in power to their higher-end, cloud-hosted counterparts. You can run many of the same inference jobs on your own hardware, without needing an internet connection or even a particularly powerful GPU. The hard part has been standing up the infrastructure to do it. Applications like ComfyUI and LM Studio offer ways to run models locally, but they’re big third-party apps that require their own setup and maintenance.

Wouldn’t it be great to run local AI models right in the browser? Google Chrome and Microsoft Edge now offer exactly that, by way of an experimental API set. With Chrome and Edge, you can perform a slew of AI-powered tasks, like summarizing a document, translating text between languages, or generating text from a prompt. All of these are accomplished with models downloaded and run locally on demand. In this article I’ll show a simple example of Chrome and Edge’s experimental l
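To give a flavor of how this looks in practice, here is a minimal sketch using the experimental Summarizer API as documented for Chrome. The API is behind flags and its exact shape may differ by browser version; the `type` and `length` options shown are taken from Chrome's documented option set, and the feature-detection guard means the function simply returns `null` anywhere the API isn't exposed.

```javascript
// Hedged sketch of Chrome/Edge's experimental built-in Summarizer API.
// The global `Summarizer` object only exists in browsers that ship the
// feature (behind flags); everywhere else this falls back to null.
async function summarizeLocally(text) {
  // Feature-detect: outside supporting browsers, bail out gracefully.
  if (!('Summarizer' in globalThis)) return null;

  // The model may be absent, downloadable, or ready.
  const availability = await Summarizer.availability();
  if (availability === 'unavailable') return null;

  // create() may trigger a one-time local model download.
  const summarizer = await Summarizer.create({
    type: 'tl;dr',   // style of summary
    length: 'short', // target summary length
  });
  return summarizer.summarize(text);
}

// Usage: logs a summary in a supporting browser, a fallback message elsewhere.
summarizeLocally('Local AI models keep getting smaller and more capable...')
  .then((s) => console.log(s ?? 'Summarizer API not available in this runtime'));
```

Because everything runs on-device, the first call can take a while while the browser fetches the model; after that, inference works offline.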