
Live from the OpenAI Developer Conference: Is GPT-4 Turbo the Strongest Ever?


On the road to AI development, it takes people who are brave enough to climb the ladder.

At the just-concluded developer conference in San Francisco, hundreds of developers from around the globe gathered with the OpenAI team to preview new tools and to network. The AI marketplace is taking a more open approach, welcoming more aspiring developers to join in. The live stream, at just under an hour, delivered a string of important announcements, a sign of how quickly OpenAI is growing.

The goal of the developer conference is to encourage companies to use OpenAI's technology to build AI-based chatbots and autonomous agents that can perform tasks without human intervention. The OpenAI team also hopes to attract more developers to pay for access to OpenAI's models and to build a new AI ecosystem on top of them.

Prior to the conference, Sam Altman said on the X platform, "There's going to be some really great new stuff released."


GPT-4 Turbo released: a stronger version of GPT-4
GPT-4 Turbo's training data is far more current: the model was trained on online data through April of this year.

Compared with the original GPT-4, which only had access to data through September 2021, the Turbo version is much more up to date: it knows not only about the pandemic and economic upheavals the world has experienced over the past few years, but also about the vast majority of world events up to April of this year.

Not only that, but GPT-4 Turbo also offers a 128k context window, which means it can hold the equivalent of over 300 pages of text at a time (it is hard to imagine feeding in that much prompt text at once without something breaking). A rough sanity check of that figure is sketched below.
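As a back-of-the-envelope check of the 300-page figure, here is a minimal sketch; the words-per-token and words-per-page ratios are common rules of thumb, not numbers from the announcement:

```python
# Rough check of the "over 300 pages" claim for a 128k context window.
# Assumptions (rules of thumb, not from the article):
# ~0.75 English words per token, ~300 words per printed page.
context_tokens = 128_000
words = context_tokens * 0.75   # about 96,000 words
pages = words / 300             # about 320 pages
print(f"{words:,.0f} words ~= {pages:.0f} pages")
```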

Compared to the original GPT-4, the Turbo version also cuts prices significantly: input tokens cost $0.01 per 1K, three times cheaper, and output tokens $0.03 per 1K, two times cheaper.
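To make the saving concrete, here is a small sketch comparing the cost of one request under the two price lists; the original GPT-4 prices ($0.03 input / $0.06 output per 1K tokens) are inferred from the 3x/2x claim above rather than quoted in this article:

```python
# Cost of a single request under per-1K-token pricing.
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Return the dollar cost, with prices quoted per 1K tokens."""
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# Example: a 2,000-token prompt with a 500-token reply.
gpt4 = request_cost(2000, 500, in_price=0.03, out_price=0.06)   # $0.090
turbo = request_cost(2000, 500, in_price=0.01, out_price=0.03)  # $0.035
print(f"GPT-4: ${gpt4:.3f}  GPT-4 Turbo: ${turbo:.3f}")
```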


The Turbo version is available for all paying developers to try via gpt-4-1106-preview in the API, and OpenAI plans to release a stable, production-ready model in the coming weeks.
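A minimal sketch of calling the preview model through the Chat Completions API, assuming the OpenAI Python SDK is installed and an OPENAI_API_KEY environment variable is set (the prompt text is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Call the GPT-4 Turbo preview model named above.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize today's announcements in one sentence."},
    ],
)
print(response.choices[0].message.content)
```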

OpenAI has also updated GPT-3.5, launching a Turbo version that supports a 16k context window by default. The new version adds improved instruction following, a JSON mode, and parallel function calling. Its input and output token prices are likewise three times and two times cheaper than the original version, at $0.001 and $0.002 per 1K tokens respectively.
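A sketch of the new JSON mode on the updated GPT-3.5 Turbo; the model identifier gpt-3.5-turbo-1106 is not quoted in this article and is assumed here:

```python
from openai import OpenAI

client = OpenAI()

# JSON mode forces the model to emit syntactically valid JSON.
# Note: the API requires the word "JSON" to appear in the messages.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # assumed model name, see above
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is OpenAI headquartered?"},
    ],
)
print(response.choices[0].message.content)  # e.g. {"city": "San Francisco", ...}
```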

The new version adds multimodal capabilities, including vision, image creation (DALL-E 3), and text-to-speech (TTS). GPT-4 Turbo can accept images as input in the Chat Completions API, enabling use cases such as generating captions, analyzing real-world images in detail, and reading documents that contain figures.
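A sketch of passing an image to the vision-capable Turbo model via Chat Completions; the gpt-4-vision-preview model name and the image URL are assumptions, not details from this article:

```python
from openai import OpenAI

client = OpenAI()

# A user message can mix text parts and image_url parts.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision preview model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},  # hypothetical URL
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```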

OpenAI plans to bring vision support to the main GPT-4 Turbo model. Pricing for vision input depends on the size of the input image; for example, sending a 1080 x 1080 pixel image to the Turbo version costs $0.00765. For image creation, OpenAI offers different format and quality options, starting at $0.04 per generated image.
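For image creation, a minimal sketch using the Images API with DALL-E 3; the size and quality values shown are illustrative options, not pricing tiers quoted here:

```python
from openai import OpenAI

client = OpenAI()

# Generate one image with DALL-E 3; larger sizes and higher quality cost more.
result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a keynote at a developer conference",
    size="1024x1024",      # illustrative format option
    quality="standard",    # "hd" is the higher-quality option
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```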


The new version of ChatGPT, which is open to paid subscribers, has major feature updates including:

Supports multi-format file upload

In the old version, users had to go through features such as "Advanced Data Analysis" to upload a PDF and extract information from it; in the new version, users can directly upload PDFs, data files, and other formats, and more file types may be supported in the future;

One-stop integration of tools

OpenAI's GPT-4 model has achieved one-stop integration of tools in its latest release, saving users the tedious step of switching between modes. Previously, GPT-4 had four separate feature modes: image upload, plugins, code runner, and file upload + GPT-4. In the new version, these features are unified, enabling users to accomplish a variety of tasks through a single dialog window.

It is worth mentioning that GPT-4 not only enhances the original text generation capability but also adds new multimodal capability. This means GPT-4 can not only understand and respond to text input but also process image input, generating intelligible responses to image content. For example, in one demonstration, when given an image containing an anomaly, GPT-4 accurately identified the problem in the picture, showing strong visual comprehension. However, this feature is still at the research stage and is not yet publicly available.

In addition, OpenAI has also collaborated with Jonathan Ive, Apple's former chief design officer, to develop smart glasses that support GPT-4 or later models. Such hardware products will further extend AI's applications, but they also raise the requirements on device chips.

Overall, through the upgrade and integration of GPT-4, OpenAI has given users a more powerful and convenient tool experience, and it is pushing AI technology toward ever higher goals.


Holding the OpenAI Developer Conference is not only inevitable at the current stage of development; it is also a vehicle for evolving large models into AI agents and for building a new ecosystem around the ChatGPT large model.

Even though the online developer conference lasted a mere 45 minutes, its content caused a stir in the AI industry. OpenAI is clearly aiming very high, and its decision to establish itself as a platform independent of existing app stores and distribution channels will not pay off overnight. The next step of commercialization will be a direct challenge to industry giants like Apple, and even to its long-time backer, Microsoft.
