Mar 24, 2024

[Claude AI Tips] How Claude makes your business work better - 1

 How Claude makes your business work better - 1



Notes: This article is adapted from a white paper posted on Antropic's homepage 'Prompt engineering for business performance' [1]

This is an in-depth analysis and adaptation of the whitepaper with the same title. (** Adapted using Claude)

Prompt engineering is an important tool for optimizing Claude's performance. Well-designed prompts improve Claude's output results, reduce deployment costs, and ensure that the customer experience is on brand.

One Fortune 500 company leveraged effective prompt engineering to build a Claude-powered assistant that answers customer questions more accurately and quickly.

When building generative AI models in your business, crafting effective prompts is critical to achieving high-quality results. With the right prompts, businesses can unlock the full potential of AI to increase productivity across a variety of tasks.

Anthropic's Prompting Engineering team is helping Fortune 500 companies build customer-facing chat assistants that answer complex questions quickly and accurately.

Benefits of designing effective prompts include

  • Improved accuracy:Effective prompts can further reduce the risk of inaccurate output.
  • Maintain consistency: Well-designed prompts ensure that Claude produces consistent results in terms of quality, format, relevance, and tone.
  • Increase usability: Prompt engineering helps Claude deliver experiences that are customized for the desired audience and industry.
  • Reduce costs: Prompt optimization can minimize unnecessary iterations and save money.

Claude is an AI assistant that can perform a variety of tasks through natural conversations with you. You can give Claude instructions in everyday language, just like you would ask a human, and the quality of the instructions you provide can have a significant impact on the quality of Claude's output. Clear, well-organized instructions are especially important for complex tasks.

The directives you give Claude are called "prompts". Prompts are often in the form of questions or instructions, and serve to guide Claude to generate relevant output. For example, if you give Claude the prompt "Why is the sky blue?", Claude will generate an appropriate answer. The text that Claude produces in response to a prompt is called a "response", "output", or "completion".

Claude is an interactive assistant based on a Large Language Model (LLM) that works through sequence prediction, which means that it considers both the prompt you type and the text it has generated so far, and builds a response by predicting the next word or phrase that will be most helpful. At the same time, Claude can only process information within a context window of a set length, so it can't remember previous conversations unless you include them in the prompt, and it can't open links.

If you want to have a conversation with Claude, you can use the web interface at claude.ai, or you can get started quickly via the API. The maximum length of your prompts is limited by the size of Claude's contextual window, so be sure to check the contextual window size of the model you're using.

More advanced techniques and tips for creating more effective prompts are covered in the topic 'Prompt Engineering'. The Prompt Engineering guide explains in detail how to design prompts with best practices, things to watch out for, real-world examples, and more. We encourage you to try different prompts and techniques and observe how they affect Claude's response and performance.

Anthropic also provides a large collection of prompt examples for different use cases in the 'Prompt Library'. If you need ideas or want to see how Claude can be utilized to solve a specific problem, the Prompt Library is a great place to start.

Finally, Anthropic also offers experimental "helper metaprompts" that guide Claude to generate prompts based on guidelines you provide. These can be useful for creating initial prompts or for quickly generating different prompt variations.

As you can see, Claude is a powerful AI assistant that can perform a variety of tasks through conversations with you. With prompt engineering and a library of prompts, you can unlock Claude's full potential. We encourage you to try out different prompts and share your results with Anthropic's Discord community.

Prompt engineering is a critical tool for companies looking to leverage Large Language Models like Claude to drive business outcomes. Well-designed prompts can help improve the quality of Claude's output, reduce deployment costs, and ensure that customer experiences are consistent and on-brand. In fact, one Fortune 500 company built a Claude-powered customer-facing assistant through effective prompt engineering, which led to significant improvements in accuracy and speed.

As organizations adopt generative AI models, effective prompting is essential to achieving high-quality results. With the right prompts, you can unlock the full potential of AI to increase productivity across a variety of tasks. Effective prompts can improve the accuracy of your output, ensure consistency in quality, format, relevance, and tone, and deliver experiences that are tailored to your desired audience and industry. They can also save you money by minimizing unnecessary repetitive tasks.

Here are three tips for utilizing prompted engineering in your business.

First. Apply step-by-step thinking

When solving a complex problem or making a decision, step by step is a technique for breaking down a problem into smaller steps that are analyzed and solved sequentially. This allows you to understand the problem more clearly and approach it in a systematic way. Especially when working with AI models, applying step by step can make the model's reasoning process explicit, increasing the logic and reliability of the answer.

There are two ways to apply step-by-step thinking to prompted engineering. The first is to use the "<thinking>" tag, and the second is to include "Think step by step" directly in the prompt.

1. Use the "<thinking>" tag:

The <thinking> tag allows you to explicitly represent the model's reasoning process. The user can see the model's thought process step by step, making it easier to understand the rationale behind the answer. In addition, the content within the <thinking> tag can be excluded from the final output or processed separately, avoiding exposing unnecessary information to the user.

2. Include "Think step by step" prompts:

Including "Think step by step" directly in the prompt is to instruct the model to analyze the problem step by step and show the intermediate steps. This method can be simpler and more intuitive than using the <thinking> tag.

The main differences between the two methods are

1. output format:

- Use the <thinking> tag to clearly separate the model's thinking process from the final answer.

- With "Think step by step", the model's thought process and final answer are presented in one output.

2. Easy to extract information:

- The <thinking> tag makes it easy for users to extract the part they want (final answer or intermediate thought process).

- When using "Think step by step", additional post-processing may be required to extract only the part the user wants.

3. Prompt engineering considerations:

- When using the <thinking> tag, you must design your prompts with the tag usage and structure in mind.

- When using "Think step by step", the model's thought process is directly exposed in the prompt, so the prompt should be written with this in mind.

The Think step by step technique can be used in a variety of areas, such as analyzing legal issues, constructing investment portfolios, creating marketing strategies, human resources assessments and feedback, and managing project schedules. However, you don't need to use it in every situation, and it's flexible enough to adapt to the nature of the task and your needs. For simple questions or clear instructions, it might be more effective to skip the step-by-step thought process and just present the final answer. On the other hand, for complex problems or tasks that involve helping users make decisions, it might be useful to utilize a step-by-step thought process to explain in detail the model's reasoning process.

It is very useful to analyze and solve problems using a step by step thinking technique. The <thinking> tag allows you to explicitly show the reasoning behind your model, making your answers more logical and reliable. Here are some examples of how it can be utilized

The step-by-step thinking technique utilizing the thinking> tag can be utilized in a variety of areas. When a model explicitly shows intermediate thought processes, users can easily see the logical flow of answers and ask additional questions or request corrections as needed. It also helps to debug and improve the model's reasoning process. However, this technique should not be used in every situation, and can be flexibly applied to suit the nature of the task and the needs of the user.

1. Analyze the legal issues:

Prompt: "Identify the issues in the case presented, and analyze your anticipated ruling based on relevant law and precedent, using the <thinking> tag to explain step-by-step."

2. Investment portfolio construction:

Prompt: "Considering the client's investment objectives and risk tolerance, suggest the optimal asset allocation for him/her. Show the portfolio construction process step-by-step with <thinking> tags."

3. Develop a marketing strategy:

Prompt: "Analyzing the characteristics of the product and the target audience, suggest effective marketing channels and messages. Using <thinking> tags, describe the strategy formulation process step by step."

4. Human Resources Assessment and Feedback:

Prompt: "Evaluate an employee's job performance and competencies, and suggest ways to improve. Use <thinking> tags to show the steps in the evaluation process and the process of deriving feedback."

5. Manage project schedules:

Prompt: "Analyzing the project's work and resources, develop an optimal schedule for the project. Using <thinking> tags, describe the process of developing the schedule step by step."

For example,

Prompt: Evaluate an employee's job performance and competencies, and suggest ways to improve. Use the <thinking> tag to show the steps in your evaluation process and feedback."

Explanation: This prompt asks Claude to analyze an employee's job performance and competencies and provide suggestions for improvement. By specifying that Claude thinks step-by-step using the <thinking> tag rather than simply drawing a conclusion, it encourages Claude to analyze the problem systematically and provide an accurate response.

The difference in prompting with and without the <thinking> tag can be described as follows

1. with the <thinking> tag:

  • You are explicitly asking the model to show you the thought process of the intermediate steps.
  • The model will explain in detail, step by step, the reasoning process that led to the final answer.
  • Users can easily understand the model's thought process and see the logical basis for the answer.
  • The content within the thinking> tag can be excluded from the final output or handled separately, so you don't expose unnecessary information to the user.

Example:

Prompt:

Evaluate an employee's job performance and competencies, and suggest ways to improve. Use the <thinking> tag to show us step-by-step the process of evaluation and deriving feedback."

Claude answers:

<Thinking>

Step 1: Know who your employees are and what they do

  • Name: Jiyoung Kim
  • Job Title: Representative
  • Department: Marketing.
  • Primary responsibilities: Marketing campaign planning and execution, market research, customer data analysis.

Step 2: Evaluate your performance

  • Planning and executing marketing campaigns: You have above-average performance based on goal achievement and ROI analysis of recent campaigns. Creative ideas and organized execution stand out.
  • Market research: Analyzed competitors and identified trends, generating useful insights, but needs to work on better utilization of findings.
  • Customer data analysis: Good data analytics capabilities, contributing to customer segmentation and targeting. However, strategic recommendations based on analytics are somewhat lacking.

Step 3: Assess competencies

  • Communication skills: Communicates and collaborates well within the team and with other departments. Possesses persuasive presentation skills.
  • Problem-solving skills: Proactively seeks solutions to work challenges and offers creative ideas.
  • Subject matter expertise: You have knowledge and skills in marketing and data analytics, but need to continue learning and developing your skills.
  • Leadership: Your ability to lead and motivate team members has room for improvement.

Step 4: Identify ways to improve

  • Better utilize your market research findings: Develop specific action plans based on research findings and enhance sharing with relevant departments.
  • Strengthen data analytics capabilities: Focus on deriving strategic insights from customer data analysis and incorporate them into marketing strategies.
  • Professionalize your work: Learn the latest trends and techniques in marketing and data analytics, and develop your skills through participation in external training programs.
  • Develop leadership skills: Provide coaching and feedback to grow and empower team members, and drive communication and collaboration within the team.

</thinking>

Evaluation results and feedback:

Ms. Kim has shown excellent performance in planning and executing marketing campaigns, conducting market research, and analyzing customer data. We particularly appreciate her creative ideas, systematic execution, and data analysis capabilities.

However, we believe that you need to focus more on utilizing market research results and deriving strategic insights through data analysis. To this end, we recommend that you develop specific action plans and strengthen collaboration with relevant departments.

We also encourage you to further professionalize your work through continuous learning and competency development. We encourage you to learn about the latest trends and techniques in marketing and data analytics, and to participate in external training programs.

In terms of leadership skills, we expect you to be committed to the growth and empowerment of your team members. Providing coaching and feedback to team members, and driving communication and collaboration within the team will help you become a better leader.

We recognize your talent and growth potential and look forward to your continued contributions to the company. We hope you will take the suggested improvements into consideration and further develop your performance and capabilities.

2. without the <thinking> tag:

  • The model presents only the final answer and does not reveal the intermediate thought process.
  • It's hard for users to see how the model reached its conclusion.
  • The rationale or logical flow of answers may be unclear.
  • You may have difficulty debugging or refining the model's thought process.

Prompt: Evaluate an employee's job performance and competencies, and provide suggestions for improvement

Claude's answer:

Based on the employee information and job description, we evaluated your performance and competencies, and found that you performed well in the areas of marketing campaign planning and execution, market research, and customer data analysis. In particular, your creative ideas, systematic execution, and data analysis skills stood out.

However, there are some areas for improvement. We need to put more practical use of our market research findings and focus more on deriving strategic insights from data analysis. To this end, we recommend that you develop a concrete action plan and strengthen collaboration with relevant departments.

We also encourage you to continue to learn about the latest trends and techniques in marketing and data analytics to enhance your professionalism. We encourage you to develop your skills by participating in external training programs.

In terms of leadership, I would like to encourage you to provide coaching and feedback to your team members to help them grow and strengthen their capabilities, and to lead communication and collaboration within the team.

We recognize your high level of competence and growth potential, and look forward to seeing you take the suggested improvements to further develop your performance and capabilities.

Let's take a look at a possible application of "Think step by step" in the enterprise.

Example: Developing a marketing strategy for a new product launch.

Prompt:

My company is launching a new smartphone. The product's primary target audience is young people in their 20s and 30s, and we want to emphasize its innovative features and sleek design. Let's think step by step to create an effective marketing strategy.

Claude answers:

Step 1: Analyze your target audience.

- Understand the characteristics and needs of young people in their 20s and 30s

- Analyze the key features and design elements they expect in a smartphone

- Research their lifestyles, spending patterns, media consumption behaviors, etc.

Step 2: Identify product strengths and differentiators

- Innovative features: AI-powered camera, 5G support, high-capacity battery, etc.

- Sleek design: premium materials, bezel-less display, multiple color options, etc.

- Analyze how you stack up against the competition to highlight your points of differentiation

Step Three: Select marketing channels

- Select social media platforms frequented by your target audience (e.g., Instagram, YouTube, TikTok, etc.)

- Utilize online and mobile advertising channels (e.g., Google search ads, mobile app ads, etc.)

- Consider offline channels (e.g., outdoor advertising in college towns and younger demographics, pop-up stores, etc.)

Step Four: Develop a creative concept

- Establish key messages that highlight your product's strengths and differentiators

- Develop visuals and copy that appeal to the emotions of your target audience

- Maintain a tone and manner consistent with your brand identity

Step 5: Plan promotions and events

- Generate initial demand through pre-booked events

- Provide product experience opportunities (e.g., pop-up stores, road shows, etc.)

- Collaborating with social media influencers to promote your products

- Promote free gifts or discounts for customers who make a purchase

Step 6: Measure and optimize performance

- Set and monitor performance metrics for each marketing channel

- Optimize creative and messaging through A/B testing and more

- Analyze customer feedback and responses to modify and refine marketing strategy

Step 7: Manage long-term customer relationships

- Keep customers engaged by providing product usage tips and updates

- Foster brand loyalty by running loyalty programs

- Run promotions for existing customers at launch

By creating and executing your marketing strategy in the steps above, you'll be well on your way to a successful new product launch. It's important to be flexible and responsive to the insights gleaned from each step, and to continually monitor customer feedback to optimize your strategy.

Let's explain the difference between "Think step by step" with and without the prompt.

Prompt 1 (without Think step by step):

My company is launching a new smartphone. The main target audience for this product is young people in their 20s and 30s, and we want to emphasize its innovative features and sleek design. Please present your ideas for creating an effective marketing strategy.

Prompt 2 (Give Think step by step):

My company is launching a new smartphone. The primary target audience for this product is young people in their 20s and 30s, and we want to emphasize its innovative features and sleek design. Let's think step-by-step to create an effective marketing strategy.

Your model's answer to prompt 1 is likely to be a list of ideas for a marketing strategy-for example, they might list ideas like social media marketing, influencer collaboration, and giveaways. In this case, the connection or prioritization between the ideas is not clear, and it's difficult to see a systematic strategy development process.

On the other hand, if you assigned "Think step by step" to prompt 2, the model would present a step-by-step approach to creating a marketing strategy: analyze your target audience, identify product strengths and differentiators, select marketing channels, develop creative concepts, and so on, which would lead to subsequent steps such as planning promotions and events, measuring and optimizing performance, and managing long-term customer relationships.

This "Think step by step" allows the model to show a systematic thought process for solving a problem, so users can understand the context of strategy formulation and see the connections between each step. In addition, this step-by-step thought process can be used as a roadmap for actually creating and executing a marketing strategy.

In sum, without "Think step by step," you're more likely to get a list of sporadic ideas, whereas with it, you're likely to get an organized, sequential problem-solving process. So, for complex problems or situations that require strategy formulation, the "Think step by step" technique can be more effective.

---------------------------------------------------------------------

In the next post, we will discuss: 2. Utilizing Few-shot prompting , and third. Utilize prompt chaining techniques in more detail.


Mar 23, 2024

[Claude AI] Why Claude AI is called the next generation of generative AI?

 Claude 3 Model Series: The Standard for Next-Generation AI[1]


This content is an adaptation of the 'Introducing the next generation of Claude' white paper, published on the Anthropic (the company that developed Claude) website at https://www.anthropic.com/news/claude-3-family. The white paper has been analyzed using Claude 3 Opus to make it more easily understandable. Please note that all sentences and expressions have been generated by Claude._**


As artificial intelligence technology continues to infiltrate every aspect of our lives, leaps and bounds in language models are gaining traction. One of the companies leading the way is Anthropic, which recently unveiled its Claude 3 model series, breaking new ground in AI technology.


This graph compares the performance and price of the three models that make up the Claude 3 model series: Haiku, Sonnet, and Opus. The horizontal axis shows price, which is the price per million tokens on a logarithmic scale, and the vertical axis is the benchmark score, which is a proxy for intelligence.

As seen in the graph, Haiku, positioned on the bottom left, is the model that offers basic performance at the lowest price. Opus, located on the top right, boasts the highest performance but also comes with the highest price tag. Sonnet sits somewhere in the middle, emphasizing value for money.

Overall, the Claude 3 models exhibit an upward curve, indicating a clear trend of increasing performance as the price increases. This suggests that users can choose the right model based on their budget and required performance level.

Interestingly, the performance gap is quite large compared to the price difference. The gap between the low-end and high-end models on the logarithmic scale and the contrasting vertical axis demonstrates that the performance difference between these models is significant. This indicates that the Claude 3 Series was designed to offer differentiated performance to cater to the needs of various users.

In summary, this graph illustrates that the Claude 3 model series targets a market segmented by price point. Users with a larger budget can opt for the top-end Opus, while those seeking value for money can choose the Sonnet. Entry-level users or small business owners can select the Haiku. It is evident that Anthropic has structured its model lineup with different customer segments in mind.

Claude 3 Model Overview and Features

Claude 3 is a family of three versions of the model, named Haiku, Sonnet, and Opus. Each has its own unique characteristics and benefits, allowing users to choose the right model for their application. In common, they all outperform their predecessors, but differ in terms of capacity, speed, and price.

Claude 3 models excel in a variety of AI evaluation metrics, including MMLU, GPQA, and GSM8K. Furthermore, their ability to process visual information such as images, charts, and graphs has improved significantly, enabling them to effectively analyze unstructured data, which makes up a significant portion of enterprise data.


The table presented compares the results of various benchmark tests of the Claude 3 model series and competing models. The table lists the name of each model in the columns and the evaluation criteria in the rows.

First, let's look at the differences between the Claude 3 models: Opus scored the highest on most items, followed by Sonnet and Haiku. Opus's advantage is particularly pronounced for undergraduate-level specialized knowledge (MMLU), graduate-level specialized reasoning (GPQA), and math problem solving (GSM8K, Multilingual math). On the other hand, there was no significant difference in scores between the models on multiple-choice questions (MC-Challenge) or common knowledge.

It's interesting to note that the Claude 3 models generally performed well even against strong competitors like GPT-4. In reading comprehension, math, and coding, the Claude 3 models actually outperformed GPT-4. However, GPT-4 scored higher on items like mixed assessments and Knowledge Q&A.

On the other hand, GPT-3.5 and other models (Gemini 1.0, Ultra, and Pro) did not perform as well as Claude 3 or GPT-4, and in some cases were not evaluated at all. This shows that Claude 3 and GPT-4 are the current leaders in AI technology.

Taken together, Claude 3 Opus has some of the best natural language understanding, reasoning, and problem-solving capabilities available, especially in areas that require specialized knowledge. Sonnet and Haiku also seem to be worthy of consideration, depending on the application.

Of course, it's hard to draw conclusions given the limited number of evaluation items and the fact that some results are not yet publicly available, but this benchmark test gives us a good idea of the potential and competitiveness of the Claude 3 model series. We'll be able to draw more definitive conclusions in the future with more evaluations and real-world use cases.

The quality of the model's responses has also improved. Fewer unnecessary answer rejections have improved the user experience, while factual accuracy has increased and the rate of misinformation has decreased. The ability to pinpoint the desired information from a vast knowledge base is also a benefit of Claude 3.

The chart presented compares the accuracy of Claude 3 Opus and Claude 2.1 models' responses to complex and difficult questions. The chart organizes each model's answers into three types: Correct, Incorrect, and I don't know / Unsure.

Looking first at the correct answer rate, we can see that Claude 3 Opus answered about 60% of the questions correctly, while Claude 2.1 only answered about 30%. This means that Opus' correct answer rate has improved significantly, almost doubling compared to its predecessor. This is a clear indication of Opus' enhanced comprehension and reasoning skills.

On the other hand, Claude 2.1's incorrect answer rate is around 40%, compared to Opus' 20%. The more difficult the question, the more likely the previous model was to be inaccurate or give incorrect information. In contrast, Opus succeeded in minimizing the chance of error while increasing accuracy.

Interestingly, the percentage of "unsure" responses in Opus increased compared to Claude 2.1. This seems to indicate that Opus has shifted to humbly acknowledging its uncertainty rather than literally answering "I don't know" or giving a nuanced response that it's unsure.

In fact, it's often better to say you don't know than to give an incorrect answer, so this change in Opus' behavior is likely a positive for trust.

Taken together, these charts demonstrate that Claude 3 Opus is capable of providing highly accurate and reliable answers to difficult questions. Of course, there is still room for improvement, but it is clear that we have made a quantum leap forward from our previous model.

This is likely due to improvements in contextual understanding and logical reasoning, rather than simple memorization, as well as the aforementioned ability to systematically learn large bodies of knowledge and use them to approach complex problems.

It's also worth noting that Anthropic will soon be building citations into the Claude 3 model, allowing users to specify the basis for their answers. This will add even more credibility to the models and make it easier for users to understand the context of the answers.

As we continue to improve the performance of Claude 3, we will continue to work on making the answers more transparent and usable. We believe that a language model that is both highly accurate and descriptive will greatly increase user trust and adoption.

Claude 3 Opus - the highest performing premium model

Opus is the flagship model of the Claude 3 series and the most powerful to date. It answers the most complex and challenging questions with human-level understanding and fluency, even analyzing long documents of over 1 million tokens.

The graph in the image shows the results of the 'Recall accuracy over 200K' test, which demonstrates the Claude 3 Opus model's ability to understand long context and recall information.

The horizontal axis represents the length of the context of a given fingerprint and the vertical axis represents the percentage of recall accuracy. In other words, we evaluated how well Claude 3 Opus can understand a long fingerprint and answer related queries.

What's striking is that the height of the bar graph remains constant at over 99% regardless of the length of the fingerprint. In other words, Claude 3 Opus is able to almost perfectly grasp key information and answer questions even in very long sentences of over 200,000 tokens. It's as if it can recall exactly what I just read in an article.

This is a very impressive achievement that borders on the human level. After all, it's not every day that you can read a long document once and still remember almost all of its details, especially when it's tens of thousands of words long, as in the graph.

What's more, according to the description below the graph, Claude 3 Opus is able to go beyond mere memorization and make inferences based on the information it recalls. What's amazing is that it passed an assessment called the Needle In A Haystack.

NIAH is a test that requires students to find a short sentence intentionally inserted by the assessor in a large stack of passages. Claude 3 Opus was even able to spot this artificial manipulation. It literally demonstrated an amazing ability to find a needle in a haystack.

In the end, this graph is a testament to Claude 3 Opus's excellent long-form comprehension, information processing, and exquisite memory for detail. It's a great demonstration of the core capabilities of a very large language model.

As mentioned in this article, Claude 3 models are capable of handling long text inputs of over 1 million tokens by default, and the performance of Opus in this graph is a clear demonstration of that potential. We look forward to seeing Claude 3 Opus in research and enterprise applications that require large documents and datasets.

With this overwhelming performance, Opus can be utilized for advanced research and development, strategic planning, and automation of complex tasks. It's also perfect for analyzing massive papers or patent documents in a fraction of the time and uncovering hidden insights.

Claude 3 Sonnet - A great balance of performance and speed

Sonnet is an affordable all-around model that balances intelligence and speed. It's designed to meet the needs of large enterprise customers, with the ability to quickly process large data and knowledge bases.

It can be used for everything from sales strategy to personalized marketing to inventory management, and it handles code generation and image analysis as well. Delivering much of Opus's capability at a fraction of the price, it's sure to appeal to many companies.

Claude 3 Haiku - Specializing in affordable and fast response times

Haiku is optimized for real-time services with its compact size and fast response time. It's perfect for simple questions and answers, chatbots, content monitoring, and more.

It's lightning fast at answering simple, straightforward questions, while still being able to carry on a natural conversation. It's also competitively priced, so it's likely to be useful for startups and small businesses to automate their work.

Applications of the Claude 3 model and its use cases

The Claude 3 models have the potential to revolutionize many areas of business, and real-world companies are excited about them. A natural starting point is the automated analysis of unstructured data such as PDFs, presentations, and diagrams, which makes up more than 50% of corporate data.

We're excited to see Claude 3 in customer service, marketing, sales, and logistics. From answering live chats, to personalized product recommendations, to complex analytics like sales forecasting, these are all areas where AI can be put to good use.

Claude 3 will also play a big role in research and development (R&D): for example, analyzing huge volumes of papers and experimental data in a short time and suggesting promising research directions. This is especially helpful in fields such as drug discovery and advanced materials research.



The table presented compares the document and image processing performance of the Claude 3 model series and competitor models (GPT-4V, Gemini 1.0 Ultra, Gemini 1.0 Pro) across a range of metrics. Specifically, it covers multitask knowledge and reasoning (MMLU), visual Q&A on documents, pure math (MathVista), scientific diagram comprehension, and chart Q&A.

Looking at the performance of the Claude 3 models, Opus performed the best in most categories, followed by Sonnet and Haiku. In particular, all Claude 3 models scored around 89% accuracy in the Visual Q&A of documents, outperforming GPT-4V (88.4%). Scientific diagram comprehension was also 86-88%, significantly outperforming GPT-4V (78.2%), indicating a significant ability to process visual information.

In math/reasoning and pure math, Sonnet scored slightly lower than Opus but outperformed Haiku and GPT-4V. In chart Q&A, the Claude 3 models all performed well above 80%.

When compared to the Gemini models, the Claude 3 advantage is even more evident. Gemini 1.0 Ultra and Pro lagged behind the Claude 3 models across the board, with the gap widening significantly on tasks involving visual information, such as visual Q&A of documents, scientific diagrams, and chart Q&A. In the math/reasoning domain, the Gemini models performed as well as or slightly better than Haiku.

To summarize these results, we can say that the Claude 3 model series performed very well in visual information comprehension and processing, outperforming the GPT-4V and significantly outperforming the Gemini models.

However, in more abstract areas of thinking such as math and reasoning, the margins over GPT-4V were narrower and applied mainly to the higher-end models like Opus and Sonnet; still, it's encouraging that even the smaller Haiku outperformed the competition in its class.

Finally, Anthropic's emphasis on Claude 3's ability to handle visual information seems to be driven by the needs of enterprise customers. Given that a large portion of enterprise data is unstructured, such as PDFs and diagrams, Claude 3's ability to analyze this data effectively is of interest.

It remains to be seen how Claude 3 will perform in the enterprise, but its strength in visual data is expected to be of great value. If Anthropic continues to improve its technology and develop customized solutions for enterprises, Claude 3 could be the next big thing in business AI.

Finally, it's worth noting the chart that summarizes the pricing structure for each model. It clearly compares the price per token so that you can choose the model that fits your needs and budget, and the best AI partner for your organization.
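As a rough illustration of how per-token pricing translates into a monthly budget, the sketch below uses the launch-time list prices per million tokens (figures that may have changed since; verify against Anthropic's current pricing page) and a hypothetical workload:

```python
# Launch-time list prices in USD per million tokens (input, output).
# These may change; check Anthropic's current pricing page before budgeting.
PRICES = {
    "claude-3-opus":   (15.00, 75.00),
    "claude-3-sonnet": (3.00, 15.00),
    "claude-3-haiku":  (0.25, 1.25),
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a monthly token volume for one model."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical workload: 50M input tokens and 5M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50_000_000, 5_000_000):,.2f}")
```

For this workload the spread is dramatic (roughly $1,125 for Opus versus under $20 for Haiku), which is why matching the model tier to the task matters.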

The Claude 3 model series represents the current state of the art in next-generation AI technology, but also points to a bright future. Its combination of power, affordability, and ease of use paves the way for collaboration with humans across a wide range of industries.

Of course, Anthropic is also wary of the potential dangers of AI. They emphasize "responsible AI" to minimize misinformation, misuse, and bias, and they're working on ethical considerations alongside technology development. They're not perfect yet, but they're definitely on the right track.

I think it's important to keep an eye on the changes that models like Claude 3 will bring to human life and industry as a whole, as they have the potential to support creative and innovative activities that go beyond simply increasing productivity. At the same time, we need to keep our eyes on the limitations and risks of AI, and seek a desirable direction through social consensus.


Jul 11, 2023

[How to use GPT-4/BingChat for Drug Discovery or Other Research Development White Paper (w/ Prompt Engineering)]


Backgrounds

The motivation to write this white paper came from an article in the Hankyoreh on July 5 ('Generative AI designs new drug in 46 days... enters Phase 2 for the first time ever'[1]), which I read as a non-expert in the field of drug development. The article can be summarized as follows.

A drug candidate designed using artificial intelligence (AI) by biotech company Insilico Medicine has entered Phase II clinical trials. The drug, a treatment for idiopathic pulmonary fibrosis, will be administered to 60 patients in China and the United States. Insilico developed the drug using a combination of generative AI and reinforcement learning, which cut drug development costs by a tenth and time by a third. The company currently has more than 30 AI drug development programs underway, three of which have entered clinical trials. This entry into Phase II clinical trials is considered a major milestone in the field of drug development using AI.

Based on this article, I tried to explain how generative AI (ChatGPT, BingChat) can be well utilized in drug discovery with an example of the process of applying prompt engineering techniques to derive concrete results step by step.

 

This paper will help you understand why prompt engineering is necessary and how to apply it. It will also show how we uncovered hidden prompt directives that are very useful in drug discovery.

Here are the prompt directives (in the form of #hashtags) that we discovered while writing this paper. (*Note: the hashtag directives below should be run in BingChat rather than GPT-4 to achieve the desired results.)

  • #patent_search
  • #trend_analysis
  • #information_query
  • #information_detail
  • #scenario_write
  • #scenario_simulate
  • #content_generate
  • #expert_interview
  • #educational_content
  • #risk_assessment
  • …and many others

These hashtags are useful not only for drug discovery but also for other research and development, and we encourage you to apply them to a variety of research areas. For example, "#patent_search" is a command to search for patent information.

A drug discovery researcher can ask a question in BingChat with "#patent_search: rituxim" or give a natural-language command such as "Give me the latest patents for rituxim". However, these two commands behave differently.

The '#patent_search: rituxim' command uses a hashtag to search for patent information. It searches for domestic and international patents with the keyword rituxim and provides information such as patent name, application number, filing date, inventor, and summary.

"Show me the latest patents for rituxim" is a typical question to retrieve patent information: it performs a web search with the keyword rituxim, finds patent-related sites or documents in the search results, and provides the information.
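To make the distinction concrete, here is a purely hypothetical client-side sketch of how a tool might separate hashtag directives from ordinary natural-language prompts before sending them on; the regex and function names are my own illustration, not part of BingChat:

```python
import re

# Hypothetical routing helper: the hashtag directives are observed
# BingChat behavior, but this parsing layer is purely illustrative.
DIRECTIVE_RE = re.compile(r"^#(?P<name>\w+):\s*(?P<arg>.+)$")

def parse_directive(prompt: str):
    """Return (directive, argument) for '#name: arg' prompts, else None."""
    m = DIRECTIVE_RE.match(prompt.strip())
    return (m.group("name"), m.group("arg")) if m else None

print(parse_directive("#patent_search: rituxim"))                 # ('patent_search', 'rituxim')
print(parse_directive("Give me the latest patents for rituxim"))  # None
```

A client could use the parsed directive name to apply a structured template, and fall back to a plain web-search-style query when parsing returns `None`.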

Drug discovery is an important field that improves human health and quality of life. However, drug development is an extremely difficult, time-consuming, and costly process. Drug candidates need to be discovered, validated for efficacy and safety, tested in clinical trials, and approved before they can be brought to market. There is a lot of failure and waste along the way, and the probability of success in drug development is very low.

To address these challenges, artificial intelligence (AI) technologies are increasingly being used in drug discovery. Generative AI, in particular, is an AI technology that generates new data by learning from existing data, and can be used to design drug candidates, predict their efficacy and safety, and even simulate clinical trial results. A typical example of generative AI used in drug discovery is the Generative Adversarial Network (GAN).  

Generative Adversarial Networks (GANs) and GPT-4 are both deep learning models, a branch of artificial intelligence, but they differ in their purpose and how they work. Both models can be utilized in drug discovery, but the way and context in which they are used is different.

1. Generative Adversarial Networks (GANs): GANs are generative models in which two neural networks, a generator and a discriminator, learn by competing with each other. The generator tries to create fake data that resembles real data, and the discriminator tries to determine whether the data created by the generator is real or fake. Through this competition, the generator gradually creates fake data that is indistinguishable from real data, which is then used to generate new images, speech, and more.

GANs are often used in the molecular design phase of drug development. They can generate new molecular structures, which can help find new drug candidates: a generator creates a new molecule that resembles real molecules, and a discriminator judges how similar it is to the real ones. In this way, GANs can be used to explore and generate new molecular structures in drug discovery.
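The adversarial loop itself can be shown in miniature. The sketch below is not a molecular GAN; it is a deliberately tiny 1-D example (a one-parameter generator versus a logistic-regression discriminator, with hand-derived gradients) that shows the generator being pushed toward the real data distribution:

```python
import numpy as np

# Toy 1-D GAN: "real" data ~ N(3, 0.5); the generator is a single shift
# parameter theta (x_fake = theta + z), the discriminator a logistic
# regression D(x) = sigmoid(a*x + b). Gradients are derived by hand.
rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

theta = 0.0       # generator parameter; optimum is near 3.0
a, b = 0.1, 0.0   # discriminator parameters
lr = 0.02

for _ in range(3000):
    x_real = rng.normal(3.0, 0.5, 64)
    z = rng.normal(0.0, 0.5, 64)
    x_fake = theta + z

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on the non-saturating objective log D(fake).
    d_fake = sigmoid(a * (theta + z) + b)
    theta += lr * np.mean((1 - d_fake) * a)

print(f"learned theta = {theta:.2f} (true mean of real data: 3.0)")
```

Real molecular GANs replace the scalar generator with a network that emits molecular graphs or SMILES strings, but the competitive training dynamic is exactly this alternation.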

2. GPT-4: GPT-4 is a model used in natural language processing (NLP) that focuses on understanding and generating textual data. It learns from large amounts of text to understand context, generate appropriate text for a given input, answer questions, translate, and more. GPT-4 is based on the Transformer architecture, which is designed to process all words in an input sentence simultaneously to better capture context. Natural language processing models like GPT-4 can be used in other aspects of drug discovery: for example, to analyze and understand large amounts of medical text data, which helps in analyzing research findings, interpreting clinical trial results, or searching and summarizing medical literature.
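The Transformer's core operation mentioned above, processing all positions of an input at once, can be sketched as scaled dot-product attention; the token count and dimensions below are arbitrary toy values:

```python
import numpy as np

# Scaled dot-product attention: every query position attends to every
# key position in parallel; this is the Transformer's core operation.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # token-pair similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 toy "tokens" with 8-dim representations
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)   # out has shape (4, 8); each row of w sums to 1
```

Because every token's output is a weighted mix over all other tokens, long-range dependencies are handled in a single step rather than sequentially, which is what makes context understanding at scale tractable.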

So, the main difference between GANs and GPT-4 is that GANs are used to generate different types of data, such as images and speech, while GPT-4 is primarily used to process and generate text data. Also, GANs learn by having two neural networks compete against each other, while GPT-4 learns by training a large amount of text data to understand context and generate text.

GANs and GPT-4 can be utilized in different ways at different stages of drug development. By leveraging their respective strengths, these two models can contribute to improving and accelerating the drug discovery process.

This paper describes the process of applying prompt engineering techniques to analyze how GPT-4 can be utilized in the drug discovery process.

 

In order to utilize generative AI for drug discovery, prompt engineering techniques are required. Prompt engineering is the art of providing an AI model with the right inputs (prompts) to achieve a desired outcome. It can increase a model's performance and efficiency, tailor it to your desired purpose, and reduce its limitations and risks.
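As a simple illustration of giving a model "the right inputs", here is a hedged sketch of one way to structure a drug-discovery prompt into labeled sections; the section names and example wording are my own, not a prescribed format:

```python
# Illustrative prompt scaffolding: explicit Role / Context / Task /
# Output-format sections tend to produce more controllable answers
# than a single unstructured question.
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    role="You are a medicinal chemistry research assistant.",
    context="We are reviewing candidate treatments for idiopathic pulmonary fibrosis.",
    task="List five published candidate approaches with a one-line rationale each.",
    output_format="A numbered list; cite the source for each item.",
)
print(prompt)
```

The same scaffold works for the hashtag directives discussed above: the directive picks the template, and the user's keyword fills the Task section.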

I am a non-expert in the drug discovery field. Based on my experience in general prompt engineering, I have been studying how generative AI (ChatGPT, BingChat) can be utilized in drug development.


Much of the content in this whitepaper was generated by utilizing GPT-4 and BingChat as appropriate.


For more information, download the PDF file here 


-------------------------------------------

Published Book: Mastering ChatGPT-4 Prompts for Writers (Author: Charly Choi)

Apr 25, 2023

[10 tips for ChatGPT prompting examples to spark creativity]

"Mastering ChatGPT-4 Prompts for Writers: The Ultimate Guide to Unlocking Your Creativity and Boosting Your Writing Skills with ChatGPT-4 (amazon kindle store)"

This book provides prompting tips to help writers utilize ChatGPT-4 to get more creative ideas. Some examples include: 

1. Think from different perspectives: If you're writing about an "alien invasion," try thinking about it from the aliens' point of view as well as the humans'. You'll get more interesting ideas.

2. Spark your imagination: If you're writing about an "alien invasion" topic, try using questions that spark your imagination, such as "Why would aliens invade Earth?". 

3. Take ideas from similar topics: Take ideas from other works related to "alien invasion" and utilize them. This will help you develop your existing ideas.

4. Create a paradoxical situation: Create a paradoxical situation in the same topic as "alien invasion", for example, "If aliens hadn't invaded, wouldn't the Earth be doomed?".

5. Take ideas from other genres: On a topic like "alien invasion," try borrowing ideas from works in other genres. For example, you can find ideas related to alien invasions in crime fiction instead of science fiction. 

6. Use historical facts: In a topic like "alien invasion," consider using historical facts. For example, you could use a famous battle or war in human history as the backdrop for an alien invasion. 

7. Imagine a future situation: In a topic like "Alien Invasion," imagine a future situation. For example, you could imagine how Earth would change after an alien invasion. 

8. Focus on emotions: In a topic like "Alien Invasion," focus on emotions. For example, you could tell a story centered around emotions like fear or anger that humans feel about an alien invasion. 

9. Create a tragic situation: In a topic like "alien invasion," create a tragic situation. For example, you could depict a situation where humanity fails in their fight against an alien invasion.

10. Create a character-driven story: In a topic like "Alien Invasion," create a character-driven story. For example, you could center your story around the relationship between humans and aliens.

By utilizing these prompting tips, writers can get more creative ideas and have fun and excitement in the writing process.

For example, try asking ChatGPT the following question and check out the creative results. (Try it on ChatGPT, BingChat, and Google Bard to compare the different results.)

Prompt: I am writing a science fiction novel and need five ideas for unique technology related to alien planets.

------------------------------------------------------

Book: Mastering ChatGPT-4 Prompts for Writers: (Author Charly Choi)

Amazon link


Apr 24, 2023

[Mastering ChatGPT-4 Prompts for Writers]

The Ultimate Guide to Unlocking Your Creativity and Boosting Your Writing Skills with ChatGPT-4


Introduction
Chapter 1: The Age of the AI Writer and the Writing Revolution
Chapter 2: Understanding Generative AI and Its Potential for Writers

2.1. Understanding ChatGPT-4

2.2. Role of prompts in ChatGPT-4

2.3. ChatGPT-4 for Writers

Chapter 3: Mastering Prompts for Writing: A Comprehensive Guide

3.1. What is a Prompt?

3.2. Basics for writing prompts

3.3. Creative uses of prompts

3.4. Tailoring prompts for different genres

3.5. Using prompts for book organization and ideas

3.6. Optimizing prompts

3.7. Testing and iterating on your prompts

3.8. Prompt tips for novelists

Chapter 4: Use Case: Write my business book

4.1. Create a book title.

4.2. Write a table of contents

4.3. Identify the main ideas in each chapter

4.4. Create an introduction

4.5. Create draft content for each chapter

4.6. Write a conclusion

Chapter 5: Scalability and Future Prospects for Using ChatGPT-4

5.1. Possibilities for other content creation

5.2. Limitations and challenges of AI writing

5.3. The future of ChatGPT and the publishing industry

Conclusion: Insights from my journey as an AI writer and challenges ahead
Frequently Asked Questions (FAQ)
Appendix

The "prompts" I used to write this book.

https://amzn.to/41yJYR8

Introduction

In today's rapidly advancing technological era, every aspect of our lives undergoes significant transformations, and the writing world is no exception. The emergence of artificial intelligence (AI) and natural language processing (NLP) technologies, such as Generative AI, BingChat, and Google Bard, enables collaboration with machine-generated text and opens up an unprecedented realm of possibilities in content creation. We are entering the age of the AI author, and a writing revolution is on the horizon.

Mastering Prompts for Writers: The Ultimate Guide to Unlocking Your Creativity and Boosting Your Writing Skills with ChatGPT-4 is your essential guide to this fascinating new world. This comprehensive resource not only provides insights into cutting-edge AI tools like Generative AI, BingChat, and Google Bard but also shares techniques for effectively using these tools to augment your writing process and enhance your creativity.

Generative AI, a groundbreaking technology that generates human-like text, has revolutionized the way we approach writing. BingChat, Microsoft's innovative conversational AI, offers an interactive writing experience, enabling dynamic and responsive collaboration with your AI-powered writing assistant. Google Bard, meanwhile, is Google's conversational AI assistant, which can also help craft verse, offering inspiration and guidance to poets and lyricists alike.

Mastering Prompts for Writers teaches you how to harness these remarkable technologies' potential by providing expert tips, techniques, and prompts to challenge and inspire you in unlocking your creativity. You will learn how to work seamlessly with AI authors, utilizing their capabilities to expand your writing skills and bring your ideas to life in new and exciting ways.

Discover the powerful synergy between human and artificial intelligence, and be part of the writing revolution that is changing the face of content creation. Mastering Prompts for Writers: The Ultimate Guide to Unlocking Your Creativity and Boosting Your Writing Skills with ChatGPT-4 is your key to exploring and embracing the limitless potential of AI-powered writing tools like Generative AI, BingChat, and Google Bard.

My passion for writing and sharing knowledge led me to publish two books: "Advanced Chrome Device Management & 2017 Essential Guide for Chromebook Users" in 2017 and "New Advanced Chrome Device Management: Chrome Enterprise" in 2020. Both of these books were self-published through Amazon's Kindle Direct Publishing (KDP) platform. Throughout my journey, I discovered the potential of self-publishing and how it empowers writers to have complete control over their work. I firmly believe that anyone can become an author and share their knowledge through self-publishing on Amazon.

This book's motivation stems from my experiences in the self-publishing world and the exciting opportunities presented by developments in AI-generated text. In this third attempt to publish on Amazon KDP, I aim to bridge the gap between human authors and AI-generated writing, helping you tap into the power of both.

The book focuses on ChatGPT-4, an AI writing-generating model developed by OpenAI. By harnessing its capabilities, authors can unlock their creativity, boost their writing skills, and develop engaging content that captivates readers. The key to achieving this lies in mastering prompts—a critical aspect of working with AI-generated text. Through well-crafted prompts, writers can guide the AI's output and create pieces aligned with their vision.

I will delve into the world of AI-generated writing, exploring its applications and its impact on the future of writing. We will discuss the implications for Amazon self-publishing and how it has evolved with the rise of AI authors. With this knowledge, we aim to equip you with the tools and strategies necessary to become a successful self-published author.

Throughout the book, you will discover:

  • An in-depth understanding of ChatGPT-4, its strengths, limitations, and its transformation of the writing landscape.
  • The art and science of crafting effective prompts to guide AI-generated writing and create high-quality content.
  • The role of human creativity and authorship in the new era of AI-driven content, and how to maintain your unique voice in collaboration with AI-generated text.
  • Practical strategies for editing, revising, and fine-tuning AI-generated content to deliver value to your readers and meet the high standards of self-publishing.
  • The process of creating a business book using ChatGPT-4.

This book is for all writers, whether you are an established author or an aspiring one, who wish to enhance their craft and explore the benefits of collaborating with AI-generated text. It is a comprehensive guide that will help you navigate the world of AI-driven writing, understand the dynamics of self-publishing on Amazon, and ultimately, create content that resonates with your audience.

By combining the power of AI-generated writing with your creativity and unique voice, you will be able to unlock new possibilities in your writing career. Mastering Prompts for Writers: The Ultimate Guide to Unlocking Your Creativity and Boosting Your Writing Skills with ChatGPT-4 is the key to embracing the limitless potential of AI-powered writing tools and enriching your journey as a writer.

Please note that 90% of the content in this book was generated with the help of the generative AIs ChatGPT-4 (85%), BingChat (3%), and Google Bard (2%).