Advanced Chat
Advanced chat completion using DeepSeek and other high-performance models.
POST /4/completions
Required attributes
- model (string): The model to use for completion. Default: deepseek-ai/DeepSeek-R1
- prompt (string): The text to generate a completion for.
Optional attributes
- temperature (number): Controls randomness in the response. Lower is more deterministic. Default: 0.7
- max_tokens (integer): Maximum number of tokens to generate. Default: 256
- top_p (number): Controls diversity via nucleus sampling. Default: 0.1
- frequency_penalty (number): Reduces repetition of token sequences. Default: 0
- presence_penalty (number): Reduces repetition of topics. Default: 0
- stream (boolean): Whether to stream the response. Default: true
- logprobs (boolean): Whether to return log probabilities of the output tokens. Default: false
Request
curl -X POST "https://api.bitmind.ai/oracle/v1/4/completions" \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1",
    "prompt": "What is the XY problem?",
    "temperature": 0.7,
    "max_tokens": 256,
    "top_p": 0.1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stream": true,
    "logprobs": false
  }'
Response
{
  "id": "cmpl-123ABC456DEF",
  "object": "text_completion",
  "created": 1739374705,
  "model": "deepseek-ai/DeepSeek-R1",
  "choices": [
    {
      "text": "The XY problem is a communication issue that occurs when someone asks for help with their attempted solution (Y) rather than their actual problem (X). It happens when a person has a problem X, decides on a solution Y, gets stuck implementing Y, and then asks for help with Y without mentioning the original problem X.\n\nThis is problematic because:\n\n1. The solution Y might not be the best approach to solve problem X\n2. People trying to help waste time addressing Y when a completely different approach might be better\n3. The actual problem X remains hidden, making effective help difficult\n\nFor example, someone might ask, \"How do I check if a file exists using grep?\" when their actual problem is determining if a file exists. The better solution would be to use commands specifically designed for this purpose (like 'test -f filename' or 'stat'), but because they only asked about their attempted grep solution, they might receive suboptimal advice.\n\nTo avoid the XY problem:\n- Explain the ultimate goal first\n- Then describe what you've tried\n- Be open to alternative approaches\n- Provide context about the original problem",
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 1024,
    "total_tokens": 1029
  }
}
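For quick integration testing, a minimal Python client for this endpoint might look like the sketch below. It assumes the third-party requests library, an API token in a BITMIND_API_KEY environment variable (a hypothetical name), and that setting "stream": false returns a single JSON body shaped like the response above; this page only documents the streaming default, so treat the non-streaming behavior as an assumption.

import os

import requests  # third-party: pip install requests

API_URL = "https://api.bitmind.ai/oracle/v1/4/completions"

def complete(prompt: str, max_tokens: int = 256) -> str:
    """Request a completion and return the generated text."""
    headers = {
        # BITMIND_API_KEY is a hypothetical variable name for your token.
        "Authorization": f"Bearer {os.environ['BITMIND_API_KEY']}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "deepseek-ai/DeepSeek-R1",
        "prompt": prompt,
        "temperature": 0.7,
        "max_tokens": max_tokens,
        # Assumption: stream=false yields one JSON body like the sample above.
        "stream": False,
    }
    resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    # The generated text lives in choices[0].text, per the sample response.
    return data["choices"][0]["text"]

if __name__ == "__main__":
    print(complete("What is the XY problem?"))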
POST /4/chat
Required attributes
- model (string): The model to use for chat completion. Default: deepseek-ai/DeepSeek-R1
- messages (array): The list of message objects that make up the conversation so far. Each message is an object with a role (such as "user" or "assistant") and content.
Optional attributes
- temperature (number): Controls randomness in the response. Lower is more deterministic. Default: 0.7
- max_tokens (integer): Maximum number of tokens to generate. Default: 256
- top_p (number): Controls diversity via nucleus sampling. Default: 0.1
- frequency_penalty (number): Reduces repetition of token sequences. Default: 0
- presence_penalty (number): Reduces repetition of topics. Default: 0
- stream (boolean): Whether to stream the response. Default: true
- logprobs (boolean): Whether to return log probabilities of the output tokens. Default: false
Request
curl -X POST "https://api.bitmind.ai/oracle/v1/4/chat" \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
      {"role": "user", "content": "Write a bubble sort in Python and explain how it works."}
    ],
    "temperature": 0.7,
    "max_tokens": 256,
    "top_p": 0.1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stream": true,
    "logprobs": false
  }'
Response
{
  "id": "chatcmpl-123ABC456DEF",
  "object": "chat.completion",
  "created": 1739374705,
  "model": "deepseek-ai/DeepSeek-R1",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Here's a bubble sort implementation in Python with comments explaining how it works:\n\n```python\ndef bubble_sort(arr):\n    # Get the length of the array\n    n = len(arr)\n\n    # Traverse through all array elements\n    for i in range(n):\n        # Flag to optimize if no swaps occur in a pass\n        swapped = False\n\n        # Last i elements are already in place\n        # Compare adjacent elements and swap if needed\n        for j in range(0, n-i-1):\n            # If current element is greater than next element, swap them\n            if arr[j] > arr[j+1]:\n                arr[j], arr[j+1] = arr[j+1], arr[j]\n                swapped = True\n\n        # If no swapping occurred in this pass, array is sorted\n        if not swapped:\n            break\n\n    return arr\n\n# Example usage\nif __name__ == \"__main__\":\n    # Test with a sample array\n    test_array = [64, 34, 25, 12, 22, 11, 90]\n    print(\"Original array:\", test_array)\n\n    # Sort the array\n    sorted_array = bubble_sort(test_array)\n    print(\"Sorted array:\", sorted_array)\n```\n\nExplanation of how bubble sort works:\n\n1. Bubble sort repeatedly steps through the list, compares adjacent elements, and swaps them if they're in the wrong order.\n2. The algorithm gets its name because smaller elements \"bubble\" to the top of the list with each iteration.\n3. The process is repeated for each element in the list until no more swaps are needed, indicating the list is sorted.\n4. The optimization with the `swapped` flag allows the algorithm to exit early if the array becomes sorted before all passes are complete.\n5. Time complexity: O(n²) in worst and average cases, O(n) in best case (when array is already sorted).\n6. Space complexity: O(1) as it only requires a constant amount of additional space."
      },
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 1024,
    "total_tokens": 1066
  },
  "system_fingerprint": null
}
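Because stream defaults to true, most clients will read this endpoint incrementally. The sketch below shows one way to consume the stream from Python with the requests library. It assumes OpenAI-style server-sent events (lines prefixed with "data: ", incremental "delta" payloads, and a final "data: [DONE]" sentinel); this page does not specify the wire format, so the parsing here is illustrative, not authoritative.

import json
import os

import requests  # third-party: pip install requests

API_URL = "https://api.bitmind.ai/oracle/v1/4/chat"

def stream_chat(messages):
    """Stream a chat completion, yielding text fragments as they arrive."""
    headers = {
        # BITMIND_API_KEY is a hypothetical variable name for your token.
        "Authorization": f"Bearer {os.environ['BITMIND_API_KEY']}",
    }
    payload = {
        "model": "deepseek-ai/DeepSeek-R1",
        "messages": messages,
        "max_tokens": 1024,
        "stream": True,
    }
    with requests.post(API_URL, json=payload, headers=headers,
                       stream=True, timeout=120) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines():
            if not raw:
                continue
            line = raw.decode("utf-8")
            # Assumption: OpenAI-style SSE framing ("data: {...}").
            if not line.startswith("data: "):
                continue
            data = line[len("data: "):]
            if data == "[DONE]":
                break
            chunk = json.loads(data)
            # Assumption: streamed chunks carry choices[0].delta.content.
            delta = chunk["choices"][0].get("delta", {})
            if "content" in delta:
                yield delta["content"]

if __name__ == "__main__":
    question = [{"role": "user", "content": "Write a bubble sort in Python."}]
    for fragment in stream_chat(question):
        print(fragment, end="", flush=True)
    print()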
Last updated: April 24, 2025