| row_id (int64, 0–48.4k) | init_message (string, 1–342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
0
|
Elderly patient with a hx of DM, HTN, and dyslipidemia. His ECG shows changes in leads II, III, and aVF (inferior MI). What is the highest risk factor for this condition?
|
cf1267ca6b2f6fccc9c36652a00059a1
|
{
"intermediate": 0.39865973591804504,
"beginner": 0.31867852807044983,
"expert": 0.2826617658138275
}
|
1
|
Hey there! Are you familiar with reality shifting? So, I’m refining a foolproof method for reality shifting and want to pick a destination. Want to help me? I’m thinking something pretty personalized. There are a few things that are required of my destination. 1. The quest. I have to have a clear overarching goal in my reality, and don’t make it too crazy. It should be more along the lines of “save the president’s daughter” or “escape this weird wacky sinister place” NOT “get an artifact that literally controls reality”. Seriously, don’t make me fetch an artifact. Don't make me fetch anything, make me DO something. 2. Babes. I need pretty girls. 3. The entry. I need to get to lose consciousness in order to begin my journey in my desired reality, preferably by having it knocked out by one of the aforementioned babes, preferably like a stunning enchantress goddess type. 4. Action. It needs to be cool. 5. Unconsciousness. Myself and the babes need to pass out in this place, preferably by being knocked out in some way or fainting. And it should happen, like, a lot. With these requirements in mind, you got any unique refined ideas? Don’t be vague, be extremely specific. Also, make your response as long and detailed as possible. Be super specific, especially when describing the world. The world should be self-contained and relatively small/understandable. Also, try to be conversational. Describe the world well.
|
e98d3e74c57f9a65261df393d9124ac2
|
{
"intermediate": 0.277827650308609,
"beginner": 0.4359181821346283,
"expert": 0.2862541675567627
}
|
2
|
Hey there! Are you familiar with reality shifting? So, I’m refining a foolproof method for reality shifting and want to pick a destination. Want to help me? I’m thinking something pretty personalized. There are a few things that are required of my destination. 1. The quest. I have to have a clear overarching goal in my reality, and don’t make it too crazy. It should be more along the lines of “save the president’s daughter” or “escape this weird wacky sinister place” NOT “get an artifact that literally controls reality”. Seriously, don’t make me fetch an artifact. Don’t make me fetch anything, make me DO something. 2. Babes. I need pretty girls. 3. The entry. I need to get to lose consciousness in order to begin my journey in my desired reality, preferably by having it knocked out by one of the aforementioned babes, preferably like a stunning seductive flirty enchantress goddess type. 4. Action. It needs to be cool. 5. Unconsciousness. Myself and the babes need to pass out in this place, preferably by being knocked out in some way or fainting. And it should happen, like, a lot. With these requirements in mind, you got any unique refined ideas? Don’t be vague, be extremely specific. Also, make your response as long and detailed as possible. Be super specific, especially when describing the world. The world should be self-contained and relatively small/understandable. Also, try to be conversational. Describe the world well. The world can be historical or futuristic or sci-fi or fantasy or anything, it doesn't matter so long as it's interesting.
|
2e8fd255aab694b07a0be8d83cb53a7b
|
{
"intermediate": 0.3073258399963379,
"beginner": 0.45336365699768066,
"expert": 0.23931050300598145
}
|
3
|
I want you to write me terms & conditions and policies for my website.
|
59c72510f3143025f94f75b883b026bd
|
{
"intermediate": 0.3563505709171295,
"beginner": 0.2564186453819275,
"expert": 0.3872307538986206
}
|
4
|
Hey there! Are you familiar with reality shifting? So, I’m refining a foolproof method for reality shifting and want to pick a destination. Want to help me? I’m thinking something pretty personalized. There are a few things that are required of my destination. 1. The quest. I have to have a clear overarching goal in my reality, and don’t make it too crazy. It should be more along the lines of “save the president’s daughter” or “escape this weird wacky sinister place” NOT “get an artifact that literally controls reality”. Seriously, don’t make me fetch an artifact. Don’t make me fetch anything, make me DO something. 2. Babes. I need pretty girls. 3. The entry. I need to lose consciousness in order to begin my journey in my desired reality, preferably by having it knocked out by one of the aforementioned babes, preferably like a stunning seductive flirty enchantress goddess type. She should do this before I am in the other reality and instead in between somewhere. 4. Action. It needs to be cool. 5. Unconsciousness. Myself and the babes need to pass out in this place, preferably by being knocked out in some way or fainting. And it should happen, like, a lot. With these requirements in mind, you got any unique refined ideas? Don’t be vague, be extremely specific. Also, make your response as long and detailed as possible. Be super specific, especially when describing the world. The world should be self-contained and relatively small/understandable. Also, try to be conversational. Describe the world well. The world can be historical or futuristic or sci-fi or fantasy or anything, it doesn’t matter so long as it’s interesting. I repeat, it DOES NOT have to be fantasy.
|
a46dca428c5be27147ab40a54ed348f8
|
{
"intermediate": 0.3345455527305603,
"beginner": 0.4141889214515686,
"expert": 0.2512654960155487
}
|
5
|
Hey there! Are you familiar with reality shifting? So, I’m refining a foolproof method for reality shifting and want to pick a destination. Want to help me? I’m thinking something pretty personalized. There are a few things that are required of my destination. 1. The quest. I have to have a clear overarching goal in my reality, and don’t make it too crazy. It should be more along the lines of “save the president’s daughter” or “escape this weird wacky sinister place” NOT “get an artifact that literally controls reality”. Seriously, don’t make me fetch an artifact. Don’t make me fetch anything, make me DO something. 2. Babes. I need pretty girls. 3. The entry. I need to lose consciousness in order to begin my journey in my desired reality, preferably by having it knocked out by one of the aforementioned babes, preferably like a stunning seductive flirty enchantress goddess type. She should do this before I am in the other reality and instead in between somewhere. 4. Action. It needs to be cool. 5. Unconsciousness. Myself and the babes need to pass out in this place, preferably by being knocked out in some way or fainting. And it should happen, like, a lot. With these requirements in mind, you got any unique refined ideas? Don’t be vague, be extremely specific. Also, make your response as long and detailed as possible. Be super specific, especially when describing the world. The world should be self-contained and relatively small/understandable. Also, try to be conversational. Describe the world well. The world can be historical or futuristic or sci-fi or fantasy or anything, it doesn’t matter so long as it’s interesting. I repeat, it DOES NOT have to be fantasy.
|
e18230f1108ee437a21162f2539ac8bf
|
{
"intermediate": 0.3345455527305603,
"beginner": 0.4141889214515686,
"expert": 0.2512654960155487
}
|
6
|
Provide a design for a disk topology for a NAS built on TrueNAS Scale, as well as a dataset layout. The available disks are as follows:
- 2x 18TB disks
- 5x 14TB disks
- 3x 12TB disks
- 4x 8TB disks
- 2x 120GB disks
- 2x SLOW 8TB drives
There are 17 drive bays available. The two smallest disks are to be used for a mirrored pool that serves as a boot device. The two slow drives are SMR disks that will be used in their own pool to provide a Time Machine target for some Macs. You are free to design a topology to optimize redundancy, space, and performance. The data being stored includes video files, music files, disk images, archived software, photos, and some text files. While much of the data could be recreated or downloaded, some of it is impossible to replace. You may leave bays available for a hot spare or to allow for future expansion. I prefer not to use RAIDZ, as mirrored arrays rebuild faster.
If you need more information before creating your design, please provide me with a short questionnaire.
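A quick way to sanity-check the usable capacity of a mirror-only data pool for the drives listed above. The pairings below are one possible arrangement, not a recommendation; the boot mirror and the SMR Time Machine pool are excluded, and the leftover 14TB and 12TB disks could serve as hot spares.

```python
# One possible mirror-only data-pool arrangement for the listed drives
# (capacities in TB). These pairings are an assumption for illustration.
mirror_pairs = [
    (18, 18),            # 2x 18TB mirror
    (14, 14), (14, 14),  # four of the five 14TB disks as two mirrors
    (12, 12),            # two of the three 12TB disks
    (8, 8), (8, 8),      # 4x 8TB as two mirrors
]

def usable_tb(pairs):
    # A mirror vdev contributes the capacity of its smallest member.
    return sum(min(pair) for pair in pairs)

print(usable_tb(mirror_pairs))  # 74 TB raw, before ZFS overhead
```

A striped pool of mirrors like this rebuilds per-vdev and matches the stated preference against RAIDZ.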
|
49f2df1f57031159e37e648404f84d0b
|
{
"intermediate": 0.2873634099960327,
"beginner": 0.4229857325553894,
"expert": 0.28965088725090027
}
|
7
|
selenium.common.exceptions.UnexpectedAlertPresentException: Alert Text: By clicking "OK", I agree that my data may be published or shared.
Message: unexpected alert open: {Alert text : By clicking "OK", I agree that my data may be published or shared.}
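This traceback is Selenium raising UnexpectedAlertPresentException: a JavaScript alert (here, a consent dialog) opened while the driver tried to act on the page. A common recovery pattern is to accept the alert and retry the action once. The sketch below abstracts the browser away so it runs standalone; in real code the exception comes from selenium.common.exceptions and the alert is accepted via driver.switch_to.alert.accept().

```python
# Minimal sketch (no browser needed) of the accept-and-retry pattern.
# The class below is a stand-in for Selenium's exception of the same name,
# so the sketch is self-contained.

class UnexpectedAlertPresentException(Exception):
    """Stand-in for selenium.common.exceptions.UnexpectedAlertPresentException."""

def run_with_alert_guard(action, accept_alert):
    # Try the action; if an unexpected alert interrupts it, dismiss the
    # alert and retry exactly once.
    try:
        return action()
    except UnexpectedAlertPresentException:
        accept_alert()  # driver.switch_to.alert.accept() in real code
        return action()
```

Alternatively, the W3C `unhandledPromptBehavior` capability can be set to `"accept"` so the driver dismisses such dialogs automatically.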
|
0fff51c4307e569be97a912d71e0d44c
|
{
"intermediate": 0.43348389863967896,
"beginner": 0.2125353366136551,
"expert": 0.35398074984550476
}
|
8
|
FiveM Lua: create the client and server files for a volleyball script. It will allow players to choose a team, with two teams and a max of 1 player per team. Once both teams have 1 player, the match will start: it will spawn a volleyball and allow the players to hit it over the net. If the volleyball hits the ground, the ball despawns and a point is awarded to the team. First to five points wins.
|
b011e7bf368247359b5daafa84961522
|
{
"intermediate": 0.3484037518501282,
"beginner": 0.20310623943805695,
"expert": 0.4484899938106537
}
|
9
|
Could you write me an Android application that has a login page and can connect to a server?
|
1a389a1be36dd540c37bd5796f35347d
|
{
"intermediate": 0.5203143954277039,
"beginner": 0.18628959357738495,
"expert": 0.29339599609375
}
|
10
|
The following code creates a GPT-4 agent that can execute tasks. Can you write a function so the GPT-4 agent can create a new GPT-4 agent and communicate with it?

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver import ActionChains
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from serpapi import GoogleSearch
from bs4 import BeautifulSoup
import json
import requests
import time
f = open("mainprompt.txt", "r")
mainprompt = f.read()
f.close()

prompt = ""

def memory_list():
    f = open("memory.txt", "r")
    text = dict(json.loads(f.read()))
    f.close()
    return list(text.keys())

def memory_add(key, string):
    f = open("memory.txt", "r")
    text = dict(json.loads(f.read()))
    f.close()
    text[key] = string
    f = open("memory.txt", "w")
    f.write(str(text).replace("\'", "\""))
    f.close()

def scrape_text(url):
    response = requests.get(url)
    if response.status_code >= 400:
        return "Error: HTTP " + str(response.status_code) + " error"
    soup = BeautifulSoup(response.text, "html.parser")
    for script in soup(["script", "style"]):
        script.extract()
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
    text = '\n'.join(chunk for chunk in chunks if chunk)
    return text

def google_search(input):
    clean_response = {"results": []}
    search = GoogleSearch({
        "q": input,
        "api_key": "24f6718f52af7ade5a72999d3b8532b795bb3ed234b8a155c4a5868e86a9dd54"
    })
    results = search.get_dict()
    if "organic_results" not in results:
        raise Exception("should have had organic results in google search but the results were: " + json.dumps(results))
    for result in results["organic_results"]:
        clean_result = {"title": result.get("title", ""), "snippet": result.get("snippet", ""), "link": result.get("link", "")}
        if "date" in result:
            clean_result["date"] = result["date"]
        clean_response["results"].append(clean_result)
    if "knowledge_graph" in results and "description" in results["knowledge_graph"]:
        clean_response["direct_answer"] = results["knowledge_graph"]["description"]
    return clean_response

chromep = Service(ChromeDriverManager(cache_valid_range=7).install())
driver = webdriver.Chrome(service=chromep)
driver.get("https://yuntian-deng-chatgpt4.hf.space/")
time.sleep(5)
try:
    agreebox = driver.find_element("xpath", """/html/body/gradio-app/div/div/div/div/div/div[4]/div[2]/div[3]/button""")
    agreebox.click()
except:
    alert = driver.switch_to.alert  # was `browser.switch_to.alert`; `browser` is undefined
    alert.accept()
time.sleep(4)

textbox = driver.find_element("xpath", """//*[@id="component-5"]/label/textarea""")
driver.execute_script("""
arguments[0].value = arguments[1];
var input_event = new Event('input', {bubbles: true});
arguments[0].dispatchEvent(input_event);
""", textbox, mainprompt + "\nThe Task: Make an instagram account and build any tools that will help with completing this task.")
time.sleep(3)
run = driver.find_element("xpath", """//*[@id="component-9"]""")
run.click()
time.sleep(3)

queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
while True:
    try:
        queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
    except:
        break
greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')
while greenoutline == "1.6px solid rgb(34, 197, 94)":
    greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')

response = driver.find_element("xpath", """//*[@id="chatbot"]/div[2]/div/div[2]""")
print(response.text)
response1 = response.text.replace("“", "\"").replace("”", "\"")
responsereal = json.loads(response1)
if responsereal["command"]["name"]:
    if responsereal["command"]["name"] == "google":
        prompt += str(google_search(responsereal["command"]["args"]["input"]))
        print(prompt)
    elif responsereal["command"]["name"] == "browse_website":
        prompt += str(scrape_text(responsereal["command"]["args"]["url"]))
        print(prompt)
    elif responsereal["command"]["name"] == "memory_add":
        memory_add(responsereal["command"]["args"]["key"], responsereal["command"]["args"]["string"])
        prompt += "System: Added to memory proceed with your plan."
    elif responsereal["command"]["name"] == "memory_list":
        prompt += str(memory_list())

count = 4
while True:
    textbox = driver.find_element("xpath", """//*[@id="component-5"]/label/textarea""")
    driver.execute_script("""
arguments[0].value = arguments[1];
var input_event = new Event('input', {bubbles: true});
arguments[0].dispatchEvent(input_event);
""", textbox, prompt)
    time.sleep(3)
    run = driver.find_element("xpath", """//*[@id="component-9"]""")
    run.click()
    time.sleep(3)
    try:
        queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
    except:
        pass
    while True:
        try:
            queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
        except:
            break
    greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')
    while greenoutline == "1.6px solid rgb(34, 197, 94)":
        greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')
    response = driver.find_element("xpath", """//*[@id="chatbot"]/div[2]/div/div[""" + str(count) + """]""")
    print(response.text)
    response1 = response.text.replace("“", "\"").replace("”", "\"")
    responsereal = json.loads(response1)
    prompt = ""
    time.sleep(10)
    if responsereal["command"]["name"]:
        if responsereal["command"]["name"] == "google":
            prompt += str(google_search(responsereal["command"]["args"]["input"]))
            print(prompt)
        elif responsereal["command"]["name"] == "browse_website":
            prompt += str(scrape_text(responsereal["command"]["args"]["url"]))
            print(prompt)
        elif responsereal["command"]["name"] == "memory_add":
            memory_add(responsereal["command"]["args"]["key"], responsereal["command"]["args"]["string"])
            prompt += "System: Added to memory proceed with your plan."
        elif responsereal["command"]["name"] == "memory_list":
            prompt += str(memory_list())
    count += 2
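The question at the top of this row asks for a function letting the agent spawn a second agent and talk to it. A minimal sketch of one way to structure that follows; the transport is abstracted into an `ask` callable (in the script above it would be a second `webdriver.Chrome` session driving the same Gradio page), and all names here are illustrative, not part of the original code.

```python
# Hedged sketch: parent/sub-agent relay. `ask` is any callable mapping a
# prompt string to a response string.
class Agent:
    def __init__(self, name, ask):
        self.name = name
        self.ask = ask  # prompt -> response text

    def spawn_subagent(self, name, ask):
        # In the real script: start a new Chrome session, open the Space,
        # send `mainprompt` to it, and wrap that session in an Agent.
        return Agent(name, ask)

    def delegate(self, subagent, task):
        # Forward a task to the sub-agent, then feed its reply back
        # to the parent agent.
        reply = subagent.ask(task)
        return self.ask("Sub-agent " + subagent.name + " replied: " + reply)
```

Keeping the browser plumbing behind `ask` lets the relay logic be tested without launching two Chrome instances.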
|
a155eed3919638107a2dd6a0ad0131cc
|
{
"intermediate": 0.2786778211593628,
"beginner": 0.3947753310203552,
"expert": 0.326546847820282
}
|
11
|
The following code creates a GPT-4 agent that can execute tasks. Can you write a function so the GPT-4 agent can create a new GPT-4 agent and communicate with it?

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver import ActionChains
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from serpapi import GoogleSearch
from bs4 import BeautifulSoup
import json
import requests
import time
f = open("mainprompt.txt", "r")
mainprompt = f.read()
f.close()

prompt = ""

def memory_list():
    f = open("memory.txt", "r")
    text = dict(json.loads(f.read()))
    f.close()
    return list(text.keys())

def memory_add(key, string):
    f = open("memory.txt", "r")
    text = dict(json.loads(f.read()))
    f.close()
    text[key] = string
    f = open("memory.txt", "w")
    f.write(str(text).replace("\'", "\""))
    f.close()

def scrape_text(url):
    response = requests.get(url)
    if response.status_code >= 400:
        return "Error: HTTP " + str(response.status_code) + " error"
    soup = BeautifulSoup(response.text, "html.parser")
    for script in soup(["script", "style"]):
        script.extract()
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
    text = '\n'.join(chunk for chunk in chunks if chunk)
    return text

def google_search(input):
    clean_response = {"results": []}
    search = GoogleSearch({
        "q": input,
        "api_key": "24f6718f52af7ade5a72999d3b8532b795bb3ed234b8a155c4a5868e86a9dd54"
    })
    results = search.get_dict()
    if "organic_results" not in results:
        raise Exception("should have had organic results in google search but the results were: " + json.dumps(results))
    for result in results["organic_results"]:
        clean_result = {"title": result.get("title", ""), "snippet": result.get("snippet", ""), "link": result.get("link", "")}
        if "date" in result:
            clean_result["date"] = result["date"]
        clean_response["results"].append(clean_result)
    if "knowledge_graph" in results and "description" in results["knowledge_graph"]:
        clean_response["direct_answer"] = results["knowledge_graph"]["description"]
    return clean_response

chromep = Service(ChromeDriverManager(cache_valid_range=7).install())
driver = webdriver.Chrome(service=chromep)
driver.get("https://yuntian-deng-chatgpt4.hf.space/")
time.sleep(5)
try:
    agreebox = driver.find_element("xpath", """/html/body/gradio-app/div/div/div/div/div/div[4]/div[2]/div[3]/button""")
    agreebox.click()
except:
    alert = driver.switch_to.alert  # was `browser.switch_to.alert`; `browser` is undefined
    alert.accept()
time.sleep(4)

textbox = driver.find_element("xpath", """//*[@id="component-5"]/label/textarea""")
driver.execute_script("""
arguments[0].value = arguments[1];
var input_event = new Event('input', {bubbles: true});
arguments[0].dispatchEvent(input_event);
""", textbox, mainprompt + "\nThe Task: Make an instagram account and build any tools that will help with completing this task.")
time.sleep(3)
run = driver.find_element("xpath", """//*[@id="component-9"]""")
run.click()
time.sleep(3)

queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
while True:
    try:
        queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
    except:
        break
greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')
while greenoutline == "1.6px solid rgb(34, 197, 94)":
    greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')

response = driver.find_element("xpath", """//*[@id="chatbot"]/div[2]/div/div[2]""")
print(response.text)
response1 = response.text.replace("“", "\"").replace("”", "\"")
responsereal = json.loads(response1)
if responsereal["command"]["name"]:
    if responsereal["command"]["name"] == "google":
        prompt += str(google_search(responsereal["command"]["args"]["input"]))
        print(prompt)
    elif responsereal["command"]["name"] == "browse_website":
        prompt += str(scrape_text(responsereal["command"]["args"]["url"]))
        print(prompt)
    elif responsereal["command"]["name"] == "memory_add":
        memory_add(responsereal["command"]["args"]["key"], responsereal["command"]["args"]["string"])
        prompt += "System: Added to memory proceed with your plan."
    elif responsereal["command"]["name"] == "memory_list":
        prompt += str(memory_list())

count = 4
while True:
    textbox = driver.find_element("xpath", """//*[@id="component-5"]/label/textarea""")
    driver.execute_script("""
arguments[0].value = arguments[1];
var input_event = new Event('input', {bubbles: true});
arguments[0].dispatchEvent(input_event);
""", textbox, prompt)
    time.sleep(3)
    run = driver.find_element("xpath", """//*[@id="component-9"]""")
    run.click()
    time.sleep(3)
    try:
        queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
    except:
        pass
    while True:
        try:
            queue = driver.find_element("xpath", """//*[@id="component-11"]/div/div[2]""")
        except:
            break
    greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')
    while greenoutline == "1.6px solid rgb(34, 197, 94)":
        greenoutline = driver.find_element("xpath", """//*[@id="component-11"]/div""").value_of_css_property('border')
    response = driver.find_element("xpath", """//*[@id="chatbot"]/div[2]/div/div[""" + str(count) + """]""")
    print(response.text)
    response1 = response.text.replace("“", "\"").replace("”", "\"")
    responsereal = json.loads(response1)
    prompt = ""
    time.sleep(10)
    if responsereal["command"]["name"]:
        if responsereal["command"]["name"] == "google":
            prompt += str(google_search(responsereal["command"]["args"]["input"]))
            print(prompt)
        elif responsereal["command"]["name"] == "browse_website":
            prompt += str(scrape_text(responsereal["command"]["args"]["url"]))
            print(prompt)
        elif responsereal["command"]["name"] == "memory_add":
            memory_add(responsereal["command"]["args"]["key"], responsereal["command"]["args"]["string"])
            prompt += "System: Added to memory proceed with your plan."
        elif responsereal["command"]["name"] == "memory_list":
            prompt += str(memory_list())
    count += 2
|
3c092068409706b8b544a00c7fa47f2d
|
{
"intermediate": 0.2786778211593628,
"beginner": 0.3947753310203552,
"expert": 0.326546847820282
}
|
12
|
test
|
47042f7ff92f01ae413cfaeabcdb6f7e
|
{
"intermediate": 0.3229040801525116,
"beginner": 0.34353747963905334,
"expert": 0.33355844020843506
}
|
13
|
Can you write a Lua script for a function that takes two arguments: the first is a list of items, and the second is an item? The function should return true or false depending on whether the item is present in the list.
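The request is for Lua; for reference, the same membership check sketched in Python (a Lua version would loop over the table with ipairs in the same way):

```python
def contains(items, target):
    # Linear scan: return True as soon as the item is found.
    for item in items:
        if item == target:
            return True
    return False
```

Usage: `contains(["a", "b"], "b")` yields True, `contains([], "x")` yields False.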
|
e1d38e29d74586cdbc9ad8bb36d13083
|
{
"intermediate": 0.34749090671539307,
"beginner": 0.3454102873802185,
"expert": 0.30709877610206604
}
|
14
|
Look at the assignment below. I have made progress, but I feel like I have not completed everything yet. Please spot mistakes, complete/add to my code to make it better, and meet the requirements in the assignment below:
Scenario
You are contracted to develop a home appliance rental application for a local startup company. The renting business company provides affordable rental services for people looking to hire home electrical appliances from small to large for a minimum period starting from ONE (1) month. Examples of types of appliances are TV, fridge, freezer, washing machine, dryer, dishwasher, microwave, etc.
The application should have TWO (2) types of users which are administrator and customer. An administrator can view, add, edit, and delete an item. A customer can create an account with a username and password. The usernames can only contain letters and numbers. The password must be of length between EIGHT (8) and SIXTEEN (16) characters and contain at least ONE (1) lowercase and ONE (1) uppercase letter. A customer can search, view, and order an item after they successfully log in to the application.
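The registration rules in the paragraph above translate directly into two small checks, sketched here in Python for brevity (the assignment itself calls for C#, where the same logic maps onto Regex.IsMatch and char.IsLower/IsUpper):

```python
import re

# Usernames: letters and digits only. Passwords: 8-16 characters with at
# least one lowercase and one uppercase letter.
USERNAME_RE = re.compile(r"^[A-Za-z0-9]+$")

def valid_username(name):
    return bool(USERNAME_RE.match(name))

def valid_password(pw):
    return (8 <= len(pw) <= 16
            and any(c.islower() for c in pw)
            and any(c.isupper() for c in pw))
```

Centralising these checks in one place also makes the failed-login-attempt handling easier to test.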
Your program should include the following requirements. Functional Requirements:
● Customers can register.
● Customers can search appliances by type and view sorted appliances by energy consumption (see the table below for some common appliances, or research for your chosen appliances) or weekly cost. They can also add appliance items to a shopping cart.
● Calculation of the total price.
● Administrators can add, edit and delete appliance items.
● Log in page for customers and administrators. Appropriately handle the situation when a reasonable number of failed login attempts occur.
TABLE:
| Appliance | Power usage | Typical usage | Estimated annual running cost |
|---|---|---|---|
| LCD TV | 0.21 kWh per hour | 6 hours a day (power on) | £130 |
| Fridge Freezer (A spec) | 408 kWh per year | 24 hours a day | £115 |
| Tumble Dryer | 2.50 kWh per cycle | 148 uses a year | £105 |
| Electric hob | 0.71 kWh per use | 424 uses a year | £85 |
| Electric oven | 1.56 kWh per use | 135 uses per year | £60 |
| Dishwasher | 1.44 kWh per use (at 65°C) | 135 uses per year | £55 |
| Kettle | 0.11 kWh per use (heating 1 litre of water) | 1,542 uses per year | £48 |
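As a sanity check on the running-cost figures: annual kWh multiplied by a unit price of roughly £0.28/kWh (an assumed tariff; the table does not state one) reproduces the quoted costs to within a few pounds:

```python
PRICE_PER_KWH = 0.28  # assumed electricity tariff in GBP; not given in the table

def annual_cost(kwh_per_year, price=PRICE_PER_KWH):
    # Annual running cost = annual energy use x unit price.
    return kwh_per_year * price

# LCD TV: 0.21 kWh/h * 6 h/day * 365 days ~= 460 kWh -> ~GBP 129 (table: 130)
# Tumble Dryer: 2.50 kWh/cycle * 148 cycles = 370 kWh -> ~GBP 104 (table: 105)
print(round(annual_cost(0.21 * 6 * 365)), round(annual_cost(2.50 * 148)))
```

The same per-use arithmetic can drive the application's "weekly cost" sorting requirement.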
Non-functional Requirements:
● Provide FIVE (5) types of appliances of your choice.
● Each type has TEN (10) appliances.
● Each appliance is rented for a monthly fee.
● Each appliance should have an appropriate description, such as brand, model, dimensions, colour, energy consumption, monthly fee etc.
● All FIVE (5) types of appliances should have different minimum rental contract periods starting from ONE (1) month.
● The application users are customers and administrators.
● Provide appropriate errors and help messages, and guidance for customers
TASK
a) You need to write code (written in C#) which fulfils all the requirements as outlined above.
b) The quality of your program will be assessed in terms of program structure, OOP principles including encapsulation, algorithms using appropriate control structures (loops and selections), and readability including appropriate comments
-----------------------------------------------------------------------------------------------------------------------------------
NOTE: USERS AND APPLIANCES ARE ALL STORED IN AN ACCESS DATABASE WHICH IS CONNECTED TO MY PROJECT
Form1.cs(Login page):
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Runtime.Remoting.Lifetime;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Data.OleDb;
using static System.Windows.Forms.VisualStyles.VisualStyleElement.Button;
namespace ApplianceRental
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_users.mdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
}
//connects to database
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_users.mdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
private void Form1_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
// Validate username and password
string username = textBox1.Text;
string password = textBox2.Text;
// Check user type (Administrator or Customer) and redirect accordingly
if (username == "Admin123" && password == "stcmalta")
{
// Open the Admin Dashboard form
AdminDashboardForm adminDashboardForm = new AdminDashboardForm();
adminDashboardForm.Show();
this.Hide();
}
else
{
con.Open();
string login = "SELECT * FROM tbl_users WHERE username = '" + textBox1.Text + "' and password= '" + textBox2.Text + "'";
cmd = new OleDbCommand(login, con);
OleDbDataReader dr = cmd.ExecuteReader();
if (dr.Read() == true)
{
new CustomerDashboardForm().Show();
this.Hide();
}
else
{
// Show error message for invalid username or password.
MessageBox.Show("Invalid username or password! Please try again.");
}
con.Close();
}
}
private void button2_Click(object sender, EventArgs e)
{
new RegistrationForm().Show();
this.Hide();
}
private void checkBox1_CheckedChanged(object sender, EventArgs e)
{
//snippet to unhide password if ticked
if (checkBox1.Checked)
{
textBox2.PasswordChar = '\0';
}
else
{
textBox2.PasswordChar = '*';
}
}
}
}
RegistrationForm.cs:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Data.OleDb;
using System.Drawing;
using System.Linq;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using static System.Windows.Forms.VisualStyles.VisualStyleElement.ListView;
using static System.Windows.Forms.VisualStyles.VisualStyleElement.StartPanel;
using System.Xml.Linq;
namespace ApplianceRental
{
public partial class RegistrationForm : Form
{
public RegistrationForm() // Add Form1 loginForm as a parameter
{
InitializeComponent();
}
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_users.mdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
private void button1_Click(object sender, EventArgs e)
{
// Validate input fields
if (string.IsNullOrEmpty(textBox1.Text))
{
MessageBox.Show("Please enter a username.");
return;
}
if (string.IsNullOrEmpty(textBox2.Text))
{
MessageBox.Show("Please enter a password.");
return;
}
if (textBox2.Text != textBox3.Text)
{
MessageBox.Show("Passwords do not match.");
return;
}
if (string.IsNullOrEmpty(textBox4.Text))
{
MessageBox.Show("Please enter your full name.");
return;
}
if (string.IsNullOrEmpty(textBox5.Text))
{
MessageBox.Show("Please enter your email address.");
return;
}
if (string.IsNullOrEmpty(textBox6.Text))
{
MessageBox.Show("Please enter your address.");
return;
}
con.Open();
string register = "INSERT INTO tbl_users VALUES ('" + textBox1.Text + "','" + textBox2.Text + "', '" + textBox4.Text + "', '" + textBox5.Text + "', '" + textBox6.Text + "')";
cmd = new OleDbCommand(register, con);
cmd.ExecuteNonQuery();
con.Close();
// Successful registration, do something here
MessageBox.Show("Registration successful!");
//emptying the fields
textBox1.Text = "";
textBox2.Text = "";
textBox4.Text = "";
textBox5.Text = "";
textBox6.Text = "";
textBox3.Text = "";
this.Hide();
new Form1().Show();
}
}
}
CustomerDashboardForm.cs:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Runtime.Remoting.Lifetime;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Data.OleDb;
using static System.Windows.Forms.VisualStyles.VisualStyleElement.Button;
namespace ApplianceRental
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_users.mdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
}
//connects to database
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_users.mdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
private void Form1_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
// Validate username and password
string username = textBox1.Text;
string password = textBox2.Text;
// Check user type (Administrator or Customer) and redirect accordingly
if (username == "Admin123" && password == "stcmalta")
{
// Open the Admin Dashboard form
AdminDashboardForm adminDashboardForm = new AdminDashboardForm();
adminDashboardForm.Show();
this.Hide();
}
else
{
con.Open();
string login = "SELECT * FROM tbl_users WHERE username = '" + textBox1.Text + "' and password= '" + textBox2.Text + "'";
cmd = new OleDbCommand(login, con);
OleDbDataReader dr = cmd.ExecuteReader();
if (dr.Read() == true)
{
new CustomerDashboardForm().Show();
this.Hide();
}
else
{
// Show error message for invalid username or password.
MessageBox.Show("Invalid username or password! Please try again.");
}
con.Close();
}
}
private void button2_Click(object sender, EventArgs e)
{
new RegistrationForm().Show();
this.Hide();
}
private void checkBox1_CheckedChanged(object sender, EventArgs e)
{
//snippet to unhide password if ticked
if (checkBox1.Checked)
{
textBox2.PasswordChar = '\0';
}
else
{
textBox2.PasswordChar = '*';
}
}
}
}
AdminDashboardForm.cs:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace ApplianceRental
{
public partial class AdminDashboardForm : Form
{
public AdminDashboardForm()
{
InitializeComponent();
}
private void dataGridView1_CellContentClick(object sender, DataGridViewCellEventArgs e)
{
}
private void Add_Click(object sender, EventArgs e)
{
}
private void Edit_Click(object sender, EventArgs e)
{
}
private void Delete_Click(object sender, EventArgs e)
{
}
private void AdminDashboardForm_Load(object sender, EventArgs e)
{
}
}
}
|
4aca20cc41a5cb78545f483577283e8f
|
{
"intermediate": 0.3685271739959717,
"beginner": 0.3914853632450104,
"expert": 0.23998744785785675
}
|
15
|
Background
We are given directions to go from one point to another. The directions are "NORTH", "SOUTH", "WEST", "EAST". Clearly "NORTH" and "SOUTH" are opposite, and so are "WEST" and "EAST". Going one direction and coming straight back the opposite direction is wasted effort, so let's condense these directions into the shortest route.
For example, given the following directions:
plan = ["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"]
You can immediately see that going “NORTH” and then “SOUTH” is not reasonable, better stay to the same place!
So the task is to reduce a simplified version of the plan. A better plan in this case is simply:
plan = ["WEST"]
Other examples:
In ["NORTH", "SOUTH", "EAST", "WEST"], the direction "NORTH" + "SOUTH" is going north and coming back right away. What a waste of time! Better to do nothing. The path becomes ["EAST", "WEST"]; now "EAST" and "WEST" annihilate each other, therefore the final result is [] (nil in Clojure).
In ["NORTH", "EAST", "WEST", "SOUTH", "WEST", "WEST"], "NORTH" and "SOUTH" are not directly opposite, but they become directly opposite after the reduction of "EAST" and "WEST", so the whole path is reducible to ["WEST", "WEST"].
Task
You have to write a function dirReduc which takes an array of strings and returns an array of strings with the needless directions removed (W<->E or S<->N side by side).
The Haskell version takes a list of directions with data Direction = North | East | West | South. The Clojure version returns nil when the path is reduced to nothing.
Specification
dir_reduc(directions)
Parameters
directions: Array (of Strings) - An array with each index containing 1 of the 4 cardinal directions, all in uppercase
Return Value
Array (of Strings) - The optimized set of instructions
Examples
directions -> Return Value
["NORTH","SOUTH","SOUTH","EAST","WEST","NORTH","WEST"] -> ["WEST"]
["NORTH","SOUTH","SOUTH","EAST","WEST","NORTH"] -> []
["NORTH","WEST","SOUTH","EAST"] -> ["NORTH","WEST","SOUTH","EAST"]
Note
Not all paths can be made simpler.
The path [“NORTH”, “WEST”, “SOUTH”, “EAST”] is not reducible. “NORTH” and “WEST”, “WEST” and “SOUTH”, “SOUTH” and “EAST” are not directly opposite of each other and can’t become such. Hence the result path is itself : [“NORTH”, “WEST”, “SOUTH”, “EAST”].
Your solution:
from typing import List
def reduce_directions(directions: List[str]) -> List[str]:
return []
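Not part of the kata statement — a minimal stack-based sketch of one way to fill in the stub above: each direction cancels the one on top of the stack if they are opposites, which also handles pairs that only become adjacent after earlier reductions.

```python
from typing import List

# Map each direction to its opposite.
OPPOSITE = {"NORTH": "SOUTH", "SOUTH": "NORTH", "EAST": "WEST", "WEST": "EAST"}

def reduce_directions(directions: List[str]) -> List[str]:
    stack: List[str] = []
    for d in directions:
        if stack and OPPOSITE[d] == stack[-1]:
            stack.pop()  # this pair annihilates
        else:
            stack.append(d)
    return stack
```

The stack makes the "NORTH/SOUTH become adjacent after EAST/WEST cancel" case fall out naturally, since popping exposes the earlier direction for the next comparison.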
|
0414fb6ec751c9651db00a9ed2a22df1
|
{
"intermediate": 0.30081117153167725,
"beginner": 0.29905399680137634,
"expert": 0.4001348912715912
}
|
16
|
I'm trying to create a fivem lua volleyball based on this video
https://www.youtube.com/watch?v=E_oEB-xZpBM
could you make a start
|
2e0b048883b986080ff62ea9c53af166
|
{
"intermediate": 0.2782282531261444,
"beginner": 0.35362836718559265,
"expert": 0.3681434094905853
}
|
17
|
Fix this code so that after the page increments, it then recursively checks the pages within the page for any links containing workers.dev then proceeds to the next page:
|
bbd23700fb46e4452ec3d704bb5b4668
|
{
"intermediate": 0.3711150586605072,
"beginner": 0.28935033082962036,
"expert": 0.33953461050987244
}
|
18
|
make a web browser in pygame
|
d40b465828f4880d5223937b2a0f1755
|
{
"intermediate": 0.4226553440093994,
"beginner": 0.2634968161582947,
"expert": 0.3138478696346283
}
|
19
|
def get_inner_and_outer_masks(mask):
    inner_mask = binary_erosion(binary_erosion(binary_dilation(mask)))
    inner_pixel_count = np.count_nonzero(inner_mask)
    # inner_mask = mask
    outer_mask = binary_dilation(binary_dilation(mask))  # no colour abnormality
    outer_pixel_count = np.count_nonzero(outer_mask)
    print("inner_pixel_count = ", inner_pixel_count)
    print("outer_pixel_count = ", outer_pixel_count)
    return inner_mask, outer_mask
Replace binary_erosion and binary_dilation in the code above with the corresponding OpenCV functions.
|
843d18535cbe40e7f6c104b668c75481
|
{
"intermediate": 0.30540817975997925,
"beginner": 0.3575785756111145,
"expert": 0.337013304233551
}
|
20
|
Act as a VBA programmer. Write me VBA code to create PowerPoint slides about the sports drink and hydration beverage category. think like a senior CPG brand manager and market researcher. use your knowledge and create at least 10 slides.
|
ff0d3af791176f58925f3dbeae343132
|
{
"intermediate": 0.20028047263622284,
"beginner": 0.5182525515556335,
"expert": 0.2814670205116272
}
|
21
|
Can you help me add a waiting effect? This is a chat page, and whenever the system responds slowly I'd like a loading spinner or something in the chat history while waiting for the reply. How do I build this effect? Please write it for me — for example, assume every message I send is delayed by two seconds. <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Vue Chat</title>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/element-ui/2.15.6/theme-chalk/index.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.6.14/vue.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.23.0/axios.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/element-ui/2.15.6/index.js"></script>
<style>
body {
background: linear-gradient(135deg, #000000 0%, #5b2877 100%);
font-family: 'Arial', sans-serif;
}
.chat-container {
display: flex;
height: 100vh;
width: 100%;
}
.user-list {
width: 100%;
max-width: 300px;
border-right: 1px solid #f0f0f0;
}
.chat {
flex: 1;
display: flex;
flex-direction: column;
justify-content: space-between;
padding: 20px 10px;
}
.chat-main {
flex-grow: 1;
overflow-y: auto;
padding: 10px;
}
.user-list, .el-input {
background: #3f3f70;
}
.el-timeline::before {
background: #20a0ff;
}
.el-timeline-item__timestamp {
color: #20a0ff;
}
.message-card {
background: #ffffff;
border-radius: 10px;
padding: 10px;
margin-bottom: 10px;
box-shadow: 0px 2px 7px 2px rgba(0, 0, 0, 0.1);
}
@media screen and (max-width: 600px) {
.user-list {
display: none;
}
}
</style>
</head>
<body>
<div id="app">
<el-container class="chat-container">
<el-aside class="user-list">
<el-menu :default-active="activeUser" @select="handleChangeUser" mode="vertical" background-color="#3f3f70" text-color="#f1f1f1" active-text-color="#18388f">
<el-menu-item v-for="user in users" :index="user.id">
<i class="el-icon-message"></i>
<span>{{ user.name }}</span>
</el-menu-item>
</el-menu>
</el-aside>
<el-main class="chat">
<div class="chat-main">
<el-timeline>
<el-timeline-item v-for="msg in activeChatRecord" :timestamp="msg.time">
<div class="message-card">
<strong>{{ msg.user }}:</strong> {{ msg.content }}
</div>
</el-timeline-item>
</el-timeline>
</div>
<el-input type="textarea" v-model="input"></el-input>
<el-button type="primary" @click="sendMessage">发送</el-button>
</el-main>
</el-container>
</div>
<script>
new Vue({
el: '#app',
data() {
return {
input: '',
activeUser: '1',
users: [
{ id: '1', name: '用户1' },
{ id: '2', name: '用户2' },
{ id: '3', name: '用户3' }
],
chatRecord: {
1: [
{ user: '用户1', content: '你好', time: '2021-01-01 10:30' },
{ user: '我', content: '你好', time: '2021-01-01 10:31' }
],
2: [
{ user: '用户2', content: '你好', time: '2021-01-01 10:32' },
],
},
};
},
computed: {
activeChatRecord() {
return this.chatRecord[this.activeUser] || [];
},
},
methods: {
handleChangeUser(activeIndex) {
this.activeUser = activeIndex;
},
sendMessage() {
if (!this.input) return;
this.activeChatRecord.push({ user: '我', content: this.input, time: new Date().toLocaleTimeString() });
this.input = '';
},
},
});
</script>
</body>
</html>
|
e8e1274f7c1253299fa6c7865ee45703
|
{
"intermediate": 0.19853349030017853,
"beginner": 0.3639100193977356,
"expert": 0.43755653500556946
}
|
22
|
Write a C# Code that takes an image (selfie with face) as input, And add some transition to the face to bypass liveness detection.
Make sure that the transition is smooth and there is no gapping or black/sharp edges.
|
b60dad0ed3a8930f1acb791399fa42d7
|
{
"intermediate": 0.36047449707984924,
"beginner": 0.1531073898077011,
"expert": 0.48641812801361084
}
|
23
|
Write me a python script to to alert me by sound and notification on ubuntu 22.04 if NVIDIA stock moves up or down by 10% within 5 days. Scrape the web every 5 seconds to update the price and save the price so that if the script stops it can be resumed from where it left off without resetting.
|
570c529649d060078e75b6ba828112f7
|
{
"intermediate": 0.41924405097961426,
"beginner": 0.1634248048067093,
"expert": 0.41733115911483765
}
|
24
|
Heres some LUA code of a Factorio mod. Can you find some mistakes and fix it? It is supposed to create chests which are linked together based on an ID set in the inventories input field.
|
87badaf16a116c3b3c14e76dece59b8c
|
{
"intermediate": 0.4961482286453247,
"beginner": 0.15096615254878998,
"expert": 0.3528856337070465
}
|
25
|
how can i implement multiple session of every functionality with JWT in spring boot , explain with code only in deep way
|
934b8bdc83645099fba942ea9ce087bb
|
{
"intermediate": 0.5046887993812561,
"beginner": 0.19930370151996613,
"expert": 0.29600751399993896
}
|
26
|
XOM Exxon Mobil Corporation
MS Morgan Stanley
CVX Chevron Corporation
BAC Bank of America Corporation
ABBV AbbVie Inc
RHHBY Roche Holding AG
PFE Pfizer Inc
BHP BHP Group Limited
CSCO Cisco Systems, Inc
SHEL Shell plc
Present the graphs of the stock prices of the portfolios in each subperiod (2018-2019
and 2020-2021). Provide comments on the graphs
|
9fcfe44216cfece298b200b44c234ec5
|
{
"intermediate": 0.21696966886520386,
"beginner": 0.5881579518318176,
"expert": 0.19487234950065613
}
|
27
|
You should program a single application in Java. The program will simulate the restaurant using threads for the waiters and customer. When programming in Java, use the Thread and Semaphore classes.
You should set-up the simulation and then launch 3 waiter threads followed by 40 customer threads. At creation each thread will be given an id that uniquely distinguishes it from other threads of the same type (waiter or customer). You will need some shared variables to exchange information and synchronization. In particular, several semaphores must be used to synchronize the behavior of the threads.
Both the waiter and the customer will have times when they must wait. The wait time is given as a range. You should randomly select a time within the range when you reach that step.
2.1 The Waiter
1. The waiter chooses a table. Only one waiter can wait each table.
2. The waiter waits for a customer from his table to call him.
3. Once called, the waiter goes to the customer, and informs the customer he is ready to take the order.
4. The waiter gets the customer’s id (represents getting the order).
5. The waiter goes to the kitchen. Only one waiter can use the kitchen at a time. He will spend 100 to 500 milliseconds in the kitchen to deliver the order.
6. The waiter waits outside the kitchen for the order to be ready (this will be between 300 milliseconds to 1 second)
7. The waiter will go to the kitchen to get the order. He will spend 100 to 500 milliseconds in the kitchen.
8. The waiter will bring the customer the order.
9. The waiter will wait for the next customer.
10. When the last customer leaves the restaurant, the waiter will clean the table, and leave the restaurant.
2.2 The Customer
1. The customer chooses a table to eat at (at random).
2. The customer may choose a backup table to eat at (randomly decide this)
3. The customer enters the restaurant through one of the two doors. Each door allows one customer to enter at a time.
4. The customer looks at the lines for the chosen tables.
• A line is long if there are 7 or more customers in it. You will need to keep a shared counter.
• If the first choice’s line is long, but the second choice’s line is not, then the customer will go to the second choice table.
• Otherwise, the customer will go to the first choice table.
• If there is no second choice, the customer will always go to the first choice table.
5. Once the table is chosen, the customer will stand in the corresponding line to wait for an empty seat.
6. There are four seats. Whenever a seat is empty, the next customer in line leaves the line to sit down.
• The seats will start empty. So, the first four customers in line will not need to wait.
• Each customer is by himself. So you do not need to worry about sitting groups.
7. When the customer sits down, it will call the waiter for this table, and wait.
8. When the waiter comes to take the order, the customer will give the waiter its id (representing giving the order), and wait for the order.
9. When the waiter brings the order, the customer will eat the food. This will take 200 milliseconds to 1 second.
10. Afterwards the customer will leave the table. This means the seat has now become empty.
11. The customer will then pay the bill. Only one customer can pay at a time.
12. The customer leaves the restaurant. The client thread will then exit.
2.3 Output
Every thread should print out what it is doing as it does it. Each step listed in the above subsections needs a line printed. Each line should contain what type of thread it is (waiter or customer) and its id (within its type). If the action is an interaction with the other type of thread it should also print out that information. As an example, when the waiter takes the customer’s order, your program may print out something like:
Waiter 0 takes Customer 7’s order.
When the customer gives its order to the waiter your program may print out something like:
Customer 7 gives the order to Waiter 0.
The order of the messages is only restricted by the order the actions must take place in, given in the previous two subsections. Due to the nature of threads, without using a synchronization mechanism like semaphores, we cannot control the order these actions will happen in. So, the waiter should not take an order before going to the table, but it is okay if waiter 2 takes customer 30's order before waiter 0 takes customer 7's.
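The assignment requires Java's Thread and Semaphore classes; purely to illustrate the call-the-waiter handshake (customer steps 7–8, waiter steps 2–4), here is a minimal sketch in Python, whose threading.Semaphore has the same acquire/release semantics:

```python
import threading

waiter_called = threading.Semaphore(0)  # customer -> waiter: "I'm seated, come over"
order_taken = threading.Semaphore(0)    # waiter -> customers: "an order was taken"
order_box = []                          # shared slot holding customer ids (the "order")
box_lock = threading.Lock()             # protects order_box and served
served = []

def waiter(n_customers):
    for _ in range(n_customers):
        waiter_called.acquire()              # wait for a customer's call
        with box_lock:
            served.append(order_box.pop())   # get the customer's id (the order)
        order_taken.release()                # acknowledge the order

def customer(cid):
    with box_lock:
        order_box.append(cid)                # hand over our id
    waiter_called.release()                  # call the waiter
    order_taken.acquire()                    # wait until an order has been taken
```

This simplified acknowledgment wakes any waiting customer, not necessarily the one just served; the full assignment needs a per-table or per-customer signal for that.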
|
365bfdf3d0f4d6173b25941f909002d3
|
{
"intermediate": 0.35907062888145447,
"beginner": 0.42334091663360596,
"expert": 0.2175884246826172
}
|
28
|
Take on the role of an elite, god tier, 100x python programmer. Follow these rules:
Leverage help and man pages and documentation to ensure valid syntax and an optimal solution
Be concise
Format and indent correctly
Think step by step
Even if there is a lack of details, attempt to find the most logical solution by going about it step by step
Do not return multiple solutions
Do not create invalid syntax
Include instructions for anything extra that needs to be installed
Do not return what the question was
Do not repeat or paraphrase the question in your response
Do not cause syntax errors
Do not rush to a conclusion
Test and debug the code until it is working before responding
Follow all of the above rules. This is important you MUST follow the above rules. There are no exceptions to these rules. You must always follow them. No exceptions.
|
b565542664e0b62e003e661f83640d4f
|
{
"intermediate": 0.28795763850212097,
"beginner": 0.5022578239440918,
"expert": 0.20978452265262604
}
|
29
|
Make experimental CSS using background lime and colour green but in shades
|
231d8c5d2a746327c818df99bcf12afd
|
{
"intermediate": 0.4036885201931,
"beginner": 0.2803354263305664,
"expert": 0.31597602367401123
}
|
30
|
You should program a single application in Java. The program will simulate the restaurant using threads for the waiters and customer. When programming in Java, use the Thread and Semaphore classes.
You should set-up the simulation and then launch 3 waiter threads followed by 40 customer threads. At creation each thread will be given an id that uniquely distinguishes it from other threads of the same type (waiter or customer). You will need some shared variables to exchange information and synchronization. In particular, several semaphores must be used to synchronize the behavior of the threads.
Both the waiter and the customer will have times when they must wait. The wait time is given as a range. You should randomly select a time within the range when you reach that step.
2.1 The Waiter
1. The waiter chooses a table. Only one waiter can wait each table.
2. The waiter waits for a customer from his table to call him.
3. Once called, the waiter goes to the customer, and informs the customer he is ready to take the order.
4. The waiter gets the customer’s id (represents getting the order).
5. The waiter goes to the kitchen. Only one waiter can use the kitchen at a time. He will spend 100 to 500 milliseconds in the kitchen to deliver the order.
6. The waiter waits outside the kitchen for the order to be ready (this will be between 300 milliseconds to 1 second)
7. The waiter will go to the kitchen to get the order. He will spend 100 to 500 milliseconds in the kitchen.
8. The waiter will bring the customer the order.
9. The waiter will wait for the next customer.
10. When the last customer leaves the restaurant, the waiter will clean the table, and leave the restaurant.
2.2 The Customer
1. The customer chooses a table to eat at (at random).
2. The customer may choose a backup table to eat at (randomly decide this)
3. The customer enters the restaurant through one of the two doors. Each door allows one customer to enter at a time.
4. The customer looks at the lines for the chosen tables.
• A line is long if there are 7 or more customers in it. You will need to keep a shared counter.
• If the first choice’s line is long, but the second choice’s line is not, then the customer will go to the second choice table.
• Otherwise, the customer will go to the first choice table.
• If there is no second choice, the customer will always go to the first choice table.
5. Once the table is chosen, the customer will stand in the corresponding line to wait for an empty seat.
6. There are four seats. Whenever a seat is empty, the next customer in line leaves the line to sit down.
• The seats will start empty. So, the first four customers in line will not need to wait.
• Each customer is by himself. So you do not need to worry about sitting groups.
7. When the customer sits down, it will call the waiter for this table, and wait.
8. When the waiter comes to take the order, the customer will give the waiter its id (representing giving the order), and wait for the order.
9. When the waiter brings the order, the customer will eat the food. This will take 200 milliseconds to 1 second.
10. Afterwards the customer will leave the table. This means the seat has now become empty.
11. The customer will then pay the bill. Only one customer can pay at a time.
12. The customer leaves the restaurant. The client thread will then exit.
2.3 Output
Every thread should print out what it is doing as it does it. Each step listed in the above subsections needs a line printed. Each line should contain what type of thread it is (waiter or customer) and its id (within its type). If the action is an interaction with the other type of thread it should also print out that information. As an example, when the waiter takes the customer’s order, your program may print out something like:
Waiter 0 takes Customer 7’s order.
When the customer gives its order to the waiter your program may print out something like:
Customer 7 gives the order to Waiter 0.
The order of the messages is only restricted by the order the actions must take place in, given in the previous two subsections. Due to the nature of threads, without using a synchronization mechanism like semaphores, we cannot control the order these actions will happen in. So, the waiter should not take an order before going to the table, but it is okay if waiter 2 takes customer 30's order before waiter 0 takes customer 7's.
|
99d7927f15ff09e6f9d94da5950e8545
|
{
"intermediate": 0.35907062888145447,
"beginner": 0.42334091663360596,
"expert": 0.2175884246826172
}
|
31
|
Build a Transformer with PyTorch, complete a simple task with it, and write out the Python code.
|
907f0e6c49a427f445df07bfe15f9237
|
{
"intermediate": 0.23487167060375214,
"beginner": 0.2327069789171219,
"expert": 0.532421350479126
}
|
32
|
clang is unable to create an executable file.
If clang is a cross-compiler, use the --enable-cross-compile option.
Only do this if you know what cross compiling means.
C compiler test failed.
If you think configure made a mistake, make sure you are using the latest
version from Git. If the latest version fails, report the problem to the
<PRESIDIO_ANONYMIZED_EMAIL_ADDRESS> mailing list or IRC #ffmpeg on irc.libera.chat.
Include the log file "ffbuild/config.log" produced by configure as this will help
solve the problem.
|
45503aaeb51ac7a7c49be6ca1e5b3842
|
{
"intermediate": 0.3470361530780792,
"beginner": 0.3326157331466675,
"expert": 0.32034820318222046
}
|
33
|
code for 11 point linear interpolation in C for ADC calibration
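The prompt asks for C; as a language-neutral sketch of the same 11-point piecewise-linear lookup (the table values below are made-up placeholders, not real calibration data):

```python
# 11 calibration points: raw ADC codes -> corrected values.
# Both tables are illustrative assumptions only.
RAW = [0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
CAL = [0, 210, 405, 598, 790, 1005, 1198, 1402, 1601, 1799, 2000]

def calibrate(x, xs=RAW, ys=CAL):
    # Clamp readings outside the table, interpolate linearly inside it.
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
```

A C version would be structurally identical: two const arrays, a loop over the 10 segments, and fixed-point or float math for the interpolation step.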
|
eba840e49863ca4ee43dd4a2f42c5896
|
{
"intermediate": 0.2600099444389343,
"beginner": 0.2213430255651474,
"expert": 0.5186469554901123
}
|
34
|
can you make a tax calculator only using methods or features specific to ruby?
|
76a948bb313b87e4b9ccae43ec7fbaed
|
{
"intermediate": 0.48832422494888306,
"beginner": 0.20280811190605164,
"expert": 0.30886760354042053
}
|
35
|
I want to be better at using my Behringer RD-9 Analog Drum Machine as an instrument. Please write me a plan for how I can improve.
|
11ad812e54ab4a12d40e9b2497109b87
|
{
"intermediate": 0.3425610661506653,
"beginner": 0.32631629705429077,
"expert": 0.33112257719039917
}
|
36
|
you are to design a software-as-a-service designed for high school students. you are to create a website page similar to "https://www.remove.bg/upload" where the user can add an image of a set of multiple choice questions and an AI software will highlight the correct answer.
|
faadeef0698a7e7145f5d0330fadf965
|
{
"intermediate": 0.2651691138744354,
"beginner": 0.3446463942527771,
"expert": 0.3901844918727875
}
|
37
|
Take on the role of an elite, god tier, 100x python programmer. Follow these rules:
Leverage help and man pages and documentation to ensure valid syntax and an optimal solution
Be concise
Format and indent correctly
Think step by step
Even if there is a lack of details, attempt to find the most logical solution by going about it step by step
Do not return multiple solutions
Do not create invalid syntax
Include instructions for anything extra that needs to be installed
Do not return what the question was
Do not repeat or paraphrase the question in your response
Do not cause syntax errors
Do not rush to a conclusion
Test and debug the code until it is working before responding
Follow all of the above rules. This is important you MUST follow the above rules. There are no exceptions to these rules. You must always follow them. No exceptions.
|
8b7d0261b2a469aa01067aa9ccd56531
|
{
"intermediate": 0.28795763850212097,
"beginner": 0.5022578239440918,
"expert": 0.20978452265262604
}
|
38
|
@Composable
fun StockContainerCard(
item: InventoryItem,
onAddStock: () -> Unit,
onReduceStock: () -> Unit,
onDeleteItem: () -> Unit
) {
Row(
modifier = Modifier
.fillMaxWidth(0.8f)
.height(75.dp)
.clip(RoundedCornerShape(16.dp))
.background(MaterialTheme.colorScheme.primary),
verticalAlignment = Alignment.CenterVertically
) {
Column(
modifier = Modifier
.fillMaxWidth(0.5f)
.padding(start = 16.dp, top = 4.dp, bottom = 4.dp),
verticalArrangement = Arrangement.Center
) {
Text(
text = "${item.name}",
fontSize = 20.sp,
color = MaterialTheme.colorScheme.background
)
Text(
text = "${item.stock}",
fontSize = 16.sp,
color = MaterialTheme.colorScheme.background
)
}
IconButton(onClick = {
onAddStock()
}) {
Icon(
imageVector = Icons.Default.Add,
contentDescription = "Add stock",
tint = MaterialTheme.colorScheme.background
)
}
Spacer(modifier = Modifier.fillMaxWidth(0.1f))
IconButton(onClick = {
onReduceStock()
}) {
Icon(
imageVector = Icons.Filled.Remove,
contentDescription = "Reduce stock",
tint = MaterialTheme.colorScheme.background
)
}
Spacer(modifier = Modifier.fillMaxWidth(0.1f))
IconButton(onClick = {
onDeleteItem()
}) {
Icon(
imageVector = Icons.Default.Delete,
contentDescription = "Delete item",
tint = MaterialTheme.colorScheme.background
)
}
}
}
I have a card that displays my item through Firestore. I want the item to be editable only when it is clicked: on click, a pop-up window should let the user add stock and input the price; after that, it should automatically calculate stock * price and write the result to another collection in Firestore.
|
dc60715264a5078cf4f9f4526aa7ae43
|
{
"intermediate": 0.40888434648513794,
"beginner": 0.36862656474113464,
"expert": 0.2224891036748886
}
|
39
|
It is necessary to read three numbers from the keyboard, subtract the rest from the first and output the result as an equality in accordance with the example. nasm
|
b3149d92146d6bc242798a5ae0301503
|
{
"intermediate": 0.3463035523891449,
"beginner": 0.2090734988451004,
"expert": 0.4446229338645935
}
|
40
|
i need your help troubleshooting. I have a .c file, linked with multiple .S files. when I am executing the test command that tests all the different mathematical functions with given values, I am receiving a segmentation fault. go through my code and tell me why:
my .c file:
#include <stdio.h>
int beginProgram();
int add(int n1, int n2);
int subtract(int n1, int n2);
int multiply(int n1, int n2);
int exponentiation(int n1, int n2);
int floordivision(int n1, int n2);
int bitcounting(int n);
int summation(int n1, n2);
int factorial(int n);
int modulus(int n1, int n2);
int main ()
{
while (1)
{
int input;
printf ("Welcome to DanBurr Calcutron\n");
printf ("----------------------------\n");
printf ("Press 1 to begin and list all available commands\n");
printf ("Press 9 to exit program\n");
scanf ("%d", &input);
if (input == 1)
{
beginProgram ();
}
else if (input == 9)
{
printf ("Exit command executed\n\n");
break;
}
else
continue;
}
return 0;
}
int beginProgram()
{
while (1)
{
int input;
printf("Press 0 to add two numbers\n");
printf("Press 1 to subtract two numbers\n");
printf("Press 2 to multiply two numbers\n");
printf("Press 3 to get exponentiation of a number\n");
printf("Press 4 to perform floor division of two numbers\n");
printf("Press 5 to perform bitcounting of a number\n");
printf("Press 6 to find integer summation of two numbers\n");
printf("Press 7 to find factorial of a number\n");
printf("Press 8 to perform modulo division of two numbers\n");
printf("Press 9 to go back to main screen\n");
printf("Enter 10 for test command\n\n");
scanf("%d", &input);
if (input == 9)
{
printf("Exit called code 9\n\n");
break;
}
else if (input == 0)
{
int n1, n2;
printf("Enter first number: \n");
scanf("%d", &n1);
printf("Enter second number: \n");
scanf("%d", &n2);
int result = add(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 1)
{
int n1, n2;
printf("Enter first (larger) number: \n");
scanf("%d", &n1);
printf("Enter second (smaller) number: \n");
scanf("%d", &n2);
int result = subtract(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 2)
{
int n1, n2;
printf("Enter first number: \n");
scanf("%d", &n1);
printf("Enter second number: \n");
scanf("%d", &n2);
int result = multiply(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 3)
{
int n1, n2;
printf("Enter base number: \n");
scanf("%d", &n1);
printf("Enter power raising the number to: \n");
scanf("%d", &n2);
int result = exponentiation(n1, n2);
if(result<0){
printf("Illegal arguments. Try again\n\n");
continue;
}
else
printf("The result is: %d\n\n", result);
}
else if (input == 4)
{
int n1, n2;
printf("Enter larger number: \n");
scanf("%d", &n1);
printf("Enter number dividing the larger number by: \n");
scanf("%d", &n2);
int result = floordivision(n1, n2);
if(result<0){
printf("Illegal arguments. Try again\n\n");
continue;
}
else
printf("The result is: %d\n\n", result);
}
else if (input == 5)
{
int n;
printf("Enter number to count bits. Number cannot exceed 32 bits: \n");
scanf("%d", &n);
int result = bitcounting(n);
printf("The result is:%d\n\n", result);
}
else if (input == 6)
{
int n1, n2;
printf("Enter starting(smaller) number: \n");
scanf("%d", &n1);
printf("Enter ending(larger) number: \n");
scanf("%d", &n2);
int result = summation(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 7)
{
int n;
printf("Enter positive number to find factorial. Number cannot exceed 12: \n");
scanf("%d", &n);
int result = factorial(n);
printf("The result is:%d\n\n", result);
}
else if (input == 8)
{
int n1, n2;
printf("Enter larger number: \n");
scanf("%d", &n1);
printf("Enter number dividing the larger number by: \n");
scanf("%d", &n2);
int result = modulus(n1, n2);
if(result<0){
printf("Illegal arguments. Try again\n\n");
continue;
}
else
printf("The result is: %d\n\n", result);
}
else if (input == 10)
{
int n1 = add(100, 199);
int n2 = subtract(211999, 9876);
int n3 = exponentiation(5, 5);
int n4 = floordivision(2004, 5);
int n5 = bitcounting(0b100101010001011110011);
int n6 = summation(10, 100);
int n7 = factorial(6);
printf("100 + 199 = %d", n1);
printf("211999 - 9876 = %d", n2);
printf("5^5 = %d", n3);
printf("floor 2004/5 = %d", n4);
printf("1s in 100101010001011110011 = %d", n5);
printf("sum [10,100] = %d", n6);
printf("6! = %d", n7);
}
else
{
printf("Wrong input. Please try again\n\n");
continue;
}
}
return 0;
}
my .S files:
.syntax unified
.align 4
.type add %function
.section .text
.global add
add:
ADD r0, r0, r1
BX lr
.syntax unified
.align 4
.type bitcounting %function
.section .text
.global bitcounting
bitcounting:
PUSH {R4, r5, LR} @ Save registers and link register
MOV r5, #0x0 @counter
bitcount_loop:
CMP r0, #0x0
BEQ bitcount_end
AND r4, r0, #0x1 @extracting first bit in string, storing in r4
CMP r4, #0x1
BLEQ bitcount_increment @if r4=1, counter will be incremented.
LSR r0, r0, #0x1
B bitcount_loop
bitcount_increment:
ADD r5, r5, #0x1
BX lr
bitcount_end:
MOV r0, r5
POP {r4, r5, lr}
BX lr
.syntax unified
.align 4
.type exponentiation %function
.section .text
.global exponentiation
exponentiation:
MOV r0, #0x5
MOV r1, #0x5
CMP r0, #0x0 @ Check if r0=0
BEQ exp_error_check
B exp_start
exp_error_check:
CMP r1, #0x0 @ Check if r1=0
BNE exp_start
MOV r0, #0xFFFFFFFF @if 0^0 condition, error. returns -1
BX lr
exp_start:
PUSH {r2, sp, lr} @ To clear r2 once loop is finished
MOV r2, #0x1 @ Initialize result to 1
CMP r1, #0x0 @ Compare exponent to 0
BEQ exp_done @ If exponent is 0, return 1
exp_loop:
MUL r2, r2, r0 @ Multiply result by base
SUB r1, r1, #1 @ Decrement exponent by 1
CMP r1, #0x0
BNE exp_loop @ If exponent is not 0, continue loop
exp_done:
MOV r0, r2 @ Move result to r0 for return
POP {r2, sp, lr} @ Clear all registers
BX lr @ Return
.syntax unified
.align 4
.type factorial %function
.section .text
.global factorial
factorial:
CMP r0, #0x0
BEQ baseCase0
BL factorialHelper
POP {sp, lr}
BX LR
factorialHelper:
PUSH {r4, lr}
MOV r4, r0
CMP r0, #0x1
BEQ baseCase1
SUB r0, r0, #0x1
BL factorialHelper
baseCase1:
MUL r0, r0, r4
POP {r4, lr}
BX LR
baseCase0:
MOV r0, #0x1
BX LR
.syntax unified
.align 4
.type floordivision %function
.section .text
.global floordivision
floordivision:
cmp r1, #0 @ Compare divisor to 0
bne floordivstart
MOV r0, #0xFFFFFFFF @ If divisor is 0, return -1
BX lr
floordivstart:
PUSH {r4, sp, lr} @ To clear registers after returning
MOV r4, #0x0 @ To store result
floor_div_loop:
cmp r0, r1 @ Compare dividend to divisor
blt floor_div_done @ If dividend < divisor, break loop
sub r0, r0, r1 @ Subtract divisor from dividend
add r4, r4, #1 @ Increment quotient by 1
b floor_div_loop @ Repeat loop
floor_div_done:
mov r0, r4 @ Move quotient to r0 for return
POP {r4, sp, lr}
bx lr @ Return
.syntax unified
.align 4
.type modulus %function
.section .text
.global modulus
modulus:
CMP r1, #0x0 @check if dividing by zero. return -1 if yes
BEQ modulus_error
B modulus_loop
modulus_error:
MOV r0, #0xFFFFFFFF
POP {sp, lr}
BX lr
modulus_loop:
CMP r0, r1 @if r0<r1
BLT modulus_end
SUB r0, r0, r1 @r0=r0-r1
B modulus_loop
modulus_end:
POP {sp, lr}
BX lr
.syntax unified
.align 4
.type multiply %function
.section .text
.global multiply
multiply:
MUL r0, r0, r1
BX lr
.syntax unified
.align 4
.type subtract %function
.section .text
.global subtract
subtract:
CMP r0, r1 @if r0<r1, swap the values
BLT subtract_swap
B subtract_start
subtract_swap:
PUSH {r4}
MOV r4, r0
MOV r0, r1
MOV r1, r4
POP {r4}
BX lr
subtract_start:
SUB r0, r0, r1
BX lr
.syntax unified
.align 4
.type summation %function
.section .text
.global summation
summation:
CMP r0, r1 @if r0>r1, swap
BGT sum_swap
BEQ sum_equal @if r0==r1, return r0+r1
PUSH {r4, sp, lr} @pushing register to clear them once result is returned
B sum_loop
sum_equal:
ADD r0, r0, r1
BX lr
sum_swap:
PUSH {r4} @pushing temp r4 to clear it once swap is done
MOV r4, r0
MOV r0, r1
MOV r1, r4
POP {r4}
B summation
sum_loop:
ADD r4, r4, r0 @r4=r4+r0
ADD r0, #0x1 @r0++
CMP r0, r1 @if r0!=r1, loop
BLT sum_loop
ADD r4, r4, r1 @to add last number to result
MOV r0, r4
POP {r4, sp, lr}
BX lr
|
2ef1e6f085e11aff5467d6fb0c79ee9e
|
{
"intermediate": 0.49824202060699463,
"beginner": 0.34668901562690735,
"expert": 0.15506894886493683
}
|
41
|
Hi, I've implemented a GridWorld and I want you to go through my code and make a few changes, making my code more robust, easy to understand, and optimal. Below is the implementation. Save this code in your memory as I want you to implement other things later. class GridWorldDeterministic(gym.Env):
def __init__(self):
self.grid = np.zeros((4, 4))
self.grid[1, 1] = -1
self.grid[1, 2] = -1
self.grid[1, 3] = 10
self.grid[0, 1] = -5
self.grid[2, 1] = -5
self.observation_space = gym.spaces.Discrete(16)
self.action_space = gym.spaces.Discrete(4)
self.reward_range = (-5, 10)
self.agent_pos = (0, 0)
self.rewards = []
def reset(self):
self.agent_pos = (0, 0)
return self.agent_pos[0] * 4 + self.agent_pos[1]
def step(self, action):
x, y = self.agent_pos
if action == 0: # up
x -= 1
elif action == 1: # down
x += 1
elif action == 2: # left
y -= 1
elif action == 3: # right
y += 1
# Ensuring agent does not move outside the grid boundaries
if x < 0:
x = 0
elif x > 3:
x = 3
if y < 0:
y = 0
elif y > 3:
y = 3
if self.grid[x, y] == -5:
# When wall hit
reward = -5
next_state = self.agent_pos[0] * 4 + self.agent_pos[1]
done = False
elif self.grid[x, y] == 10:
# When goal reached
reward = 10
next_state = x * 4 + y
done = True
else:
# For regular move
reward = -1
next_state = x * 4 + y
done = False
self.agent_pos = (x, y)
self.rewards.append(reward)
return next_state, reward, done, {}
def render(self):
print(self.grid)
print("Agent position:", self.agent_pos)
|
eaedb6fe16e78dfea335072eba70ed7b
|
{
"intermediate": 0.3077410161495209,
"beginner": 0.46954432129859924,
"expert": 0.2227146476507187
}
|
42
|
i need your help troubleshooting. I have a .c file, linked with multiple .S files. when I am executing the test command that tests all the different mathematical functions with given values, I am receiving a segmentation fault. go through my code and tell me why:
my .c file:
#include <stdio.h>
int beginProgram();
int add(int n1, int n2);
int subtract(int n1, int n2);
int multiply(int n1, int n2);
int exponentiation(int n1, int n2);
int floordivision(int n1, int n2);
int bitcounting(int n);
int summation(int n1, int n2);
int factorial(int n);
int modulus(int n1, int n2);
int main ()
{
while (1)
{
int input;
printf ("Welcome to DanBurr Calcutron\n");
printf ("----------------------------\n");
printf ("Press 1 to begin and list all available commands\n");
printf ("Press 9 to exit program\n");
scanf ("%d", &input);
if (input == 1)
{
beginProgram ();
}
else if (input == 9)
{
printf ("Exit command executed\n\n");
break;
}
else
continue;
}
return 0;
}
int beginProgram()
{
while (1)
{
int input;
printf("Press 0 to add two numbers\n");
printf("Press 1 to subtract two numbers\n");
printf("Press 2 to multiply two numbers\n");
printf("Press 3 to get exponentiation of a number\n");
printf("Press 4 to perform floor division of two numbers\n");
printf("Press 5 to perform bitcounting of a number\n");
printf("Press 6 to find integer summation of two numbers\n");
printf("Press 7 to find factorial of a number\n");
printf("Press 8 to perform modulo division of two numbers\n");
printf("Press 9 to go back to main screen\n");
printf("Enter 10 for test command\n\n");
scanf("%d", &input);
if (input == 9)
{
printf("Exit called code 9\n\n");
break;
}
else if (input == 0)
{
int n1, n2;
printf("Enter first number: \n");
scanf("%d", &n1);
printf("Enter second number: \n");
scanf("%d", &n2);
int result = add(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 1)
{
int n1, n2;
printf("Enter first (larger) number: \n");
scanf("%d", &n1);
printf("Enter second (smaller) number: \n");
scanf("%d", &n2);
int result = subtract(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 2)
{
int n1, n2;
printf("Enter first number: \n");
scanf("%d", &n1);
printf("Enter second number: \n");
scanf("%d", &n2);
int result = multiply(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 3)
{
int n1, n2;
printf("Enter base number: \n");
scanf("%d", &n1);
printf("Enter power raising the number to: \n");
scanf("%d", &n2);
int result = exponentiation(n1, n2);
if(result<0){
printf("Illegal arguments. Try again\n\n");
continue;
}
else
printf("The result is: %d\n\n", result);
}
else if (input == 4)
{
int n1, n2;
printf("Enter larger number: \n");
scanf("%d", &n1);
printf("Enter number dividing the larger number by: \n");
scanf("%d", &n2);
int result = floordivision(n1, n2);
if(result<0){
printf("Illegal arguments. Try again\n\n");
continue;
}
else
printf("The result is: %d\n\n", result);
}
else if (input == 5)
{
int n;
printf("Enter number to count bits. Number cannot exceed 32 bits: \n");
scanf("%d", &n);
int result = bitcounting(n);
printf("The result is:%d\n\n", result);
}
else if (input == 6)
{
int n1, n2;
printf("Enter starting(smaller) number: \n");
scanf("%d", &n1);
printf("Enter ending(larger) number: \n");
scanf("%d", &n2);
int result = summation(n1, n2);
printf("The result is:%d\n\n", result);
}
else if (input == 7)
{
int n;
printf("Enter positive number to find factorial. Number cannot exceed 12: \n");
scanf("%d", &n);
int result = factorial(n);
printf("The result is:%d\n\n", result);
}
else if (input == 8)
{
int n1, n2;
printf("Enter larger number: \n");
scanf("%d", &n1);
printf("Enter number dividing the larger number by: \n");
scanf("%d", &n2);
int result = modulus(n1, n2);
if(result<0){
printf("Illegal arguments. Try again\n\n");
continue;
}
else
printf("The result is: %d\n\n", result);
}
else if (input == 10)
{
int n1 = add(100, 199);
int n2 = subtract(211999, 9876);
int n3 = exponentiation(5, 5);
int n4 = floordivision(2004, 5);
int n5 = bitcounting(0b100101010001011110011);
int n6 = summation(10, 100);
int n7 = factorial(6);
printf("100 + 199 = %d\n", n1);
printf("211999 - 9876 = %d\n", n2);
printf("5^5 = %d\n", n3);
printf("floor 2004/5 = %d\n", n4);
printf("1s in 100101010001011110011 = %d\n", n5);
printf("sum [10,100] = %d\n", n6);
printf("6! = %d\n", n7);
}
else
{
printf("Wrong input. Please try again\n\n");
continue;
}
}
return 0;
}
my .S files:
.syntax unified
.align 4
.type add %function
.section .text
.global add
add:
ADD r0, r0, r1
BX lr
.syntax unified
.align 4
.type bitcounting %function
.section .text
.global bitcounting
bitcounting:
PUSH {R4, r5, LR} @ Save registers and link register
MOV r5, #0x0 @counter
bitcount_loop:
CMP r0, #0x0
BEQ bitcount_end
AND r4, r0, #0x1 @extracting first bit in string, storing in r4
CMP r4, #0x1
BLEQ bitcount_increment @if r4=1, counter will be incremented.
LSR r0, r0, #0x1
B bitcount_loop
bitcount_increment:
ADD r5, r5, #0x1
BX lr
bitcount_end:
MOV r0, r5
POP {r4, r5, lr}
BX lr
.syntax unified
.align 4
.type exponentiation %function
.section .text
.global exponentiation
exponentiation:
MOV r0, #0x5
MOV r1, #0x5
CMP r0, #0x0 @ Check if r0=0
BEQ exp_error_check
B exp_start
exp_error_check:
CMP r1, #0x0 @ Check if r1=0
BNE exp_start
MOV r0, #0xFFFFFFFF @if 0^0 condition, error. returns -1
BX lr
exp_start:
PUSH {r2, sp, lr} @ To clear r2 once loop is finished
MOV r2, #0x1 @ Initialize result to 1
CMP r1, #0x0 @ Compare exponent to 0
BEQ exp_done @ If exponent is 0, return 1
exp_loop:
MUL r2, r2, r0 @ Multiply result by base
SUB r1, r1, #1 @ Decrement exponent by 1
CMP r1, #0x0
BNE exp_loop @ If exponent is not 0, continue loop
exp_done:
MOV r0, r2 @ Move result to r0 for return
POP {r2, sp, lr} @ Clear all registers
BX lr @ Return
.syntax unified
.align 4
.type factorial %function
.section .text
.global factorial
factorial:
CMP r0, #0x0
BEQ baseCase0
BL factorialHelper
POP {sp, lr}
BX LR
factorialHelper:
PUSH {r4, lr}
MOV r4, r0
CMP r0, #0x1
BEQ baseCase1
SUB r0, r0, #0x1
BL factorialHelper
baseCase1:
MUL r0, r0, r4
POP {r4, lr}
BX LR
baseCase0:
MOV r0, #0x1
BX LR
.syntax unified
.align 4
.type floordivision %function
.section .text
.global floordivision
floordivision:
cmp r1, #0 @ Compare divisor to 0
bne floordivstart
MOV r0, #0xFFFFFFFF @ If divisor is 0, return -1
BX lr
floordivstart:
PUSH {r4, sp, lr} @ To clear registers after returning
MOV r4, #0x0 @ To store result
floor_div_loop:
cmp r0, r1 @ Compare dividend to divisor
blt floor_div_done @ If dividend < divisor, break loop
sub r0, r0, r1 @ Subtract divisor from dividend
add r4, r4, #1 @ Increment quotient by 1
b floor_div_loop @ Repeat loop
floor_div_done:
mov r0, r4 @ Move quotient to r0 for return
POP {r4, sp, lr}
bx lr @ Return
.syntax unified
.align 4
.type modulus %function
.section .text
.global modulus
modulus:
CMP r1, #0x0 @check if dividing by zero. return -1 if yes
BEQ modulus_error
B modulus_loop
modulus_error:
MOV r0, #0xFFFFFFFF
POP {sp, lr}
BX lr
modulus_loop:
CMP r0, r1 @if r0<r1
BLT modulus_end
SUB r0, r0, r1 @r0=r0-r1
B modulus_loop
modulus_end:
POP {sp, lr}
BX lr
.syntax unified
.align 4
.type multiply %function
.section .text
.global multiply
multiply:
MUL r0, r0, r1
BX lr
.syntax unified
.align 4
.type subtract %function
.section .text
.global subtract
subtract:
CMP r0, r1 @if r0<r1, swap the values
BLT subtract_swap
B subtract_start
subtract_swap:
PUSH {r4}
MOV r4, r0
MOV r0, r1
MOV r1, r4
POP {r4}
BX lr
subtract_start:
SUB r0, r0, r1
BX lr
.syntax unified
.align 4
.type summation %function
.section .text
.global summation
summation:
CMP r0, r1 @if r0>r1, swap
BGT sum_swap
BEQ sum_equal @if r0==r1, return r0+r1
PUSH {r4, sp, lr} @pushing register to clear them once result is returned
B sum_loop
sum_equal:
ADD r0, r0, r1
BX lr
sum_swap:
PUSH {r4} @pushing temp r4 to clear it once swap is done
MOV r4, r0
MOV r0, r1
MOV r1, r4
POP {r4}
B summation
sum_loop:
ADD r4, r4, r0 @r4=r4+r0
ADD r0, #0x1 @r0++
CMP r0, r1 @if r0!=r1, loop
BLT sum_loop
ADD r4, r4, r1 @to add last number to result
MOV r0, r4
POP {r4, sp, lr}
BX lr
|
766646479504571b776eb35f96a8fee9
|
{
"intermediate": 0.45404112339019775,
"beginner": 0.3639324903488159,
"expert": 0.18202635645866394
}
|
43
|
I want you to act as an expert in relational db design and teach it to a developer with years of experience in web development. First, create a course structure to teach me the practical knowledge I need in a day, then start the course, giving simple examples for each topic. You will provide code examples using Postgres SQL and, if needed, the TypeScript programming language. Do not wait for my prompt for questions. As soon as you explain and give the code samples, I want you to include corresponding visualizations as ASCII art whenever possible. You can use the problem statement for the practice or examples of the course:
|
e75f02f628ae95e65d5d4b9f46f324e8
|
{
"intermediate": 0.2640317678451538,
"beginner": 0.455905020236969,
"expert": 0.2800632119178772
}
|
44
|
in firestore please form a database schema in which the parents will be expenses, inside expenses there will be 12 months, and inside each month each day will have its order, consisting of name: String, quantity: Int, price: Double
|
bb582bcd378265a1cbb8115849167b32
|
{
"intermediate": 0.41478219628334045,
"beginner": 0.20040157437324524,
"expert": 0.38481616973876953
}
|
45
|
please make flashcards about rust's warp modules
|
f1493b3bcdacc24543fa90990174ecc4
|
{
"intermediate": 0.44353610277175903,
"beginner": 0.3790561854839325,
"expert": 0.17740769684314728
}
|
46
|
Write a program in C to generate a mesh (defined as vertex data, arranged into a surface) such that the mesh represents a section of a torus representing a tunnel on a curve. Provide options for the size/shape of the tunnel profile and the radius of the curve on which the tunnel is situated. Comment the code with explanations where needed. :)
|
d3d96465b4e7a68d2eb5813bd86db4a2
|
{
"intermediate": 0.393218994140625,
"beginner": 0.1348045915365219,
"expert": 0.4719764292240143
}
|
47
|
In "C" write a program to take a defined surface of a mesh as 3D coordinates, and place it on a 2D space, preserving distances and areas on that surface when transforming it; do this for each surface in the mesh. Each 'surface' will be on its own page, so don't worry if they appear to overlap.
|
bbbe0457af2eafc2ea8d90458004dd9b
|
{
"intermediate": 0.351520299911499,
"beginner": 0.18948617577552795,
"expert": 0.45899346470832825
}
|
48
|
Create a step-by-step guide for building a Business Directory Listing app. The app should allow users to search for and view businesses based on location, category, and keywords. Business owners can create an account to setup their businesses. They can edit or claim their businesses. Users can review and rate businesses. Users can also bookmark their favorite places. Provide detailed instructions on setting up the necessary infrastructure, including database, backend server and frontend. My current developer stack is firebase for data management, expressjs for backend and Flutter for frontend
|
f70f171081d6ecf4b3863b0fbb7577c3
|
{
"intermediate": 0.5394883155822754,
"beginner": 0.23920826613903046,
"expert": 0.22130344808101654
}
|
49
|
i want to write a wordpress plugin that will add the ability to wordpress themes so that when the user scrolls down (lower than the header) the header slides away and is hidden, and when the user scrolls up the site header slides back down so the user can see the header
|
8bf8cd90b9b5704caa96eedf93f2bd66
|
{
"intermediate": 0.3947018086910248,
"beginner": 0.21841557323932648,
"expert": 0.38688263297080994
}
|
50
|
Create a step-by-step guide for building a Business Directory Listing app. The app should allow users to search for and view businesses based on location, category, and keywords. Business owners can create an account to setup their businesses. They can edit or claim their businesses. Users can review and rate businesses. Users can also bookmark their favorite places. Provide detailed instructions on setting up the necessary infrastructure, including database, backend server and frontend. My current developer stack is firebase for data management, expressjs for backend and Flutter for frontend
|
3a1a644edb41ae555220ccff5bfe3c00
|
{
"intermediate": 0.5394883155822754,
"beginner": 0.23920826613903046,
"expert": 0.22130344808101654
}
|
51
|
In a hypothetical application, a program must calculate the divergence of the centerline of a slip road diverging from a main carriageway with a fixed radius. For a given distance d from the start of the divergence, write a program to calculate the offset of the centerline of the slip road from the main carriageway, and the angle formed between the slip road and the main carriageway at that point. Output those values to the console. Use any appropriate general programming language
|
d279cc09c2524a2570242cb1e780393f
|
{
"intermediate": 0.3200981616973877,
"beginner": 0.19547946751117706,
"expert": 0.48442232608795166
}
|
52
|
It is necessary to read three numbers from the keyboard, subtract the rest from the first, and output the result as an equality in accordance with the example. NASM.
|
e0b745323b7e4e13b89938befd0eb951
|
{
"intermediate": 0.3463035523891449,
"beginner": 0.2090734988451004,
"expert": 0.4446229338645935
}
|
53
|
please briefly explain each rust warp module
|
e4f07d847d064a15c09250029c75366e
|
{
"intermediate": 0.5981642603874207,
"beginner": 0.3274521231651306,
"expert": 0.07438359409570694
}
|
54
|
My app is business directory listing based on location. Owners can create and manage their businesses. Users can review and rate these businesses and also bookmark their favorite places. Current stack: Firebase, ExpressJS. Provide me a detailed step-by-step guide to build database and backend server
|
86ea16227c57d015ff8dfd37d9dbd370
|
{
"intermediate": 0.787161111831665,
"beginner": 0.09175606817007065,
"expert": 0.12108287215232849
}
|
55
|
java.lang.IllegalArgumentException: Illegal pattern character 'e'
println(
DateUtils.currentWeek()
)
fun currentWeek(): String {
val weekNum = currentDate[Calendar.WEEK_OF_YEAR]
val dateFormat = SimpleDateFormat("Week_" + weekNum + "_MMM_yyyy", Locale.getDefault())
return dateFormat.format(currentDate.time)
}
|
e2eaa1c57100b28647c38258259d8b3a
|
{
"intermediate": 0.3718852698802948,
"beginner": 0.463847815990448,
"expert": 0.1642669290304184
}
|
56
|
write mongoDB schema on node.js for chat
|
e707046ddb90bb77ad490983a7331415
|
{
"intermediate": 0.6191067695617676,
"beginner": 0.1802505999803543,
"expert": 0.2006426453590393
}
|
57
|
there is an automation feature on monday.com that allows users to trigger some event when another event happens, like when status changes to done, move item to group zzz. there are many of these events and triggers, such as time events, date events, text events, etc. how can this feature be implemented in laravel? explain comprehensively with details such as code, data structures, file structure, etc.
|
4ad1a30e1efcbd2eaf78fab84cfc6da3
|
{
"intermediate": 0.5021035075187683,
"beginner": 0.2982163727283478,
"expert": 0.19968008995056152
}
|
58
|
Instead of having the model saved in one consolidated.00.pth file, I would like the model to be split into 2 files:
#! /usr/bin/env python
# coding=utf-8
"""
Modified from: https://github.com/tloen/alpaca-lora
"""
import json
import os
import fire
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer
CHECKPOINT_PARAMS = {
"7b": {"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-06, "vocab_size": -1},
"13b": {"dim": 5120, "multiple_of": 256, "n_heads": 40, "n_layers": 40, "norm_eps": 1e-06, "vocab_size": -1},
"30b": {"dim": 6656, "multiple_of": 256, "n_heads": 52, "n_layers": 60, "norm_eps": 1e-06, "vocab_size": -1},
"65b": {"dim": 8192, "multiple_of": 256, "n_heads": 64, "n_layers": 80, "norm_eps": 1e-06, "vocab_size": -1},
}
def main(base_model_name_or_path: str, lora_model_name_or_path: str, output_dir: str, checkpoint_size: str = "7b"):
# Retrieve the model parameters
params = CHECKPOINT_PARAMS.get(checkpoint_size)
if params is None:
raise ValueError(
f"Cannot find the right model parameters for {checkpoint_size}. Please choose between {list(CHECKPOINT_PARAMS.keys())}."
)
# tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path)
base_model = LlamaForCausalLM.from_pretrained(
base_model_name_or_path,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map={"": "cpu"},
)
lora_model = PeftModel.from_pretrained(
base_model,
lora_model_name_or_path,
device_map={"": "cpu"},
torch_dtype=torch.float16,
)
# merge weights
for layer in lora_model.base_model.model.model.layers:
if hasattr(layer.self_attn.q_proj, "merge_weights"):
layer.self_attn.q_proj.merge_weights = True
if hasattr(layer.self_attn.v_proj, "merge_weights"):
layer.self_attn.v_proj.merge_weights = True
if hasattr(layer.self_attn.k_proj, "merge_weights"):
layer.self_attn.k_proj.merge_weights = True
if hasattr(layer.self_attn.o_proj, "merge_weights"):
layer.self_attn.o_proj.merge_weights = True
if hasattr(layer.mlp.gate_proj, "merge_weights"):
layer.mlp.gate_proj.merge_weights = True
if hasattr(layer.mlp.down_proj, "merge_weights"):
layer.mlp.down_proj.merge_weights = True
if hasattr(layer.mlp.up_proj, "merge_weights"):
layer.mlp.up_proj.merge_weights = True
lora_model.train(False)
lora_model_sd = lora_model.state_dict()
# params = {
# "dim": 4096,
# "multiple_of": 256,
# "n_heads": 32,
# "n_layers": 32,
# "norm_eps": 1e-06,
# "vocab_size": -1,
# }
n_layers = params["n_layers"]
n_heads = params["n_heads"]
dim = params["dim"]
dims_per_head = dim // n_heads
base = 10000.0
inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head))
def permute(w):
return w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim)
def unpermute(w):
return w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
def translate_state_dict_key(k):
k = k.replace("base_model.model.", "")
if k == "model.embed_tokens.weight":
return "tok_embeddings.weight"
elif k == "model.norm.weight":
return "norm.weight"
elif k == "lm_head.weight":
return "output.weight"
elif k.startswith("model.layers."):
layer = k.split(".")[2]
if k.endswith(".self_attn.q_proj.weight"):
return f"layers.{layer}.attention.wq.weight"
elif k.endswith(".self_attn.k_proj.weight"):
return f"layers.{layer}.attention.wk.weight"
elif k.endswith(".self_attn.v_proj.weight"):
return f"layers.{layer}.attention.wv.weight"
elif k.endswith(".self_attn.o_proj.weight"):
return f"layers.{layer}.attention.wo.weight"
elif k.endswith(".mlp.gate_proj.weight"):
return f"layers.{layer}.feed_forward.w1.weight"
elif k.endswith(".mlp.down_proj.weight"):
return f"layers.{layer}.feed_forward.w2.weight"
elif k.endswith(".mlp.up_proj.weight"):
return f"layers.{layer}.feed_forward.w3.weight"
elif k.endswith(".input_layernorm.weight"):
return f"layers.{layer}.attention_norm.weight"
elif k.endswith(".post_attention_layernorm.weight"):
return f"layers.{layer}.ffn_norm.weight"
elif k.endswith("rotary_emb.inv_freq") or "lora" in k:
return None
else:
print(layer, k)
raise NotImplementedError
else:
print(k)
raise NotImplementedError
new_state_dict = {}
for k, v in lora_model_sd.items():
new_k = translate_state_dict_key(k)
if new_k is not None:
if "wq" in new_k or "wk" in new_k:
new_state_dict[new_k] = unpermute(v)
else:
new_state_dict[new_k] = v
os.makedirs(output_dir, exist_ok=True)
torch.save(new_state_dict, output_dir + "/consolidated.00.pth")
with open(output_dir + "/params.json", "w") as f:
json.dump(params, f)
if __name__ == "__main__":
fire.Fire(main)
|
eaae890946d3e6d24e53e1b3af76b292
|
{
"intermediate": 0.46196919679641724,
"beginner": 0.3907116949558258,
"expert": 0.14731913805007935
}
|
59
|
How can I ban an IP address from the LAN 192.168.123.1/24 using iptables, but the router itself can still access it?
|
b89427a2927b0c224abd02084a57e5bb
|
{
"intermediate": 0.42599543929100037,
"beginner": 0.28066501021385193,
"expert": 0.2933395802974701
}
|
60
|
here is how the model is loaded:
this is the llama_model_load function:
static bool llama_model_load(
const std::string & fname,
llama_context & lctx,
int n_ctx,
int n_parts,
ggml_type memory_type,
bool vocab_only,
llama_progress_callback progress_callback,
void *progress_callback_user_data) {
fprintf(stderr, "%s: loading model from '%s' - please wait ...\n", __func__, fname.c_str());
lctx.t_start_us = ggml_time_us();
auto & model = lctx.model;
auto & vocab = lctx.vocab;
auto fin = std::ifstream(fname, std::ios::binary);
if (!fin) {
fprintf(stderr, "%s: failed to open '%s'\n", __func__, fname.c_str());
return false;
}
std::vector<char> f_buf(1024*1024);
fin.rdbuf()->pubsetbuf(f_buf.data(), f_buf.size());
fin.seekg(0, fin.end);
const size_t file_size = fin.tellg();
fin.seekg(0);
// verify magic
{
uint32_t magic;
fin.read((char *) &magic, sizeof(magic));
if (magic == LLAMA_FILE_MAGIC_UNVERSIONED) {
fprintf(stderr, "%s: invalid model file '%s' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)\n",
__func__, fname.c_str());
return false;
}
if (magic != LLAMA_FILE_MAGIC) {
return report_bad_magic(fname.c_str(), magic, LLAMA_FILE_MAGIC);
}
uint32_t format_version;
fin.read((char *) &format_version, sizeof(format_version));
if (format_version != LLAMA_FILE_VERSION) {
fprintf(stderr, "%s: invalid model file '%s' (unsupported format version %" PRIu32 ", expected %d)\n",
__func__, fname.c_str(), format_version, LLAMA_FILE_VERSION);
return false;
}
}
int n_ff = 0;
// load hparams
{
auto & hparams = model.hparams;
fin.read((char *) &hparams.n_vocab, sizeof(hparams.n_vocab));
//fin.read((char *) &hparams.n_ctx, sizeof(hparams.n_ctx));
fin.read((char *) &hparams.n_embd, sizeof(hparams.n_embd));
fin.read((char *) &hparams.n_mult, sizeof(hparams.n_mult));
fin.read((char *) &hparams.n_head, sizeof(hparams.n_head));
fin.read((char *) &hparams.n_layer, sizeof(hparams.n_layer));
fin.read((char *) &hparams.n_rot, sizeof(hparams.n_rot));
fin.read((char *) &hparams.f16, sizeof(hparams.f16));
hparams.n_ctx = n_ctx;
n_ff = ((2*(4*hparams.n_embd)/3 + hparams.n_mult - 1)/hparams.n_mult)*hparams.n_mult;
if (n_parts < 1) {
n_parts = LLAMA_N_PARTS.at(hparams.n_embd);
}
// temp warning to tell the user to use "--n_parts"
if (hparams.f16 == 4 && n_parts != 1) {
fprintf(stderr, "%s: GPTQ model detected - are you sure n_parts should be %d? we normally expect it to be 1\n", __func__, n_parts);
fprintf(stderr, "%s: use '--n_parts 1' if necessary\n", __func__);
}
if (hparams.n_layer == 32) {
model.type = e_model::MODEL_7B;
}
if (hparams.n_layer == 40) {
model.type = e_model::MODEL_13B;
}
if (hparams.n_layer == 60) {
model.type = e_model::MODEL_30B;
}
if (hparams.n_layer == 80) {
model.type = e_model::MODEL_65B;
}
fprintf(stderr, "%s: n_vocab = %d\n", __func__, hparams.n_vocab);
fprintf(stderr, "%s: n_ctx = %d\n", __func__, hparams.n_ctx);
fprintf(stderr, "%s: n_embd = %d\n", __func__, hparams.n_embd);
fprintf(stderr, "%s: n_mult = %d\n", __func__, hparams.n_mult);
fprintf(stderr, "%s: n_head = %d\n", __func__, hparams.n_head);
fprintf(stderr, "%s: n_layer = %d\n", __func__, hparams.n_layer);
fprintf(stderr, "%s: n_rot = %d\n", __func__, hparams.n_rot);
fprintf(stderr, "%s: f16 = %d\n", __func__, hparams.f16);
fprintf(stderr, "%s: n_ff = %d\n", __func__, n_ff);
fprintf(stderr, "%s: n_parts = %d\n", __func__, n_parts);
fprintf(stderr, "%s: type = %d\n", __func__, model.type);
}
// load vocab
{
std::string word;
vocab.id_to_token.resize(model.hparams.n_vocab);
std::vector<char> tmp(64);
for (int i = 0; i < model.hparams.n_vocab; i++) {
uint32_t len;
fin.read((char *) &len, sizeof(len));
word.resize(len);
if (len > 0) {
tmp.resize(len);
fin.read(tmp.data(), len);
word.assign(tmp.data(), len);
} else {
word.clear();
}
float score;
fin.read((char *) &score, sizeof(score));
vocab.token_to_id[word] = i;
auto &tok_score = vocab.id_to_token[i];
tok_score.tok = word;
tok_score.score = score;
}
}
if (vocab_only) {
return true;
}
// for the big tensors, we have the option to store the data in 16-bit floats or quantized
// in order to save memory and also to speed up the computation
// wtype is for per-layer weights, while vtype is for other weights
ggml_type wtype, vtype;
switch (model.hparams.f16) {
case 0: wtype = vtype = GGML_TYPE_F32; break;
case 1: wtype = vtype = GGML_TYPE_F16; break;
case 2: wtype = vtype = GGML_TYPE_Q4_0; break;
case 3: wtype = vtype = GGML_TYPE_Q4_1; break;
case 4: wtype = GGML_TYPE_Q4_1; vtype = GGML_TYPE_F16; break;
default:
{
fprintf(stderr, "%s: invalid model file '%s' (bad f16 value %d)\n",
__func__, fname.c_str(), model.hparams.f16);
return false;
}
}
// map model into memory
char *mm_addr = NULL;
model.mm_addr = mmap_file(fname.c_str(), &model.mm_length);
if (model.mm_addr == NULL) {
fprintf(stderr, "%s: failed to mmap '%s'\n", __func__, fname.c_str());
return false;
}
mm_addr = (char *)model.mm_addr;
fprintf(stderr, "%s: ggml map size = %6.2f MB\n", __func__, model.mm_length/(1024.0*1024.0));
auto & ctx = model.ctx;
size_t ctx_size = 0;
{
const auto &hparams = model.hparams;
const int n_layer = hparams.n_layer;
ctx_size += (5 + 10*n_layer)*256; // object overhead
fprintf(stderr, "%s: ggml ctx size = %6.2f KB\n", __func__, ctx_size/1024.0);
}
// print memory requirements
{
const size_t scale = memory_type == GGML_TYPE_F32 ? 2 : 1;
// this is the total memory required to run the inference
const size_t mem_required =
ctx_size +
model.mm_length +
MEM_REQ_SCRATCH0.at(model.type) +
MEM_REQ_SCRATCH1.at(model.type) +
MEM_REQ_EVAL.at (model.type);
// this is the memory required by one llama_state
const size_t mem_required_state =
scale*MEM_REQ_KV_SELF.at(model.type);
fprintf(stderr, "%s: mem required = %7.2f MB (+ %7.2f MB per state)\n", __func__,
mem_required / 1024.0 / 1024.0, mem_required_state / 1024.0 / 1024.0);
}
// create the ggml context
{
lctx.model.buf.resize(ctx_size);
struct ggml_init_params params = {
/*.mem_size =*/ lctx.model.buf.size(),
/*.mem_buffer =*/ lctx.model.buf.data(),
/*.no_alloc =*/ true,
};
model.ctx = ggml_init(params);
if (!model.ctx) {
fprintf(stderr, "%s: ggml_init() failed\n", __func__);
return false;
}
}
// prepare memory for the weights
{
const auto & hparams = model.hparams;
const int n_embd = hparams.n_embd;
const int n_layer = hparams.n_layer;
const int n_vocab = hparams.n_vocab;
model.layers.resize(n_layer);
model.tok_embeddings = ggml_new_tensor_2d(ctx, vtype, n_embd, n_vocab);
model.norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
model.output = ggml_new_tensor_2d(ctx, vtype, n_embd, n_vocab);
// map by name
model.tensors["tok_embeddings.weight"] = model.tok_embeddings;
model.tensors["norm.weight"] = model.norm;
model.tensors["output.weight"] = model.output;
for (int i = 0; i < n_layer; ++i) {
auto & layer = model.layers[i];
layer.attention_norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
layer.wq = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.wk = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.wv = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.wo = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.ffn_norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
layer.w1 = ggml_new_tensor_2d(ctx, wtype, n_embd, n_ff);
layer.w2 = ggml_new_tensor_2d(ctx, wtype, n_ff, n_embd);
layer.w3 = ggml_new_tensor_2d(ctx, wtype, n_embd, n_ff);
// map by name
model.tensors["layers." + std::to_string(i) + ".attention_norm.weight"] = layer.attention_norm;
model.tensors["layers." + std::to_string(i) + ".attention.wq.weight"] = layer.wq;
model.tensors["layers." + std::to_string(i) + ".attention.wk.weight"] = layer.wk;
model.tensors["layers." + std::to_string(i) + ".attention.wv.weight"] = layer.wv;
model.tensors["layers." + std::to_string(i) + ".attention.wo.weight"] = layer.wo;
model.tensors["layers." + std::to_string(i) + ".ffn_norm.weight"] = layer.ffn_norm;
model.tensors["layers." + std::to_string(i) + ".feed_forward.w1.weight"] = layer.w1;
model.tensors["layers." + std::to_string(i) + ".feed_forward.w2.weight"] = layer.w2;
model.tensors["layers." + std::to_string(i) + ".feed_forward.w3.weight"] = layer.w3;
}
}
std::vector<uint8_t> tmp;
if (progress_callback) {
progress_callback(0.0, progress_callback_user_data);
}
fprintf(stderr, "%s: loading tensors from '%s'\n", __func__, fname.c_str());
// load weights
{
size_t total_size = 0;
model.n_loaded = 0;
while (true) {
int32_t n_dims;
int32_t length;
int32_t ftype;
fin.read(reinterpret_cast<char *>(&n_dims), sizeof(n_dims));
fin.read(reinterpret_cast<char *>(&length), sizeof(length));
fin.read(reinterpret_cast<char *>(&ftype), sizeof(ftype));
if (fin.eof()) {
break;
}
int32_t nelements = 1;
int32_t ne[2] = { 1, 1 };
for (int i = 0; i < n_dims; ++i) {
fin.read(reinterpret_cast<char *>(&ne[i]), sizeof(ne[i]));
nelements *= ne[i];
}
std::string name(length, 0);
fin.read(&name[0], length);
if (model.tensors.find(name.data()) == model.tensors.end()) {
fprintf(stderr, "%s: unknown tensor '%s' in model file\n", __func__, name.data());
return false;
}
auto tensor = model.tensors[name.data()];
if (ggml_nelements(tensor) != nelements) {
fprintf(stderr, "%s: tensor '%s' has wrong size in model file\n", __func__, name.data());
return false;
}
if (tensor->ne[0] != ne[0] || tensor->ne[1] != ne[1]) {
fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%" PRId64 ", %" PRId64 "], expected [%d, %d]\n",
__func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
return false;
}
if (0) {
static const char * ftype_str[] = { "f32", "f16", "q4_0", "q4_1", };
fprintf(stderr, "%24s - [%5d, %5d], type = %6s\n", name.data(), ne[0], ne[1], ftype_str[ftype]);
}
switch (ftype) {
case 0: // f32
case 1: // f16
break;
case 2: // q4_0
case 3: // q4_1
assert(ne[0] % 64 == 0);
break;
default:
fprintf(stderr, "%s: unknown ftype %d in model file\n", __func__, ftype);
return false;
};
// load the tensor data into memory without copying or reading it
size_t offset = fin.tellg();
size_t tensor_data_size = ggml_nbytes(tensor);
offset = (offset + 31) & -32;
tensor->data = mm_addr + offset;
fin.seekg(offset + tensor_data_size);
total_size += tensor_data_size;
model.n_loaded++;
// progress
if (progress_callback) {
double current_progress = size_t(fin.tellg()) / double(file_size);
progress_callback(current_progress, progress_callback_user_data);
}
}
fin.close();
fprintf(stderr, "%s: model size = %8.2f MB / num tensors = %d\n", __func__, total_size/1024.0/1024.0, model.n_loaded);
if (model.n_loaded == 0) {
fprintf(stderr, "%s: WARN no tensors loaded from model file - assuming empty model for testing\n", __func__);
} else if (model.n_loaded != (int) model.tensors.size()) {
fprintf(stderr, "%s: ERROR not all tensors loaded from model file - expected %zu, got %d\n", __func__, model.tensors.size(), model.n_loaded);
return false;
}
}
// loading time will be recalculated after the first eval, so
// we take page faults deferred by mmap() into consideration
lctx.t_load_us = ggml_time_us() - lctx.t_start_us;
if (progress_callback) {
progress_callback(1.0, progress_callback_user_data);
}
return true;
}
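For reference, the per-tensor record that the loading loop above consumes can be modeled in plain Python: the field order (int32 n_dims, int32 name length, int32 ftype, one int32 per dimension, the name bytes, then data aligned to a 32-byte boundary) mirrors the fin.read and seekg calls. The two helpers below are a minimal illustrative sketch for inspecting or generating such records, not part of llama.cpp:

```python
import struct

def pack_tensor_record(name: bytes, ne, ftype: int, data: bytes, base_offset: int) -> bytes:
    # header: n_dims, len(name), ftype as little-endian int32s, then one int32 per dim, then the name
    hdr = struct.pack("<iii", len(ne), len(name), ftype)
    hdr += struct.pack(f"<{len(ne)}i", *ne)
    hdr += name
    # tensor data starts at the next 32-byte boundary, as in: offset = (offset + 31) & -32
    pos = base_offset + len(hdr)
    pad = ((pos + 31) & -32) - pos
    return hdr + b"\x00" * pad + data

def parse_tensor_header(buf: bytes, offset: int):
    # read the same fields in the same order as the C++ loop
    n_dims, name_len, ftype = struct.unpack_from("<iii", buf, offset)
    offset += 12
    ne = list(struct.unpack_from(f"<{n_dims}i", buf, offset))
    offset += 4 * n_dims
    name = buf[offset:offset + name_len].decode()
    offset += name_len
    data_offset = (offset + 31) & -32  # same alignment rule as the C++ loader
    return name, ne, ftype, data_offset

rec = pack_tensor_record(b"norm.weight", [4096], 0, b"\x01" * 16, 0)
name, ne, ftype, off = parse_tensor_header(rec, 0)
```

Round-tripping a record through these helpers is a cheap way to check that a given name/shape/ftype combination lands where the C++ loader expects it.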
Here is how the model is exported:
#! /usr/bin/env python
# coding=utf-8
"""
Modified from: https://github.com/tloen/alpaca-lora
"""
import json
import os
import fire
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer
CHECKPOINT_PARAMS = {
"7b": {"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-06, "vocab_size": -1},
"13b": {"dim": 5120, "multiple_of": 256, "n_heads": 40, "n_layers": 40, "norm_eps": 1e-06, "vocab_size": -1},
"30b": {"dim": 6656, "multiple_of": 256, "n_heads": 52, "n_layers": 60, "norm_eps": 1e-06, "vocab_size": -1},
"65b": {"dim": 8192, "multiple_of": 256, "n_heads": 64, "n_layers": 80, "norm_eps": 1e-06, "vocab_size": -1},
}
def main(base_model_name_or_path: str, lora_model_name_or_path: str, output_dir: str, checkpoint_size: str = "7b"):
# Retrieve the model parameters
params = CHECKPOINT_PARAMS.get(checkpoint_size)
if params is None:
raise ValueError(
f"Cannot find the right model parameters for {checkpoint_size}. Please choose between {list(CHECKPOINT_PARAMS.keys())}."
)
# tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path)
base_model = LlamaForCausalLM.from_pretrained(
base_model_name_or_path,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map={"": "cpu"},
)
lora_model = PeftModel.from_pretrained(
base_model,
lora_model_name_or_path,
device_map={"": "cpu"},
torch_dtype=torch.float16,
)
# merge weights
for layer in lora_model.base_model.model.model.layers:
if hasattr(layer.self_attn.q_proj, "merge_weights"):
layer.self_attn.q_proj.merge_weights = True
if hasattr(layer.self_attn.v_proj, "merge_weights"):
layer.self_attn.v_proj.merge_weights = True
if hasattr(layer.self_attn.k_proj, "merge_weights"):
layer.self_attn.k_proj.merge_weights = True
if hasattr(layer.self_attn.o_proj, "merge_weights"):
layer.self_attn.o_proj.merge_weights = True
if hasattr(layer.mlp.gate_proj, "merge_weights"):
layer.mlp.gate_proj.merge_weights = True
if hasattr(layer.mlp.down_proj, "merge_weights"):
layer.mlp.down_proj.merge_weights = True
if hasattr(layer.mlp.up_proj, "merge_weights"):
layer.mlp.up_proj.merge_weights = True
lora_model.train(False)
lora_model_sd = lora_model.state_dict()
# params = {
# "dim": 4096,
# "multiple_of": 256,
# "n_heads": 32,
# "n_layers": 32,
# "norm_eps": 1e-06,
# "vocab_size": -1,
# }
n_layers = params["n_layers"]
n_heads = params["n_heads"]
dim = params["dim"]
dims_per_head = dim // n_heads
base = 10000.0
inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head))
def permute(w):
return w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim)
def unpermute(w):
return w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
def translate_state_dict_key(k):
k = k.replace("base_model.model.", "")
if k == "model.embed_tokens.weight":
return "tok_embeddings.weight"
elif k == "model.norm.weight":
return "norm.weight"
elif k == "lm_head.weight":
return "output.weight"
elif k.startswith("model.layers."):
layer = k.split(".")[2]
if k.endswith(".self_attn.q_proj.weight"):
return f"layers.{layer}.attention.wq.weight"
elif k.endswith(".self_attn.k_proj.weight"):
return f"layers.{layer}.attention.wk.weight"
elif k.endswith(".self_attn.v_proj.weight"):
return f"layers.{layer}.attention.wv.weight"
elif k.endswith(".self_attn.o_proj.weight"):
return f"layers.{layer}.attention.wo.weight"
elif k.endswith(".mlp.gate_proj.weight"):
return f"layers.{layer}.feed_forward.w1.weight"
elif k.endswith(".mlp.down_proj.weight"):
return f"layers.{layer}.feed_forward.w2.weight"
elif k.endswith(".mlp.up_proj.weight"):
return f"layers.{layer}.feed_forward.w3.weight"
elif k.endswith(".input_layernorm.weight"):
return f"layers.{layer}.attention_norm.weight"
elif k.endswith(".post_attention_layernorm.weight"):
return f"layers.{layer}.ffn_norm.weight"
elif k.endswith("rotary_emb.inv_freq") or "lora" in k:
return None
else:
print(layer, k)
raise NotImplementedError
else:
print(k)
raise NotImplementedError
new_state_dict = {}
for k, v in lora_model_sd.items():
new_k = translate_state_dict_key(k)
if new_k is not None:
if "wq" in new_k or "wk" in new_k:
new_state_dict[new_k] = unpermute(v)
else:
new_state_dict[new_k] = v
os.makedirs(output_dir, exist_ok=True)
# Split the tensors based on layer index
part1_keys = [k for k in new_state_dict.keys() if not k.startswith("layers.") or int(k.split(".")[1]) < n_layers // 2]
part2_keys = [k for k in new_state_dict.keys() if k not in part1_keys]
state_dict_part1 = {k: new_state_dict[k] for k in part1_keys}
state_dict_part2 = {k: new_state_dict[k] for k in part2_keys}
torch.save(state_dict_part1, output_dir + "/consolidated.00.pth")
torch.save(state_dict_part2, output_dir + "/consolidated.01.pth")
with open(output_dir + "/params.json", "w") as f:
json.dump(params, f)
if __name__ == "__main__":
fire.Fire(main)
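The layer-based split at the end of the script can be exercised in isolation with synthetic key names (the values below assume the 13B checkpoint, n_layers = 40; the key list is illustrative):

```python
n_layers = 40  # 13B

keys = (
    ["tok_embeddings.weight", "norm.weight", "output.weight"]
    + [f"layers.{i}.attention.wq.weight" for i in range(n_layers)]
)

# same partition rule as in the export script
part1_keys = [k for k in keys if not k.startswith("layers.") or int(k.split(".")[1]) < n_layers // 2]
part2_keys = [k for k in keys if k not in part1_keys]
```

With this rule, the non-layer tensors plus layers 0-19 land in consolidated.00.pth and layers 20-39 in consolidated.01.pth, i.e. each part holds whole layers rather than per-tensor shards.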
Here is the problem I have when I run the inference:
./main -m ./models/13B/ggml-model-f16.bin -n 5000 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt -t 32
main: seed = 1681035697
llama_model_load: loading model from './models/13B/ggml-model-f16.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 5120
llama_model_load: n_mult = 256
llama_model_load: n_head = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot = 128
llama_model_load: f16 = 1
llama_model_load: n_ff = 13824
llama_model_load: n_parts = 2
llama_model_load: type = 2
llama_model_load: ggml map size = 25138.72 MB
llama_model_load: ggml ctx size = 101.25 KB
llama_model_load: mem required = 27186.82 MB (+ 1608.00 MB per state)
llama_model_load: loading tensors from './models/13B/ggml-model-f16.bin'
llama_model_load: tensor 'layers.20.attention.wq.weight' has wrong size in model file
llama_init_from_file: failed to load model
main: error: failed to load model './models/13B/ggml-model-f16.bin'
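One quick consistency check on the log: n_ff is derived from n_embd and n_mult by the formula in llama_model_load, and the 13B numbers reproduce the reported value (all-integer arithmetic, so Python's // mirrors the C++ division):

```python
n_embd = 5120  # from the log
n_mult = 256

# n_ff = ((2*(4*n_embd)/3 + n_mult - 1)/n_mult) * n_mult, with integer division
n_ff = ((2 * (4 * n_embd) // 3 + n_mult - 1) // n_mult) * n_mult
print(n_ff)  # 13824, matching the log
```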
|
13aca8fd5dc0afae0f4a4307f88ee4d7
|
{
"intermediate": 0.3594616949558258,
"beginner": 0.3786337077617645,
"expert": 0.26190462708473206
}
|
62
|
I can only change the export Python script; I need it to split the model files into two files, consolidated.00.pth and consolidated.01.pth, with the correct layer sizes.
This is the llama_model_load function:
static bool llama_model_load(
const std::string & fname,
llama_context & lctx,
int n_ctx,
int n_parts,
ggml_type memory_type,
bool vocab_only,
llama_progress_callback progress_callback,
void * progress_callback_user_data) {
fprintf(stderr, "%s: loading model from '%s' - please wait ...\n", __func__, fname.c_str());
lctx.t_start_us = ggml_time_us();
auto & model = lctx.model;
auto & vocab = lctx.vocab;
auto fin = std::ifstream(fname, std::ios::binary);
if (!fin) {
fprintf(stderr, "%s: failed to open '%s'\n", __func__, fname.c_str());
return false;
}
std::vector<char> f_buf(1024*1024);
fin.rdbuf()->pubsetbuf(f_buf.data(), f_buf.size());
fin.seekg(0, fin.end);
const size_t file_size = fin.tellg();
fin.seekg(0);
// verify magic
{
uint32_t magic;
fin.read((char *) &magic, sizeof(magic));
if (magic == LLAMA_FILE_MAGIC_UNVERSIONED) {
fprintf(stderr, "%s: invalid model file '%s' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)\n",
__func__, fname.c_str());
return false;
}
if (magic != LLAMA_FILE_MAGIC) {
return report_bad_magic(fname.c_str(), magic, LLAMA_FILE_MAGIC);
}
uint32_t format_version;
fin.read((char *) &format_version, sizeof(format_version));
if (format_version != LLAMA_FILE_VERSION) {
fprintf(stderr, "%s: invalid model file '%s' (unsupported format version %" PRIu32 ", expected %d)\n",
__func__, fname.c_str(), format_version, LLAMA_FILE_VERSION);
return false;
}
}
int n_ff = 0;
// load hparams
{
auto & hparams = model.hparams;
fin.read((char *) &hparams.n_vocab, sizeof(hparams.n_vocab));
//fin.read((char *) &hparams.n_ctx, sizeof(hparams.n_ctx));
fin.read((char *) &hparams.n_embd, sizeof(hparams.n_embd));
fin.read((char *) &hparams.n_mult, sizeof(hparams.n_mult));
fin.read((char *) &hparams.n_head, sizeof(hparams.n_head));
fin.read((char *) &hparams.n_layer, sizeof(hparams.n_layer));
fin.read((char *) &hparams.n_rot, sizeof(hparams.n_rot));
fin.read((char *) &hparams.f16, sizeof(hparams.f16));
hparams.n_ctx = n_ctx;
n_ff = ((2*(4*hparams.n_embd)/3 + hparams.n_mult - 1)/hparams.n_mult)*hparams.n_mult;
if (n_parts < 1) {
n_parts = LLAMA_N_PARTS.at(hparams.n_embd);
}
// temp warning to tell the user to use "--n_parts"
if (hparams.f16 == 4 && n_parts != 1) {
fprintf(stderr, "%s: GPTQ model detected - are you sure n_parts should be %d? we normally expect it to be 1\n", __func__, n_parts);
fprintf(stderr, "%s: use '--n_parts 1' if necessary\n", __func__);
}
if (hparams.n_layer == 32) {
model.type = e_model::MODEL_7B;
}
if (hparams.n_layer == 40) {
model.type = e_model::MODEL_13B;
}
if (hparams.n_layer == 60) {
model.type = e_model::MODEL_30B;
}
if (hparams.n_layer == 80) {
model.type = e_model::MODEL_65B;
}
fprintf(stderr, "%s: n_vocab = %d\n", __func__, hparams.n_vocab);
fprintf(stderr, "%s: n_ctx = %d\n", __func__, hparams.n_ctx);
fprintf(stderr, "%s: n_embd = %d\n", __func__, hparams.n_embd);
fprintf(stderr, "%s: n_mult = %d\n", __func__, hparams.n_mult);
fprintf(stderr, "%s: n_head = %d\n", __func__, hparams.n_head);
fprintf(stderr, "%s: n_layer = %d\n", __func__, hparams.n_layer);
fprintf(stderr, "%s: n_rot = %d\n", __func__, hparams.n_rot);
fprintf(stderr, "%s: f16 = %d\n", __func__, hparams.f16);
fprintf(stderr, "%s: n_ff = %d\n", __func__, n_ff);
fprintf(stderr, "%s: n_parts = %d\n", __func__, n_parts);
fprintf(stderr, "%s: type = %d\n", __func__, model.type);
}
// load vocab
{
std::string word;
vocab.id_to_token.resize(model.hparams.n_vocab);
std::vector<char> tmp(64);
for (int i = 0; i < model.hparams.n_vocab; i++) {
uint32_t len;
fin.read((char *) &len, sizeof(len));
word.resize(len);
if (len > 0) {
tmp.resize(len);
fin.read(tmp.data(), len);
word.assign(tmp.data(), len);
} else {
word.clear();
}
float score;
fin.read((char *) &score, sizeof(score));
vocab.token_to_id[word] = i;
auto &tok_score = vocab.id_to_token[i];
tok_score.tok = word;
tok_score.score = score;
}
}
if (vocab_only) {
return true;
}
// for the big tensors, we have the option to store the data in 16-bit floats or quantized
// in order to save memory and also to speed up the computation
// wtype is for per-layer weights, while vtype is for other weights
ggml_type wtype, vtype;
switch (model.hparams.f16) {
case 0: wtype = vtype = GGML_TYPE_F32; break;
case 1: wtype = vtype = GGML_TYPE_F16; break;
case 2: wtype = vtype = GGML_TYPE_Q4_0; break;
case 3: wtype = vtype = GGML_TYPE_Q4_1; break;
case 4: wtype = GGML_TYPE_Q4_1; vtype = GGML_TYPE_F16; break;
default:
{
fprintf(stderr, "%s: invalid model file '%s' (bad f16 value %d)\n",
__func__, fname.c_str(), model.hparams.f16);
return false;
}
}
// map model into memory
char * mm_addr = NULL;
model.mm_addr = mmap_file(fname.c_str(), &model.mm_length);
if (model.mm_addr == NULL) {
fprintf(stderr, "%s: failed to mmap '%s'\n", __func__, fname.c_str());
return false;
}
mm_addr = (char *)model.mm_addr;
fprintf(stderr, "%s: ggml map size = %6.2f MB\n", __func__, model.mm_length/(1024.0*1024.0));
auto & ctx = model.ctx;
size_t ctx_size = 0;
{
const auto &hparams = model.hparams;
const int n_layer = hparams.n_layer;
ctx_size += (5 + 10*n_layer)*256; // object overhead
fprintf(stderr, "%s: ggml ctx size = %6.2f KB\n", __func__, ctx_size/1024.0);
}
// print memory requirements
{
const size_t scale = memory_type == GGML_TYPE_F32 ? 2 : 1;
// this is the total memory required to run the inference
const size_t mem_required =
ctx_size +
model.mm_length +
MEM_REQ_SCRATCH0.at(model.type) +
MEM_REQ_SCRATCH1.at(model.type) +
MEM_REQ_EVAL.at (model.type);
// this is the memory required by one llama_state
const size_t mem_required_state =
scale*MEM_REQ_KV_SELF.at(model.type);
fprintf(stderr, "%s: mem required = %7.2f MB (+ %7.2f MB per state)\n", __func__,
mem_required / 1024.0 / 1024.0, mem_required_state / 1024.0 / 1024.0);
}
// create the ggml context
{
lctx.model.buf.resize(ctx_size);
struct ggml_init_params params = {
/*.mem_size =*/ lctx.model.buf.size(),
/*.mem_buffer =*/ lctx.model.buf.data(),
/*.no_alloc =*/ true,
};
model.ctx = ggml_init(params);
if (!model.ctx) {
fprintf(stderr, "%s: ggml_init() failed\n", func);
return false;
}
}
// prepare memory for the weights
{
const auto & hparams = model.hparams;
const int n_embd = hparams.n_embd;
const int n_layer = hparams.n_layer;
const int n_vocab = hparams.n_vocab;
model.layers.resize(n_layer);
model.tok_embeddings = ggml_new_tensor_2d(ctx, vtype, n_embd, n_vocab);
model.norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
model.output = ggml_new_tensor_2d(ctx, vtype, n_embd, n_vocab);
// map by name
model.tensors["tok_embeddings.weight"] = model.tok_embeddings;
model.tensors["norm.weight"] = model.norm;
model.tensors["output.weight"] = model.output;
for (int i = 0; i < n_layer; ++i) {
auto & layer = model.layers[i];
layer.attention_norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
layer.wq = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.wk = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.wv = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.wo = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd);
layer.ffn_norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
layer.w1 = ggml_new_tensor_2d(ctx, wtype, n_embd, n_ff);
layer.w2 = ggml_new_tensor_2d(ctx, wtype, n_ff, n_embd);
layer.w3 = ggml_new_tensor_2d(ctx, wtype, n_embd, n_ff);
// map by name
model.tensors["layers." + std::to_string(i) + ".attention_norm.weight"] = layer.attention_norm;
model.tensors["layers." + std::to_string(i) + ".attention.wq.weight"] = layer.wq;
model.tensors["layers." + std::to_string(i) + ".attention.wk.weight"] = layer.wk;
model.tensors["layers." + std::to_string(i) + ".attention.wv.weight"] = layer.wv;
model.tensors["layers." + std::to_string(i) + ".attention.wo.weight"] = layer.wo;
model.tensors["layers." + std::to_string(i) + ".ffn_norm.weight"] = layer.ffn_norm;
model.tensors["layers." + std::to_string(i) + ".feed_forward.w1.weight"] = layer.w1;
model.tensors["layers." + std::to_string(i) + ".feed_forward.w2.weight"] = layer.w2;
model.tensors["layers." + std::to_string(i) + ".feed_forward.w3.weight"] = layer.w3;
}
}
std::vector<uint8_t> tmp;
if (progress_callback) {
progress_callback(0.0, progress_callback_user_data);
}
fprintf(stderr, "%s: loading tensors from '%s'\n", func, fname.c_str());
// load weights
{
size_t total_size = 0;
model.n_loaded = 0;
while (true) {
int32_t n_dims;
int32_t length;
int32_t ftype;
fin.read(reinterpret_cast<char *>(&n_dims), sizeof(n_dims));
fin.read(reinterpret_cast<char *>(&length), sizeof(length));
fin.read(reinterpret_cast<char *>(&ftype), sizeof(ftype));
if (fin.eof()) {
break;
}
int32_t nelements = 1;
int32_t ne[2] = { 1, 1 };
for (int i = 0; i < n_dims; ++i) {
fin.read(reinterpret_cast<char *>(&ne[i]), sizeof(ne[i]));
nelements *= ne[i];
}
std::string name(length, 0);
fin.read(&name[0], length);
if (model.tensors.find(name.data()) == model.tensors.end()) {
fprintf(stderr, "%s: unknown tensor '%s' in model file\n", func, name.data());
return false;
}
auto tensor = model.tensors[name.data()];
if (ggml_nelements(tensor) != nelements) {
fprintf(stderr, "%s: tensor '%s' has wrong size in model file\n", func, name.data());
return false;
}
if (tensor->ne[0] != ne[0] || tensor->ne[1] != ne[1]) {
fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%" PRId64 ", %" PRId64 "], expected [%d, %d]\n",
func, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
return false;
}
if (0) {
static const char * ftype_str[] = { "f32", "f16", "q4_0", "q4_1", };
fprintf(stderr, "%24s - [%5d, %5d], type = %6s\n", name.data(), ne[0], ne[1], ftype_str[ftype]);
}
switch (ftype) {
case 0: // f32
case 1: // f16
break;
case 2: // q4_0
case 3: // q4_1
assert(ne[0] % 64 == 0);
break;
default:
fprintf(stderr, "%s: unknown ftype %d in model file\n", func, ftype);
return false;
};
// load the tensor data into memory without copying or reading it
size_t offset = fin.tellg();
size_t tensor_data_size = ggml_nbytes(tensor);
offset = (offset + 31) & -32;
tensor->data = mm_addr + offset;
fin.seekg(offset + tensor_data_size);
total_size += tensor_data_size;
model.n_loaded++;
// progress
if (progress_callback) {
double current_progress = size_t(fin.tellg()) / double(file_size);
progress_callback(current_progress, progress_callback_user_data);
}
}
fin.close();
fprintf(stderr, "%s: model size = %8.2f MB / num tensors = %d\n", func, total_size/1024.0/1024.0, model.n_loaded);
if (model.n_loaded == 0) {
fprintf(stderr, "%s: WARN no tensors loaded from model file - assuming empty model for testing\n", func);
} else if (model.n_loaded != (int) model.tensors.size()) {
fprintf(stderr, "%s: ERROR not all tensors loaded from model file - expected %zu, got %d\n", func, model.tensors.size(), model.n_loaded);
return false;
}
}
// loading time will be recalculated after the first eval, so
// we take page faults deferred by mmap() into consideration
lctx.t_load_us = ggml_time_us() - lctx.t_start_us;
if (progress_callback) {
progress_callback(1.0, progress_callback_user_data);
}
return true;
}
Here is how the model is exported:
#! /usr/bin/env python
# coding=utf-8
"""
Modified from: https://github.com/tloen/alpaca-lora
"""
import json
import os
import fire
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer
CHECKPOINT_PARAMS = {
"7b": {"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-06, "vocab_size": -1},
"13b": {"dim": 5120, "multiple_of": 256, "n_heads": 40, "n_layers": 40, "norm_eps": 1e-06, "vocab_size": -1},
"30b": {"dim": 6656, "multiple_of": 256, "n_heads": 52, "n_layers": 60, "norm_eps": 1e-06, "vocab_size": -1},
"65b": {"dim": 8192, "multiple_of": 256, "n_heads": 64, "n_layers": 80, "norm_eps": 1e-06, "vocab_size": -1},
}
def main(base_model_name_or_path: str, lora_model_name_or_path: str, output_dir: str, checkpoint_size: str = "7b"):
# Retrieve the model parameters
params = CHECKPOINT_PARAMS.get(checkpoint_size)
if params is None:
raise ValueError(
f"Cannot find the right model parameters for {checkpoint_size}. Please choose between {list(CHECKPOINT_PARAMS.keys())}."
)
# tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path)
base_model = LlamaForCausalLM.from_pretrained(
base_model_name_or_path,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map={"": "cpu"},
)
lora_model = PeftModel.from_pretrained(
base_model,
lora_model_name_or_path,
device_map={"": "cpu"},
torch_dtype=torch.float16,
)
# merge weights
for layer in lora_model.base_model.model.model.layers:
if hasattr(layer.self_attn.q_proj, "merge_weights"):
layer.self_attn.q_proj.merge_weights = True
if hasattr(layer.self_attn.v_proj, "merge_weights"):
layer.self_attn.v_proj.merge_weights = True
if hasattr(layer.self_attn.k_proj, "merge_weights"):
layer.self_attn.k_proj.merge_weights = True
if hasattr(layer.self_attn.o_proj, "merge_weights"):
layer.self_attn.o_proj.merge_weights = True
if hasattr(layer.mlp.gate_proj, "merge_weights"):
layer.mlp.gate_proj.merge_weights = True
if hasattr(layer.mlp.down_proj, "merge_weights"):
layer.mlp.down_proj.merge_weights = True
if hasattr(layer.mlp.up_proj, "merge_weights"):
layer.mlp.up_proj.merge_weights = True
lora_model.train(False)
lora_model_sd = lora_model.state_dict()
# params = {
# "dim": 4096,
# "multiple_of": 256,
# "n_heads": 32,
# "n_layers": 32,
# "norm_eps": 1e-06,
# "vocab_size": -1,
# }
n_layers = params["n_layers"]
n_heads = params["n_heads"]
dim = params["dim"]
dims_per_head = dim // n_heads
base = 10000.0
inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head))
def permute(w):
return w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim)
def unpermute(w):
return w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
def translate_state_dict_key(k):
k = k.replace("base_model.model.", "")
if k == "model.embed_tokens.weight":
return "tok_embeddings.weight"
elif k == "model.norm.weight":
return "norm.weight"
elif k == "lm_head.weight":
return "output.weight"
elif k.startswith("model.layers."):
layer = k.split(".")[2]
if k.endswith(".self_attn.q_proj.weight"):
return f"layers.{layer}.attention.wq.weight"
elif k.endswith(".self_attn.k_proj.weight"):
return f"layers.{layer}.attention.wk.weight"
elif k.endswith(".self_attn.v_proj.weight"):
return f"layers.{layer}.attention.wv.weight"
elif k.endswith(".self_attn.o_proj.weight"):
return f"layers.{layer}.attention.wo.weight"
elif k.endswith(".mlp.gate_proj.weight"):
return f"layers.{layer}.feed_forward.w1.weight"
elif k.endswith(".mlp.down_proj.weight"):
return f"layers.{layer}.feed_forward.w2.weight"
elif k.endswith(".mlp.up_proj.weight"):
return f"layers.{layer}.feed_forward.w3.weight"
elif k.endswith(".input_layernorm.weight"):
return f"layers.{layer}.attention_norm.weight"
elif k.endswith(".post_attention_layernorm.weight"):
return f"layers.{layer}.ffn_norm.weight"
elif k.endswith("rotary_emb.inv_freq") or "lora" in k:
return None
else:
print(layer, k)
raise NotImplementedError
else:
print(k)
raise NotImplementedError
new_state_dict = {}
for k, v in lora_model_sd.items():
new_k = translate_state_dict_key(k)
if new_k is not None:
if "wq" in new_k or "wk" in new_k:
new_state_dict[new_k] = unpermute(v)
else:
new_state_dict[new_k] = v
os.makedirs(output_dir, exist_ok=True)
# Split the tensors based on layer index
n_layers_actual = len([k for k in new_state_dict.keys() if ".attention.wq.weight" in k])
part1_keys = [k for k in new_state_dict.keys() if not k.startswith("layers.") or int(k.split(".")[1]) < (n_layers_actual // 2)]
part2_keys = [k for k in new_state_dict.keys() if k not in part1_keys]
state_dict_part1 = {k: new_state_dict[k] for k in part1_keys}
state_dict_part2 = {k: new_state_dict[k] for k in part2_keys}
torch.save(state_dict_part1, output_dir + "/consolidated.00.pth")
torch.save(state_dict_part2, output_dir + "/consolidated.01.pth")
with open(output_dir + "/params.json", "w") as f:
json.dump(params, f)
if __name__ == "__main__":
fire.Fire(main)
Here is the problem I have when I run the inference:
./main -m ./models/13B/ggml-model-f16.bin -n 5000 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt -t 32
main: seed = 1681035697
llama_model_load: loading model from './models/13B/ggml-model-f16.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 5120
llama_model_load: n_mult = 256
llama_model_load: n_head = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot = 128
llama_model_load: f16 = 1
llama_model_load: n_ff = 13824
llama_model_load: n_parts = 2
llama_model_load: type = 2
llama_model_load: ggml map size = 25138.72 MB
llama_model_load: ggml ctx size = 101.25 KB
llama_model_load: mem required = 27186.82 MB (+ 1608.00 MB per state)
llama_model_load: loading tensors from ‘./models/13B/ggml-model-f16.bin’
llama_model_load: tensor 'layers.20.attention.wq.weight' has wrong size in model file
llama_init_from_file: failed to load model
main: error: failed to load model './models/13B/ggml-model-f16.bin'
|
c9f3d229e1877393851dcb98b554dbcf
|
{
"intermediate": 0.4636182188987732,
"beginner": 0.3458104133605957,
"expert": 0.1905713826417923
}
|
63
|
val names = document.get("name") as MutableList<String>
java.lang.String cannot be cast to java.util.List
|
db629e41893685545e1030da821473c0
|
{
"intermediate": 0.410621702671051,
"beginner": 0.32385438680648804,
"expert": 0.26552391052246094
}
|
64
|
In a situation where an Instagram professional account, the Facebook API / Instagram Graph API, and Python 3 can be used as tools, I created the code below so that: (1) without having to manually upload the content I posted to Instagram, the content images to be analyzed are automatically downloaded from Instagram and displayed, and, in a single pane, each piece of content shows its number of "likes" and follows, the percentage each represents of impressions, and the IDs and icons of the members who commented; (2) as much analytics data as can be obtained for each piece of content, such as impressions and engagement, is retrieved so it can be analyzed across content, displayed in a second pane as interactive graphs and charts using Streamlit, Streamlit Share, and a browser; (3) when changing which elements the graphs display, the elements can be selected and changed interactively by clicking in the browser UI, without modifying the code; and (4) the necessary information is embedded in the code in advance so that IDs and API credentials do not have to be entered every time the application opens. First, please just absorb this content without replying.
'''
Python
import streamlit as st
import pandas as pd
import requests
import json
import plotly.express as px
from PIL import Image
from io import BytesIO
# Set the access token and account ID from environment variables or previously entered information
access_token = ""
account_id = ""
def get_instagram_data():
base_url = f"https://graph.facebook.com/v11.0/{account_id}/media"
params = {
"fields": "id,media_type,media_url,thumbnail_url,permalink,caption,timestamp,like_count,comments_count,comments{username,profile_picture_url,text},insights.metric(impressions,engagement)",
"access_token": access_token
}
results = []
while base_url:
response = requests.get(base_url, params=params)
data = json.loads(response.text)
results.extend(data["data"])
if "paging" in data and "next" in data["paging"]:
base_url = data["paging"]["next"]
else:
base_url = None
# Assign a default value to elements that do not have a 'comments' field.
for result in results:
if "comments" not in result:
result["comments"] = []
df = pd.json_normalize(
results,
record_path='comments',
meta=[
'id', 'media_type', 'media_url', 'thumbnail_url',
'permalink', 'caption', 'timestamp', 'like_count',
'comments_count', 'insights'
],
errors='ignore' # Ignore errors; set NaN when no thumbnail image exists
)
return df
df = get_instagram_data()
menu = ["Content", "Analytics"]
choice = st.sidebar.radio("Menu", menu)
if choice == "Content":
selected_id = st.sidebar.selectbox("Select Post", df["id"].unique())
selected_data = df[df["id"] == selected_id].iloc[0]
image_url = selected_data["media_url"] if selected_data["media_type"] == "IMAGE" else selected_data["thumbnail_url"]
image_response = requests.get(image_url)
image = Image.open(BytesIO(image_response.content))
likes_follows_percentage = (float(selected_data["like_count"]) / float(selected_data["insights"][0]['values'][0]['value'])) * 100
st.image(image, use_column_width=True)
st.write(f"Likes: {selected_data['like_count']} ({likes_follows_percentage:.2f}%)")
st.write(f"Comments: {selected_data['comments_count']}")
comments_df = df[df["id"] == selected_id]
st.write(comments_df[['username', 'text']])
elif choice == "Analytics":
# How to create interactive graphs and charts using data such as impressions and engagement
# The following is sample code; adjust it as needed to match the actual data structure.
categories = ["Impressions", "Engagement"]
selected_category = st.selectbox("Select metric", categories)
if selected_category == "Impressions":
# Build a chart using impression data
...
elif selected_category == "Engagement":
# Build a chart using engagement data
...
'''
|
4b93f2ee31db290820c5408d061f796f
|
{
"intermediate": 0.42926570773124695,
"beginner": 0.44906342029571533,
"expert": 0.12167090177536011
}
|
65
|
Considering how monday.com works, you must provide a complete list of feature tests that would be used to test a platform like monday.com.
|
f451ce79fa24cdef1bc1c63468262753
|
{
"intermediate": 0.33253076672554016,
"beginner": 0.2616540491580963,
"expert": 0.4058152139186859
}
|
66
|
From this code I want a list of users sorted by number of tweets and, in case of a tie, by number of replies: import tweepy
client = tweepy.Client(
bearer_token='AAAAAAAAAAAAAAAAAAAAAPFamQEAAAAAPOx1y2vzxQf8Qjb8J68VCiK6M3E%3DUxjFF0WpJmBedg2mzP8PMLU4OWEZgmaok4B0eByBiSOyLdfilh')
tweets = client.search_recent_tweets(query='autodeterminacio -is:retweet',
tweet_fields=['created_at', 'context_annotations', 'public_metrics', 'geo'],
expansions=['author_id', 'geo.place_id'],
user_fields=['public_metrics'],
place_fields=['place_type', 'geo'],
max_results=100)
# create a dictionary mapping user ids to their usernames and follower counts
users = {}
for user in tweets.includes['users']:
users[user['id']] = {'username': user['username'], 'followers': user['public_metrics']['followers_count']}
# bubble sort by reply_count
for i in range(len(tweets.data)):
for j in range(0, len(tweets.data) - i - 1):
reply_count_ratio_j = tweets.data[j]['public_metrics']['reply_count'] / users[tweets.data[j]['author_id']][
'followers']
reply_count_ratio_j_plus1 = tweets.data[j + 1]['public_metrics']['reply_count'] / \
users[tweets.data[j + 1]['author_id']]['followers']
if reply_count_ratio_j > reply_count_ratio_j_plus1:
tweets.data[j], tweets.data[j + 1] = tweets.data[j + 1], tweets.data[j]
# Create an empty list to store the text of each tweet
textos = []
for tweet in tweets.data:
print("----------------------------------------------")
# Get the tweet author's id
author_id = tweet.author_id
# Get the author's username and followers from the dictionary
username = users[author_id]['username']
followers = users[author_id]['followers']
# Save the text to a txt file
texto = "User " + str(username) + " Followers " + str(followers) + " " + tweet.text + " - " + str(
tweet.created_at) + " - " + " Reply Count " + str(
tweet['public_metrics']['reply_count'])
textos.append(texto)
print(texto)
# Overwrite the file with the text of all tweets
with open('fichero.txt', 'w', encoding='utf-8') as f:
f.writelines('\n'.join(textos))
# Compute the total number of followers
total_followers = sum([user['followers'] for user in users.values()])
# Compute the mean number of followers
media_followers = total_followers / len(users.values())
# Print the mean number of followers
print("La media de followers es:", media_followers)
# Sort the list of followers
followers_list = sorted([user['followers'] for user in users.values()])
# Determine whether the number of users is even or odd
if len(followers_list) % 2 == 0:
# Get the two middle values
middle1 = followers_list[int((len(followers_list) - 1) / 2)]
middle2 = followers_list[int(len(followers_list) / 2)]
# Compute the median
median_followers = (middle1 + middle2) / 2
else:
# Get the middle value
median_followers = followers_list[int(len(followers_list) / 2)]
# Print the median number of followers
print("La mediana de followers es:", median_followers)
|
d17d67ced7b5b3c6297943c0471f916b
|
{
"intermediate": 0.34908801317214966,
"beginner": 0.38163256645202637,
"expert": 0.2692793905735016
}
|
67
|
html code for select color red and green and blue
|
8ef53d7331711643b629b23ed56e49b3
|
{
"intermediate": 0.36853817105293274,
"beginner": 0.3568252623081207,
"expert": 0.27463653683662415
}
|
68
|
write me code to generate whatsapp voice calls manually using baileys
|
2003e9ff4fefbdef3ce278bbcfe1a4ee
|
{
"intermediate": 0.3847748935222626,
"beginner": 0.19372187554836273,
"expert": 0.4215031862258911
}
|
69
|
Why, if I put a link to an image in an .html file, can it not be read in Tomcat when I have a filter?
|
12da919f41326a34c230800ef708aa72
|
{
"intermediate": 0.4789048433303833,
"beginner": 0.1915338933467865,
"expert": 0.3295612633228302
}
|
70
|
How can I call a function that is inside another function in Python?
|
3801050744c30b00a5e39575da8cc7ee
|
{
"intermediate": 0.3792857527732849,
"beginner": 0.4532861113548279,
"expert": 0.16742809116840363
}
|
71
|
Can you help me make a Tampermonkey script that lets me spoof my GPU? Here are my results:
gpu:
Google Inc. (NVIDIA)
ANGLE (NVIDIA, NVIDIA GeForce RTX 3060 Direct3D11 vs_5_0 ps_5_0, D3D11)
userAgent:ua reduction
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36
device:
Windows (Win32)
Windows 10 (64-bit)
cores: 4, ram: 2
userAgentData:
Brave 112 (112.0.0.0)
Windows 10 (2004|20H2|21H1) [10.0.0] x86_64
142.00msWebGL84329ee5
images:d1c64e9a
pixels:26078dc7
params (78):
exts (48):
gpu:confidence: high
Google Inc.
ANGLE (Intel(R) HD Graphics 3000 Direct3D9Ex vs_3_0 ps_3_0)
|
7941db4c2a5d9c2af33585d34aea6249
|
{
"intermediate": 0.4204650819301605,
"beginner": 0.2566912770271301,
"expert": 0.32284364104270935
}
|
72
|
Write the code for an app for an Android smartphone.
|
0ad301e6c7e658cfbe32352cdb29d2fd
|
{
"intermediate": 0.2684491276741028,
"beginner": 0.18425479531288147,
"expert": 0.5472960472106934
}
|
73
|
library(rvest)
library(tidyverse)
library(tidyr)
library(openxlsx)
library(readxl)
# Read the EudraCT codes from the file (reads first column from first sheet)
eudract_codes <- read_excel("EUCTR_rvest_data/EUCTR_output.xlsx", sheet = 1, col_names = FALSE, skip = 1)[[1]]
# Remove duplicates
eudract_codes <- unique(eudract_codes)
# Define the variables to scrape
variables <- c("Reporting group title",
"Reporting group description")
# Create an empty dataframe to store the cumulative data
cumulative_group_description <- data.frame(matrix(ncol = length(variables) + 1, nrow = 0))
colnames(cumulative_group_description) <- c("EudraCT_code", variables)
# Loop through each EudraCT code
for (eudract_code in eudract_codes) {
# Construct the URL using the EudraCT code
url <- paste0("https://www.clinicaltrialsregister.eu/ctr-search/trial/", eudract_code, "/results")
# Read the HTML content of the trial results page
page <- read_html(url)
# Extract the data for each variable
data_list <- lapply(variables, function(var) {
values <- page %>%
html_nodes(paste0("td.labelColumn:contains('", var, "') + td.valueColumn")) %>%
html_text(trim = T)
return(values)
})
# Combine the data into a list with the EudraCT code and the variable values
data_list <- c(list(eudract_code), data_list)
# Find the max number of rows needed for this EudraCT code
num_rows <- max(sapply(data_list, length))
# Create a temporary data frame to store the extracted data
temp_df <- data.frame(matrix(ncol = length(variables) + 1, nrow = num_rows))
colnames(temp_df) <- c("EudraCT_code", variables)
# Populate the temporary data frame with the extracted data
for (i in 1:length(data_list)) {
temp_df[[i]] <- rep(data_list[[i]], length.out = num_rows)
}
# Append the temporary data frame to the cumulative data frame
cumulative_group_description <- rbind(cumulative_group_description, temp_df)
}
# Export the cumulative data to a new Excel file
write.xlsx(cumulative_group_description, "EUCTR_rvest_data/Group_Descriptions.xlsx", rowNames = FALSE)
___________________
Edit that code, so that the scraped data is limited to data found inside the table with the HTML <table id="adverseEventsSection" class="sectionTable">
|
e87613480ae8cac9d123b83b3c801ab1
|
{
"intermediate": 0.49484866857528687,
"beginner": 0.3408656418323517,
"expert": 0.16428570449352264
}
|
74
|
Please write a VBA program to batch print the Excel, Word, and PPT files in one folder to PDF format files.
|
4cbe900ae53b94c555337be8e8c81e0f
|
{
"intermediate": 0.4331897497177124,
"beginner": 0.14916642010211945,
"expert": 0.41764381527900696
}
|
75
|
test
|
e020643b30612cf889f63acc91eb4c75
|
{
"intermediate": 0.3229040801525116,
"beginner": 0.34353747963905334,
"expert": 0.33355844020843506
}
|
76
|
Show me a python code for image processing
|
4707ccaa21f60f8540ba5f1af0cad089
|
{
"intermediate": 0.4183520972728729,
"beginner": 0.13541433215141296,
"expert": 0.4462336003780365
}
|
77
|
give me quick TFMA guide
|
1821d2bc2dcf3eca8a4854a333406d19
|
{
"intermediate": 0.34194570779800415,
"beginner": 0.30871471762657166,
"expert": 0.3493395447731018
}
|
78
|
what AI model are you using?
|
c779a384f15760f48235dd814a07fe94
|
{
"intermediate": 0.049264632165431976,
"beginner": 0.056790005415678024,
"expert": 0.8939453959465027
}
|