Q:
Create a dataframe with a specific name within a function, depending on input
I need to create a dataframe with a specific name within a function, depending on the input.
Here is my code:
`
def filter_season (df_teams ,season):
df_teams[season]= df_teams[df_teams['SEASON']== season ]
return df_teams[season]
`
The error I got: ValueError: Wrong number of items passed 34, placement implies 1
I expect a result where the dataframe is created with a name based on the condition given to the function.
ex:
filter_season(df_teams, 22) #(Refers to season 2022)
OUTPUT:
df_teams_22
A:
IIUC, use the varname package together with globals():
def filter_name(df, season):
    sub_df = df.loc[df['SEASON'].eq(season)].copy()
    globals()[nameof(df) + "_" + str(season)] = sub_df  # str() so an int season like 22 also works
And here is an example to give you the general logic.
import pandas as pd
from varname import nameof
df = pd.DataFrame({'character': ['cobra', 'viper', 'sidewinder'],
'max_speed': [1, 4, 7],
'shield': [2, 5, 8]})
print(df)
character max_speed shield
0 cobra 1 2
1 viper 4 5
2 sidewinder 7 8
Now, let's apply our function to return a new dataframe with a custom name (based on the filter).
def filter_name(df, charname):
sub_df = df.loc[df['character'].eq(charname)].copy()
globals()[nameof(df) + "_" + charname] = sub_df
filter_name(df, "viper")
# Output :
print(df_viper, type(df_viper))
character max_speed shield
1 viper 4 5 <class 'pandas.core.frame.DataFrame'>
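A note on the design choice: globals() works, but dynamically named variables are hard to track, and nameof(df) inside the function resolves to the parameter name df rather than the caller's variable name. A hedged alternative sketch (same filtering logic, no varname needed) keeps the frames in a plain dict keyed by the name you would have generated:
# a minimal sketch, assuming df_teams has a 'SEASON' column as in the question
season_frames = {}

def filter_season(df, season):
    season_frames["df_teams_" + str(season)] = df.loc[df['SEASON'].eq(season)].copy()

# filter_season(df_teams, 22), then access season_frames["df_teams_22"]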
Tags: dataframe, function, python, python-3.x
Q:
Python web scraping - sleep time oscillates on slow websites
I have a web-scraping script, but the site I'm using is slow on some days and not on others. With a fixed SLEEP, it throws an error on some days. How do I fix this?
I use SLEEP in the intervals between the tasks I have set up, because the site is sometimes slow and does not return the result in time, which gives me an error.
from bs4 import BeautifulSoup
from selenium import webdriver  # needed for webdriver.Firefox below
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import Select
import pandas as pd
import json
from time import sleep
options = Options()
options.headless = True
navegador = webdriver.Firefox(options = options)
link = '****************************'
navegador.get(url = link)
sleep(1)
usuario = navegador.find_element(by=By.ID, value='ctl00_ctl00_Content_Content_txtLogin')
usuario.send_keys('****************************')
sleep(1)
senha = navegador.find_element(by=By.ID, value='ctl00_ctl00_Content_Content_txtSenha')
senha.send_keys('****************************')
sleep(2.5)
botaologin = navegador.find_element(by=By.ID, value='ctl00_ctl00_Content_Content_btnEnviar')
botaologin.click()
sleep(40)
agendamento = navegador.find_element(by=By.ID, value='ctl00_ctl00_Content_Content_TreeView2t8')
agendamento.click()
sleep(2)
selecdia = navegador.find_element(By.CSS_SELECTOR, "a[title='06 de dezembro']")
selecdia.click()
sleep(2)
selecterminal = navegador.find_element(by=By.ID, value='ctl00_ctl00_Content_Content_ddlVagasTerminalEmpresa')
selecterminal.click()
sleep(1)
select = Select(navegador.find_element(by=By.ID, value='ctl00_ctl00_Content_Content_ddlVagasTerminalEmpresa'))
select.select_by_index(1)
sleep(10)
buscalink = navegador.find_elements(by=By.XPATH, value='//*[@id="divScroll"]')
for element in buscalink:
teste3 = element.get_attribute('innerHTML')
soup = BeautifulSoup(teste3, "html.parser")
Vagas = soup.find_all(title="Vaga disponível.")
print(Vagas)
temp=[]
for i in Vagas:
on_click = i.get('onclick')
temp.append(on_click)
df = pd.DataFrame(temp)
df.to_csv('test.csv', mode='a', header=False, index=False)
It returns an error because the page does not load in time and the script cannot get the data, but this loading time is variable.
A:
Instead of all these hardcoded sleeps you need to use WebDriverWait with expected_conditions explicit waits.
With it you can set a timeout period so Selenium will poll the page periodically until the expected condition is fulfilled.
For example, if you need to click a button, you will wait for that element's clickability. Once this condition is met, Selenium will return that element and you will be able to click it.
This removes all the redundant delays on the one hand, and keeps waiting until the condition is matched on the other hand (as long as it happens within the defined timeout).
So, your code can be modified as following:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
#-----
wait = WebDriverWait(navegador, 30)
navegador.get(link)
wait.until(EC.element_to_be_clickable((By.ID, "ctl00_ctl00_Content_Content_txtLogin"))).send_keys('****************************')
wait.until(EC.element_to_be_clickable((By.ID, "ctl00_ctl00_Content_Content_txtSenha"))).send_keys('****************************')
wait.until(EC.element_to_be_clickable((By.ID, "ctl00_ctl00_Content_Content_btnEnviar"))).click()
wait.until(EC.element_to_be_clickable((By.ID, "ctl00_ctl00_Content_Content_TreeView2t8"))).click()
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[title='06 de dezembro']"))).click()
etc.
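For completeness, here is a hedged sketch of how the remaining steps (the Select dropdown and the results div) could look in the same explicit-wait style; the locator values are taken from the question, the rest is an assumption (Select is already imported in your script):
# wait for the dropdown, pick the option, then wait for the results container
select_element = wait.until(EC.element_to_be_clickable((By.ID, "ctl00_ctl00_Content_Content_ddlVagasTerminalEmpresa")))
Select(select_element).select_by_index(1)

buscalink = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//*[@id="divScroll"]')))
for element in buscalink:
    teste3 = element.get_attribute('innerHTML')
    # ... parse with BeautifulSoup as before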
Tags: python, selenium, sleep, web-scraping, webdriverwait
Q:
Get current learning rate when using ReduceLROnPlateau
I am using ReduceLROnPlateau to modify the learning rate during training of a PyTorch model. ReduceLROnPlateau does not inherit from LRScheduler and does not implement the get_last_lr method, which is PyTorch's recommended way of getting the current learning rate when using a learning rate scheduler.
How can I get the learning rate when using ReduceLROnPlateau?
Currently I am doing the following but am not sure if this is rigorous and correct:
lr = optimizer.state_dict()["param_groups"][0]["lr"]
A:
You can skip the state_dict of the optimizer and access the learning rate directly:
optimizer.param_groups[0]["lr"]
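A hedged end-to-end sketch (the model, optimizer settings, and validation loss below are placeholders, not taken from the question) showing where that lookup fits in a training loop:
# a minimal sketch; val_loss is a stand-in for a real validation metric
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", patience=2)

for epoch in range(20):
    val_loss = 1.0 / (epoch + 1)                   # placeholder metric
    scheduler.step(val_loss)                       # may lower the lr on a plateau
    print(epoch, optimizer.param_groups[0]["lr"])  # current learning rate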
Tags: learning-rate, python, pytorch
Q:
Python http server with multiple directories
Is it possible to add multiple paths from different drives in the os.chdir method?
like, 'd:\\folder1' , 'e:\\folder2'
I tried to add two paths, but could not join them; I got a syntax error.
A:
I managed to do it using symbolic links:
On windows, you can create symbolic links to directories as so:
mklink /D <symbolic link name> <destination directory>
So in a new folder you can run:
mklink /D folder1 "D:\folder1"
mklink /D folder2 "E:\folder2"
On linux this would be:
ln -s <destination directory> <symbolic link name>
ln -s /mnt/d/folder1 folder1
ln -s /mnt/e/folder2 folder2
Then, by running a Python HTTP server in that directory, you can access both folders as sub-folders of the server:
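Here is a minimal hedged sketch of that last step using only the standard library (the path and port are placeholders):
import http.server
import os
import socketserver

os.chdir("/path/to/folder-with-links")  # hypothetical folder containing the folder1/folder2 links

with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()  # then browse http://localhost:8000/folder1/ and /folder2/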
Tags: python
Q:
Vector field with numpy and matplotlib
I know how to generate a vector field over the whole plane, but now I'm trying to create the vectors just on a specific line. My code is
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-3,3,15)
y = np.linspace(-3,3,15)
x,y = np.meshgrid(x,y)
u = x
v = (x-y)
plt.quiver(x,y,u,v,color = "purple")
plt.show()
That creates the vector field over the whole plane, but I want the vector field along the line x=y. How should I do that?
(For example, create the line with x1 and y1, and use x1, y1 in place of x, y in u, v.)
A:
For the case along the line x=y, you can define the coordinates as follows:
X = np.linspace(0,9,10)
Y = np.linspace(0,13.5,10)
U = np.ones(10)
V = np.ones(10)
plt.quiver(X, Y, U, V, color='b', units='xy', scale=1)
plt.xlim(-2, 15)
plt.ylim(-2, 15)
plt.show()
Output: (quiver plot of the arrows at the sampled points)
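For the exact line x = y from the question, a hedged variation on the same idea, reusing the question's field u = x, v = x - y (which is zero everywhere on that line):
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-3, 3, 15)
x1, y1 = t, t              # sample points on the line x = y
u = x1
v = x1 - y1                # all zeros on this line
plt.quiver(x1, y1, u, v, color="purple")
plt.show()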
Tags: python, vector-graphics
Q:
Trying to append csv into another csv AS A ROW but I am getting this AttributeError: '_io.TextIOWrapper' object has no attribute 'writerows'
I am trying to append the content of one of my CSVs as a ROW to another CSV, but I am getting this attribute error. I am unsure how to fix it. I think the issue is with writer.writerows(row), but I don't know what I should change for .writerows(row) to work.
This is my code for appending the first CSV to the second CSV.
with open('csv1', 'r', encoding='utf8') as reader, open('csv2', 'a', encoding='utf8') as writer:
for row in reader:
writer.writerows(row)
A:
Use write() instead, because writerows() belongs to csv.writer, not a normal file object. However, if you want to append at the end of the file, you need to make sure that the last row already ends with a newline (i.e., \n).
with open('test1.csv', 'r', encoding='utf8') as reader:
with open('test2.csv', 'a', encoding='utf8') as writer:
writer.write("\n") # no need if the last row have new line already
for line in reader:
writer.write(line)
Or, if you want to use the csv module, you can use writerows() as shown in the code below:
import csv
with open('test1.csv', 'r', encoding='utf8') as reader:
with open('test2.csv', 'a', encoding='utf8') as writer:
csv_reader = csv.reader(reader)
csv_writer = csv.writer(writer)
csv_writer.writerow([]) # again, no need if the last row have new line already
csv_writer.writerows(csv_reader)
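One hedged refinement on the csv variant: the csv module docs recommend opening files with newline='' so the writer controls line endings itself, e.g.:
import csv

# same copy as above, but with newline='' as the csv docs suggest
with open('test1.csv', 'r', encoding='utf8', newline='') as reader:
    with open('test2.csv', 'a', encoding='utf8', newline='') as writer:
        csv.writer(writer).writerows(csv.reader(reader))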
Tags: append, attributeerror, csv, python
Q:
AttributeError: 'str' object has no attribute 'append'
I am a complete newbie in programming, so I started learning Python. In this program I want to print the names of the second-lowest scorers and, for multiple students, print them alphabetically. So I have written this program and I am trying to add each student who has the second-lowest score.
So I add them to the list named "name", but it is showing the error.
My doubt is that I am not appending to a string but to a list. Then why this error?
if __name__ == '__main__':
student_info = []
name = []
for _ in range(int(input())):
name = input()
score = float(input())
student_info.extend([[name,score]])
student_info.sort(key=lambda x:x[1])
for i in range(0,len(student_info)):
if student_info[1][1] == student_info[i][1]:
name.append(student_info[i][1])
name.sort()
for i in name:
print(i)
AttributeError Traceback (most recent call last)
C:\Users\PABANG~1\AppData\Local\Temp/ipykernel_1504/3074229933.py in <module>
10
11 if student_info[1][1] == student_info[i][1]:
---> 12 name.append(student_info[i][1])
13 name.sort()
14 for i in name:
AttributeError: 'str' object has no attribute 'append'
A:
I've spotted a couple of issues in your code and hope this helps.
Your name variable gets reassigned by input(), which returns a string, so declare a different variable name for the per-student input.
For getting the student name, it should be name.append(student_info[i][0]), not student_info[i][1].
if __name__ == '__main__':
student_info = []
name = []
for _ in range(int(input())):
student_name = input()
score = float(input())
student_info.extend([[student_name ,score]])
student_info.sort(key=lambda x:x[1])
for i in range(0,len(student_info)):
if student_info[1][1] == student_info[i][1]:
name.append(student_info[i][0])
name.sort()
for i in name:
print(i)
Tags: append, list, python
Q:
Apache Airflow not starting in local
Getting the below error when running the command airflow standalone
Error
scheduler | [2022-12-04 13:18:14 +0530] [47519] [ERROR] Can't connect to ('::', 8793)
webserver | [2022-12-04 13:18:14 +0530] [47517] [ERROR] Can't connect to ('0.0.0.0', 8080)
I have tried installing Apache Airflow multiple times and killing the processes in Activity Monitor, but the same error keeps appearing.
A:
This error typically indicates that there is another process or service running on the same ports that the Apache Airflow webserver and scheduler are trying to use. This can cause a conflict and prevent Apache Airflow from starting properly.
To resolve this error, you will need to identify and stop the process or service that is using the ports that Apache Airflow is trying to use. This can be done by using the netstat command to list the processes that are listening on the relevant ports, and then using the kill command to stop the process.
For example, to stop the process that is using port 8080, you could run the following command:
netstat -plnt | grep :8080
This will list the process that is using port 8080. You can then stop it with the kill command, using the process ID shown in the netstat output (e.g., kill 47517, matching the PID from your log).
Once you have stopped the conflicting process or service, you should be able to start Apache Airflow without encountering the error. You can then verify that it is running properly by accessing the Airflow web interface and checking the status of the webserver and scheduler.
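If you prefer to check from Python, here is a small hedged helper (standard library only; the host and ports are assumptions based on the error message):
import socket

def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0  # 0 means something is already listening

for port in (8080, 8793):  # webserver and scheduler log-server ports from the error
    print(port, "in use" if port_in_use(port) else "free")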
Tags: airflow, python
Q:
Scrape all possible results from a search bar with search result limit
Trying to scrape all the names from this website with Python:
https://profile.tmb.state.tx.us/Search.aspx?9e94dec6-c7e7-4054-b5fb-20a1fcdbab53
The issue is that it limits each search to the top 50 results.
Since the last name search allows wildcards, I tried using one search result to narrow down subsequent search results (using prefixes). However, this approach becomes difficult when more than 50 people have the same last name.
Any other ideas on how to get every possible name from this website? Thank you!!
A:
Looking at the request and JS, it seems like this limit is server-side. I don't see any way to retrieve more than 50 results.
Brute-force is the only way I think you could scrape this site, and it's not so trivial. You would need to generate queries more and more specific until the response has less than 50 results.
Starting with length-one prefixes (a, for example's sake), you could search a*. If there are fewer than 50 results, scrape them and move on to the next combination. Otherwise you'll need to scrape all length-two combinations of characters beginning with a: aa*, ab*, ac*, etc.
I'm sure there's some term for this, but I don't know it!
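A hedged sketch of that idea; search_last_name() is a hypothetical helper that submits the wildcard query and returns the parsed result rows:
import string

def scrape_prefix(prefix, results):
    hits = search_last_name(prefix + "*")  # hypothetical request/parse helper
    if len(hits) < 50:                     # under the server-side cap: keep them
        results.update(hits)
    else:                                  # capped: refine one character deeper
        for ch in string.ascii_lowercase:
            scrape_prefix(prefix + ch, results)

all_names = set()
for ch in string.ascii_lowercase:
    scrape_prefix(ch, all_names)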
A:
I think it would work better by stepping through the characters one by one (for example, decrementing AAB -> AAA). You'll find all the names, just as with the trivial solution, but it will take a lot of time. For optimisation you can use a headless browser.
Tags: python, scrapy, search, selenium, web-scraping
Q:
Create 3D array using Python
I would like to create a 3D array in Python (2.7) to use like this:
distance[i][j][k]
And the sizes of the array should come from a variable I have (n*n*n).
I tried using:
distance = [[[]*n]*n]
but that didn't seem to work.
I can only use the default libraries, and the multiplication method (i.e., [[0]*n]*n) won't work because the copies are linked to the same reference, and I need all of the values to be individual.
A:
You should use a list comprehension:
>>> import pprint
>>> n = 3
>>> distance = [[[0 for k in xrange(n)] for j in xrange(n)] for i in xrange(n)]
>>> pprint.pprint(distance)
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]]
>>> distance[0][1]
[0, 0, 0]
>>> distance[0][1][2]
0
You could have produced a data structure with a statement that looked like the one you tried, but it would have had side effects since the inner lists are copy-by-reference:
>>> distance=[[[0]*n]*n]*n
>>> pprint.pprint(distance)
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]]
>>> distance[0][0][0] = 1
>>> pprint.pprint(distance)
[[[1, 0, 0], [1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [1, 0, 0]]]
A:
numpy.arrays are designed just for this case:
numpy.zeros((i,j,k))
will give you an array of dimensions i*j*k, filled with zeroes.
depending what you need it for, numpy may be the right library for your needs.
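A quick hedged usage check of that approach:
import numpy as np

n = 3
distance = np.zeros((n, n, n))
distance[0, 1, 2] = 7.5      # numpy-style indexing
print(distance[0][1][2])     # chained indexing also works, prints 7.5
print(distance.shape)        # (3, 3, 3)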
A:
The right way would be
[[[0 for _ in range(n)] for _ in range(n)] for _ in range(n)]
(What you're trying to do should be written like (for NxNxN)
[[[0]*n]*n]*n
but that is not correct, see @Adaman comment why).
A:
d3 = [[[0 for col in range(4)]for row in range(4)] for x in range(6)]
d3[1][2][1] = 144
d3[4][3][0] = 3.12
for x in range(len(d3)):
print d3[x]
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 144, 0, 0], [0, 0, 0, 0]]
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [3.12, 0, 0, 0]]
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
A:
"""
Create 3D array for given dimensions - (x, y, z)
@author: Naimish Agarwal
"""
def three_d_array(value, *dim):
"""
Create 3D-array
:param dim: a tuple of dimensions - (x, y, z)
:param value: value with which 3D-array is to be filled
:return: 3D-array
"""
return [[[value for _ in xrange(dim[2])] for _ in xrange(dim[1])] for _ in xrange(dim[0])]
if __name__ == "__main__":
array = three_d_array(False, *(2, 3, 1))
x = len(array)
y = len(array[0])
z = len(array[0][0])
print x, y, z
array[0][0][0] = True
array[1][1][0] = True
print array
Prefer to use numpy.ndarray for multi-dimensional arrays.
A:
You can also use a nested for loop like shown below
n = 3
arr = []
for x in range(n):
arr.append([])
for y in range(n):
arr[x].append([])
for z in range(n):
arr[x][y].append(0)
print(arr)
A:
There are many ways to address your problem.
The first one is the accepted answer by @robert. Here is the generalised solution for it:
def multi_dimensional_list(value, *args):
#args dimensions as many you like. EG: [*args = 4,3,2 => x=4, y=3, z=2]
#value can only be of immutable type. So, don't pass a list here. Acceptable value = 0, -1, 'X', etc.
if len(args) > 1:
return [ multi_dimensional_list(value, *args[1:]) for col in range(args[0])]
elif len(args) == 1: #base case of recursion
return [ value for col in range(args[0])]
else: #edge case when no values of dimensions is specified.
return None
Eg:
>>> multi_dimensional_list(-1, 3, 4) #2D list
[[-1, -1, -1, -1], [-1, -1, -1, -1], [-1, -1, -1, -1]]
>>> multi_dimensional_list(-1, 4, 3, 2) #3D list
[[[-1, -1], [-1, -1], [-1, -1]], [[-1, -1], [-1, -1], [-1, -1]], [[-1, -1], [-1, -1], [-1, -1]], [[-1, -1], [-1, -1], [-1, -1]]]
>>> multi_dimensional_list(-1, 2, 3, 2, 2 ) #4D list
[[[[-1, -1], [-1, -1]], [[-1, -1], [-1, -1]], [[-1, -1], [-1, -1]]], [[[-1, -1], [-1, -1]], [[-1, -1], [-1, -1]], [[-1, -1], [-1, -1]]]]
P.S If you are keen to do validation for correct values for args i.e. only natural numbers, then you can write a wrapper function before calling this function.
Secondly, any multidimensional array can be written as a single-dimension array. This means you don't need a multidimensional array. Here are the functions for index conversion:
def convert_single_to_multi(value, max_dim):
dim_count = len(max_dim)
values = [0]*dim_count
for i in range(dim_count-1, -1, -1): #reverse iteration
values[i] = value%max_dim[i]
value /= max_dim[i]
return values
def convert_multi_to_single(values, max_dim):
dim_count = len(max_dim)
value = 0
length_of_dimension = 1
for i in range(dim_count-1, -1, -1): #reverse iteration
value += values[i]*length_of_dimension
length_of_dimension *= max_dim[i]
return value
Since these functions are inverses of each other, here is the output:
>>> convert_single_to_multi(convert_multi_to_single([1,4,6,7],[23,45,32,14]),[23,45,32,14])
[1, 4, 6, 7]
>>> convert_multi_to_single(convert_single_to_multi(21343,[23,45,32,14]),[23,45,32,14])
21343
If you are concerned about performance issues then you can use some libraries like pandas, numpy, etc.
A:
n1=np.arange(90).reshape((3,3,-1))
print(n1)
print(n1.shape)
A:
def n_arr(n, default=0, size=1):
    if n == 0:  # compare values with ==, not "is"
        return default

    return [n_arr(n-1, default, size) for _ in range(size)]

arr = n_arr(3, 42, 3)
assert arr[2][2][2] == 42
A:
I just want to note that
distance = [[[0 for k in range(n)] for j in range(n)] for i in range(n)]
can be shortened to
distance = [[[0] * n for j in range(n)] for i in range(n)]
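A quick hedged check that this short form still creates independent inner lists (unlike the pure * trick):
n = 2
distance = [[[0] * n for j in range(n)] for i in range(n)]
distance[0][0][0] = 1
print(distance)  # [[[1, 0], [0, 0]], [[0, 0], [0, 0]]] - only one cell changed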
Tags: arrays, multidimensional-array, python, python-2.7
Q:
RPS game not working as expected (code from the website I posted)
ROCK, PAPER, SCISSORS
0 Wins,0 Losses, 0 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
P
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
S
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
Q
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
p
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
r
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
p
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
r
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
ss
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
Enter your move: (r)ock (p)aper (s)cissors or (q)uits
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
s
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
This is the result of executing the code I wrote for the RPS game; as you see, it does not move on to the next step.
import random, sys
print('ROCK, PAPER, SCISSORS')
#These variables keep track of the number of wins, losses, and ties.
wins = 0
losses = 0
ties = 0
while True: # The main game loop.
print('%s Wins, %s Losses, %s Ties' % (wins, losses, ties))
while True: # The player input loop.
print('Enter your move: (r)ock (p)aper (s)cissors or (q)uit')
playerMove = input()
if playerMove == 'q':
sys.exit() # Quit the program.
if playerMove == 'r' or playerMove == 'p' or playerMove == 's':
break # Break out of the player input loop.
print('Type one of r, p, s, or q.')
# Display what the player chose:
if playerMove == 'r':
print('ROCK versus...')
elif playerMove == 'p':
print('PAPER versus...')
elif playerMove == 's':
print('SCISSORS versus...')
# Display what the computer chose:
randomNumber = random.randiant(1,3)
if randomNumber == 1:
computerMove = 'r'
print('ROCK')
elif randomNumber == 2:
computerMove = 'p'
print('PAPER')
elif randomNumber == 3:
computerMove = 's'
print('SCISSORS')
# Display and record the win/loss/tie:
if playerMove == computerMove:
print('It is a tie!')
ties = ties + 1
elif playerMove == 'r' and computerMove == 's':
print('You win!')
wins = wins + 1
elif playerMove == 'p' and computerMove == 'r':
print('You win!')
wins = wins + 1
elif playerMove == 's' and computerMove == 'p':
print('You win!')
wins = wins + 1
elif playerMove == 'r' and computerMove == 'p':
print('You lose!')
losses = losses + 1
elif playerMove == 'p' and computerMove == 's':
print('You lose!')
losses = losses + 1
elif playerMove == 's' and computerMove == 'r':
print('You lose!')
losses = losses + 1
Please refer to this website, 'https://automatetheboringstuff.com/2e/chapter2/'; it is from the book 'Automate the Boring Stuff with Python'. The section at the bottom titled "A Short Program: Rock, Paper, Scissors" shows how to program the RPS game. I put exactly the same source code into the file editor from the above website, but the game does not work as expected in the interactive shell.
A:
Your solution with a few changes to make it work.
Reindented some parts :
parts of the code were unreachable : the player could never enter his move
all the game logic was outside of the loop
Note the new position of if playerMove == "r" that is now at the same indentation level as if playerMove == "q".
Replaced randiant() that doesn't exist with randint().
Indentation is very important in Python because there are no characters to delimit blocks as brackets in other languages.
Control structures (if, elif, else) and loops depend on indentation.
import random, sys
print("ROCK, PAPER, SCISSORS")
# These variables keep track of the number of wins, losses, and ties.
wins = 0
losses = 0
ties = 0
while True: # The main game loop.
print("%s Wins, %s Losses, %s Ties" % (wins, losses, ties))
while True: # The player input loop.
print("Enter your move: (r)ock (p)aper (s)cissors or (q)uit")
playerMove = input()
if playerMove == "q":
sys.exit() # Quit the program.
if playerMove == "r" or playerMove == "p" or playerMove == "s":
break # Break out of the player input loop.
print("Type one of r, p, s, or q.")
# Display what the player chose:
if playerMove == "r":
print("ROCK versus...")
elif playerMove == "p":
print("PAPER versus...")
elif playerMove == "s":
print("SCISSORS versus...")
# Display what the computer chose:
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerMove = "r"
print("ROCK")
elif randomNumber == 2:
computerMove = "p"
print("PAPER")
elif randomNumber == 3:
computerMove = "s"
print("SCISSORS")
# Display and record the win/loss/tie:
if playerMove == computerMove:
print("It is a tie!")
ties = ties + 1
elif playerMove == "r" and computerMove == "s":
print("You win!")
wins = wins + 1
elif playerMove == "p" and computerMove == "r":
print("You win!")
wins = wins + 1
elif playerMove == "s" and computerMove == "p":
print("You win!")
wins = wins + 1
elif playerMove == "r" and computerMove == "p":
print("You lose!")
losses = losses + 1
elif playerMove == "p" and computerMove == "s":
print("You lose!")
losses = losses + 1
elif playerMove == "s" and computerMove == "r":
print("You lose!")
losses = losses + 1
Tags: python
Q:
AttributeError: 'Series' object has no attribute 'Mean_μg_L'
Why am I getting this error if the column name exists?
I have tried everything. I am out of ideas
A:
Since the AttributeError is raised at the first column with a name containing a mathematical symbol (µ), I would suggest these two solutions:
Use replace right before the loop to get rid of this special character
df.columns = df.columns.str.replace(r"_\wg_", "_ug_", regex=True)  # raw string for the regex
#change df to Table_1_1a_Tarawa_Terrace_System_1975_to_1985
Then inside the loop, use row.Mean_ug_L, .. instead of row.Mean_µg_L, ..
Use row["col_name"] (highly recommended) to refer to the column rather than row.col_name
for index, row in Table_1_1a_Tarawa_Terrace_System_1975_to_1985.iterrows():
    SQL_VALUES_Tarawa = (row["Chemicals"], row["Contamminant"], row["Mean_µg_L"], row["Median_µg_L"], row["Range_µg_L"], row["Num_Months_Greater_MCL"], row["Num_Months_Greater_100_µg_L"])
    cursor.execute(SQL_insert_Tarawa, SQL_VALUES_Tarawa)
    counting = cursor.rowcount
    print(counting, "Record added")
    conn.commit()
Tags: dataframe, pandas, python, sqlite
Q:
PyCharm doesn't recognize installed module
I'm having trouble using the 'requests' module on my Mac. I use Python 3.4 and I installed the 'requests' module via pip. I can verify this by running the installation again, and it shows me that the module is already installed.
15:49:29|mymac [~]:pip install requests
Requirement already satisfied (use --upgrade to upgrade): requests in /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages
Although I can import the 'requests' module via the interactive Python interpreter, trying to execute import requests in PyCharm yields the error 'No module named requests'. I checked my PyCharm Python interpreter settings and (I believe) it's set to the same python34 as used in my environment. However, I can't see the 'requests' module listed in PyCharm either.
It's obvious that I'm missing something here. Can you advise where I should look or what I should fix in order to get this module working? I was under the impression that when I install a module via pip in my environment, PyCharm will detect these changes. However, it seems something is broken on my side...
A:
If you are using PyCharm CE (Community Edition), then click on:
File -> Default Settings -> Project Interpreter
See the + sign at the bottom, click on it. It will open another dialog with a host of modules available. Select your package (e.g. requests) and PyCharm will do the rest.
MD
A:
In my case, using a pre-existing virtualenv did not work in the editor - all modules were marked as unresolved reference (running naturally works, as this is outside of the editor's config, just running an external process (not so easy for debugging)).
Turns out PyCharm did not add the site-packages directory... the fix is to manually add it.
Open File -> Settings -> Project Interpreter, pick "Show All..." (to edit the config) (1), pick your interpreter (2), and click "Show paths of selected interpreter" (3).
In that screen, manually add the "site-packages" directory of the virtual environment (4) (I've added the "Lib" also, for a good measure); once done and saved, they will turn up in the interpreter paths.
The other thing that won't hurt to do is select "Associate this virtual environment with the current project", in the interpreter's edit box.
A:
This issue arises when the package you're using was installed outside of the environment (Anaconda or virtualenv, for example). In order to have PyCharm recognize packages installed outside of your particular environment, execute the following steps:
Go to
Preferences -> Project -> Project Interpreter -> 3 dots -> Show All ->
Select relevant interpreter -> click on tree icon Show paths for the selected interpreter
Now check what paths are available and add the path that points to the package installation directory outside of your environment to the interpreter paths.
To find a package location use:
$ pip show gym
Name: gym
Version: 0.13.0
Summary: The OpenAI Gym: A toolkit for developing and comparing your reinforcement learning agents.
Home-page: https://github.com/openai/gym
Author: OpenAI
Author-email: [email protected]
License: UNKNOWN
Location: /usr/local/lib/python3.7/site-packages
...
Add the path specified under Location to the interpreter paths, here
/usr/local/lib/python3.7/site-packages
Then, let indexing finish and perhaps additionally reopen your project.
A:
Open the Python console of your PyCharm. Click on Rerun.
It will say something like the following on the very first line:
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py 52631 52632
In this scenario PyCharm is using the following interpreter:
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7
Now fire up a console and run the following command:
sudo /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 -m pip install <name of the package>
This should install your package :)
A:
PyCharm is unable to recognize installed local modules when the selected Python interpreter is wrong. It should be the one where your pip packages are installed, i.e. the virtual environment.
I had installed packages via pip on Windows. In PyCharm, they were not detected, nor was any other Python interpreter shown (only Python 3.6 is installed on my system).
I restarted the IDE. Now I was able to see the Python interpreter created in my virtual environment. Select that Python interpreter and all your packages will be shown and detected. Enjoy!
A:
Using dual python 2.7 and 3.4 with 2.7 as default, I've always used pip3 to install modules for the 3.4 interpreter, and pip to install modules for the 2.7 interpreter.
Try this:
pip3 install requests
A:
This is because you did not select two options while creating your project:
** inherit global site packages
** make available to all projects
Now you need to create a new project, and don't forget to tick these two options when selecting the project interpreter.
A:
The solution is easy (PyCharm 2021.2.3 Community Edition).
I'm on Windows but the user interface should be the same.
In the project tree, open External libraries > Python interpreter > venv > pyvenv.cfg.
Then change:
include-system-site-packages = false
to:
include-system-site-packages = true
A:
If you go to the PyCharm project interpreter -> click on one of the installed packages and hover -> you will see where PyCharm is installing the packages. This is where your package is supposed to be installed.
Now, if you did sudo -H pip3 install <package>,
pip3 installs it to a different directory, which is /usr/local/lib/site-packages.
Since that is a different directory from the one PyCharm knows about, your package does not show up in PyCharm.
Solution: just install the package using PyCharm by going to File->Settings->Project->Project Interpreter -> click on (+), search for the package you want to install, and click OK.
-> you will be prompted that the package was successfully installed and you will see it in PyCharm.
A:
Before going further, I want to point out how to configure a Python interpreter in PyCharm: [SO]: How to install Python using the "embeddable zip file" (@CristiFati's answer). Although the question is for Win, and has some particularities, configuring PyCharm is generic enough and should apply to any situation (with minor changes).
There are multiple possible reasons for this behavior.
1. Python instance mismatch
Happens when there are multiple Python instances (installed, VEnvs, Conda, custom built, ...) on a machine. Users think they're using one particular instance (with a set of properties (installed packages)), but in fact they are using another (with different properties), hence the confusion. It's harder to figure out things when the 2 instances have the same version (and somehow similar locations)
Happens mostly due to environmental configuration (whichever path comes 1st in ${PATH}, aliases (on Nix), ...)
It's not PyCharm specific (meaning that it's more generic, also happens outside it), but a typical PyCharm related example is different console interpreter and project interpreter, leading to confusion
The fix is to specify full paths (and pay attention to them) when using tools like Python, PIP, .... Check [SO]: How to install a package for a specific Python version on Windows 10? (@CristiFati's answer) for more details
This is precisely the reason why this question exists. There are 2 Python versions involved:
Project interpreter: /Library/Frameworks/Python.framework/Versions/3.4
Interpreter having the Requests module: /opt/local/Library/Frameworks/Python.framework/Versions/3.4
well, assuming the 2 paths are not somehow related (SymLinked), but in the latest OSX versions that I had the chance to check (Catalina, Big Sur, Monterey) this doesn't happen (by default)
2. Python's module search mechanism misunderstanding
According to [Python.Docs]: Modules - The Module Search Path:
When a module named spam is imported, the interpreter first searches for a built-in module with that name. These module names are listed in sys.builtin_module_names. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:
The directory containing the input script (or the current directory when no file is specified).
PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
The installation-dependent default (by convention including a site-packages directory, handled by the site module).
A module might be located in the current dir, or its path might be added to ${PYTHONPATH}. That could trick users into believing that the module is actually installed in the current Python instance ('s site-packages). But, when running the current Python instance from a different dir (or with a different ${PYTHONPATH}), the module would be missing, yielding lots of headaches
For a fix, check [SO]: How PyCharm imports differently than system command prompt (Windows) (@CristiFati's answer)
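As a quick sanity check for points #1 and #2 (a generic sketch of mine, not tied to this particular question), you can ask the interpreter itself which binary is running and where it searches for modules, and compare the output between the PyCharm console and your terminal:

import sys

print(sys.executable)  # full path of the Python binary actually in use
print(sys.path)        # directories searched when importing a module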
3. A PyCharm bug
Not very likely, but it could happen. An example (not related to this question): [SO]: PyCharm 2019.2 not showing Traceback on Exception (@CristiFati's answer)
To fix, follow one of the options from the above URL
4. A glitch
Not likely, but mentioning anyway. Due to some cause (e.g.: HW / SW failure), the system ended up in an inconsistent state, yielding all kinds of strange behaviors
Possible fixes:
Restart PyCharm
Restart the machine
Recreate the project (remove the .idea dir from the project)
Reset PyCharm settings: from menu select File -> Manage IDE Settings -> Restore Default Settings.... Check [JetBrains]: Configuring PyCharm settings or [JetBrains.IntelliJ-Support]: Changing IDE default directories used for config, plugins, and caches storage for more details
Reinstall PyCharm
Needless to say that the last 2 options should only be attempted as a last resort, and only by experts, as they might mess up other projects and not even fix the problem
Not quite related to the question, but posting a PyCharm related investigation from a while ago: [SO]: Run / Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition?.
A:
This did my head in as well, and it turns out the only thing I needed to do was RESTART PyCharm. Sometimes after you've installed a pip package, you can't load it into your project, even if pip shows it as installed in your Settings. Bummer.
A:
I got this issue when I created the project using Virtualenv.
The solution suggested by Neeraj Aggarwal worked for me. However, if you do not want to create a new project, then the following can resolve the issue.
Close the project
Find the file <Your Project Folder>/venv/pyvenv.cfg
Open this file with any text editor
Set the include-system-site-packages = true
Save it
Open the project
A:
For Anaconda:
Start Anaconda Navigator -> Enviroments -> "Your_Enviroment" -> Update Index -> Restart IDE.
Solved it for me.
A:
If anyone faces the same problem where they install Python packages but the PyCharm IDE doesn't show these packages, follow these steps:
Go to the project in the left side of the PyCharm IDE then
Click on the venv library then
Open the pyvenv.cfg file in any editor then
Change this piece of code (include-system-site-packages = false) from false to true
Then save it and close it and also close then pycharm then
Open PyCharm again and your problem is solved.
Thanks
A:
After pip installing everything I needed, I went to the interpreter and re-pointed it back to where it was already.
My case: python3.6 in /anaconda3/bin/python using virtualenv...
Additionally, before I hit the plus "+" sign to install a new package, I had to deselect the conda icon to the right of it. Seems like it would be the opposite, but only then did it recognize the packages I had/needed via query.
A:
In my case the packages were installed via setup.py + easy_install, and they ended up in *.egg directories in the site-packages dir, which can be recognized by Python but not PyCharm.
I removed them all and then reinstalled with pip install, and it worked after that. Luckily the project I was working on came with a requirements.txt file, so the command for it was:
pip install -r ./requirements.txt
A:
On Windows I had to cd into the venv folder and then into the scripts folder; then pip install module started to work
cd venv
cd scripts
pip install module
A:
I just ran into this issue in a brand new install/project, but I'm using the Python plugin for IntelliJ IDEA. It's essentially the same as PyCharm but the project settings are a little different. For me, the project was pointing to the right Python virtual environment but not even built-in modules were being recognized.
It turns out the SDK classpath was empty. I added paths for venv/lib/python3.8 and venv/lib/python3.8/site-packages and the issue was resolved. File->Project Structure and under Platform Settings, click SDKs, select your Python SDK, and make sure the class paths are there.
A:
pip install --user discord
above command solves my problem, just use the "--user" flag
A:
I fixed my particular issue by installing directly to the interpreter. Go to settings and hit the "+" below the in-use interpreter then search for the package and install. I believe I'm having the issue in the first place because I didn't set up with my interpreter correctly with my venv (not exactly sure, but this fixed it).
I was having issues with djangorestframework-simplejwt because it was the first package I hadn't installed to this interpreter from previous projects before starting the current one, but should work for any other package that isn't showing as imported. To reiterate though I think this is a workaround that doesn't solve the setup issue causing this.
A:
Instead of running pip install in Terminal -> Local, use Terminal -> Command Prompt.
See the image below:
pycharm_command_prompt_image
A:
If you are having issues with the underlying language server (i.e. PyCharm's language server), mark everything as root and create a new project. See details: https://stackoverflow.com/a/73418320/1601580 This seems to happen to me only when I install packages in editable mode with pip (i.e. pip install -e . or conda develop). Details: https://stackoverflow.com/a/73418320/1601580
A:
--WINDOWS--
if the PyCharm GUI package installer works fine for installing packages into your virtual environment but you cannot do the same in the terminal,
this is because you did not set up the virtual env in your terminal; instead, your terminal uses PowerShell, which doesn't use your virtual env
there should be (venv) before your command line as shown, instead of (PS)
if you have (PS), this means your terminal is using PowerShell instead of cmd
to fix this, click on the down arrow and select the command prompt
select command prompt
now you will get (venv); just type pip install <package name> and the package will be added to your virtual environment
A:
I just use the terminal option for output, and then it works. The terminal allows you to run the Python interpreter directly from the PyCharm platform.
| PyCharm doesn't recognize installed module | I'm having trouble with using 'requests' module on my Mac. I use python34 and I installed 'requests' module via pip. I can verify this via running installation again and it'll show me that module is already installed.
15:49:29|mymac [~]:pip install requests
Requirement already satisfied (use --upgrade to upgrade): requests in /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages
Although I can import 'requests' module via interactive Python interpreter, trying to execute 'import requests' in PyCharm yields error 'No module named requests'. I checked my PyCharm Python interpreter settings and (I believe) it's set to same python34 as used in my environment. However, I can't see 'requests' module listed in PyCharm either.
It's obvious that I'm missing something here. Can you guys advise where should I look or what should I fix in order to get this module working? I was living under impression that when I install module via pip in my environment, PyCharm will detect these changes. However, it seems something is broken on my side ...
| [
"If you are using PyCharms CE (Community Edition), then click on:\nFile->Default Settings->Project Interpretor\n\nSee the + sign at the bottom, click on it. It will open another dialog with a host of modules available. Select your package (e.g. requests) and PyCharm will do the rest.\nMD\n",
"In my case, using a pre-existing virtualenv did not work in the editor - all modules were marked as unresolved reference (running naturally works, as this is outside of the editor's config, just running an external process (not so easy for debugging)).\nTurns out PyCharm did not add the site-packages directory... the fix is to manually add it.\nOpen File -> Settings -> Project Interpreter, pick \"Show All...\" (to edit the config) (1), pick your interpreter (2), and click \"Show paths of selected interpreter\" (3).\nIn that screen, manually add the \"site-packages\" directory of the virtual environment (4) (I've added the \"Lib\" also, for a good measure); once done and saved, they will turn up in the interpreter paths.\n\nThe other thing that won't hurt to do is select \"Associate this virtual environment with the current project\", in the interpreter's edit box.\n",
"This issue arises when the package you're using was installed outside of the environment (Anaconda or virtualenv, for example). In order to have PyCharm recognize packages installed outside of your particular environment, execute the following steps:\nGo to\nPreferences -> Project -> Project Interpreter -> 3 dots -> Show All ->\nSelect relevant interpreter -> click on tree icon Show paths for the selected interpreter\nNow check what paths are available and add the path that points to the package installation directory outside of your environment to the interpreter paths.\nTo find a package location use:\n$ pip show gym\nName: gym\nVersion: 0.13.0\nSummary: The OpenAI Gym: A toolkit for developing and comparing your reinforcement learning agents.\nHome-page: https://github.com/openai/gym\nAuthor: OpenAI\nAuthor-email: [email protected]\nLicense: UNKNOWN\nLocation: /usr/local/lib/python3.7/site-packages\n...\n\nAdd the path specified under Location to the interpreter paths, here\n\n/usr/local/lib/python3.7/site-packages\n\nThen, let indexing finish and perhaps additionally reopen your project.\n",
"Open python console of your pyCharm. Click on Rerun.\n It will say something like following on the very first line\n/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py 52631 52632\n\nin this scenario pyCharm is using following interpretor\n/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 \n\nNow fire up console and run following command\nsudo /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 -m pip install <name of the package>\n\nThis should install your package :)\n",
"Pycharm is unable to recognize installed local modules, since python interpreter selected is wrong. It should be the one, where your pip packages are installed i.e. virtual environment.\nI had installed packages via pip in Windows. In Pycharm, they were neither detected nor any other Python interpreter was being shown (only python 3.6 is installed on my system).\n\nI restarted the IDE. Now I was able to see python interpreter created in my virtual environment. Select that python interpreter and all your packages will be shown and detected. Enjoy!\n\n",
"Using dual python 2.7 and 3.4 with 2.7 as default, I've always used pip3 to install modules for the 3.4 interpreter, and pip to install modules for the 2.7 interpreter.\nTry this:\npip3 install requests\n",
"This is because you have not selected two options while creating your project:-\n** inherit global site packages\n** make available to all projects\nNow you need to create a new project and don't forget to tick these two options while selecting project interpreter.\n",
"The solution is easy (PyCharm 2021.2.3 Community Edition).\nI'm on Windows but the user interface should be the same.\nIn the project tree, open External libraries > Python interpreter > venv > pyvenv.cfg.\nThen change:\ninclude-system-site-packages = false\n\nto:\ninclude-system-site-packages = true\n\n\n",
"\nIf you go to pycharm project interpreter -> clicked on one of the installed packages then hover -> you will see where pycharm is installing the packages. This is where you are supposed to have your package installed.\nNow if you did sudo -H pip3 install <package>\npip3 installs it to different directory which is /usr/local/lib/site-packages\n\nsince it is different directory from what pycharm knows hence your package is not showing in pycharm.\nSolution: just install the package using pycharm by going to File->Settings->Project->Project Interpreter -> click on (+) and search the package you want to install and just click ok.\n-> you will be prompted package successfully installed and you will see it pycharm.\n",
"Before going further, I want to point out how to configure a Python interpreter in PyCharm: [SO]: How to install Python using the \"embeddable zip file\" (@CristiFati's answer). Although the question is for Win, and has some particularities, configuring PyCharm is generic enough and should apply to any situation (with minor changes).\nThere are multiple possible reasons for this behavior.\n1. Python instance mismatch\n\nHappens when there are multiple Python instances (installed, VEnvs, Conda, custom built, ...) on a machine. Users think they're using one particular instance (with a set of properties (installed packages)), but in fact they are using another (with different properties), hence the confusion. It's harder to figure out things when the 2 instances have the same version (and somehow similar locations)\n\nHappens mostly due to environmental configuration (whichever path comes 1st in ${PATH}, aliases (on Nix), ...)\n\nIt's not PyCharm specific (meaning that it's more generic, also happens outside it), but a typical PyCharm related example is different console interpreter and project interpreter, leading to confusion\n\nThe fix is to specify full paths (and pay attention to them) when using tools like Python, PIP, .... Check [SO]: How to install a package for a specific Python version on Windows 10? (@CristiFati's answer) for more details\n\nThis is precisely the reason why this question exists. There are 2 Python versions involved:\n\nProject interpreter: /Library/Frameworks/Python.framework/Versions/3.4\n\nInterpreter having the Requests module: /opt/local/Library/Frameworks/Python.framework/Versions/3.4\n\n\nwell, assuming the 2 paths are not somehow related (SymLinked), but in latest OSX versions that I had the chance to check (Catalina, Big Sur, Monterey) this doesn't happen (by default)\n\n\n2. Python's module search mechanism misunderstanding\n\nAccording to [Python.Docs]: Modules - The Module Search Path:\n\nWhen a module named spam is imported, the interpreter first searches for a built-in module with that name. These module names are listed in sys.builtin_module_names. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:\n\nThe directory containing the input script (or the current directory when no file is specified).\n\nPYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).\n\nThe installation-dependent default (by convention including a site-packages directory, handled by the site module).\n\n\n\nA module might be located in the current dir, or its path might be added to ${PYTHONPATH}. That could trick users into making them believe that the module is actually installed in the current Python instance ('s site-packages). But, when running the current Python instance from a different dir (or with different ${PYTHONPATH}) the module would be missing, yielding lots of headaches\n\nFor a fix, check [SO]: How PyCharm imports differently than system command prompt (Windows) (@CristiFati's answer)\n\n\n3. A PyCharm bug\n\nNot very likely, but it could happen. An example (not related to this question): [SO]: PyCharm 2019.2 not showing Traceback on Exception (@CristiFati's answer)\n\nTo fix, follow one of the options from the above URL\n\n\n4. A glitch\n\nNot likely, but mentioning anyway. 
Due to some cause (e.g.: HW / SW failure), the system ended up in an inconsistent state, yielding all kinds of strange behaviors\n\nPossible fixes:\n\nRestart PyCharm\n\nRestart the machine\n\nRecreate the project (remove the .idea dir from the project)\n\nReset PyCharm settings: from menu select File -> Manage IDE Settings -> Restore Default Settings.... Check [JetBrains]: Configuring PyCharm settings or [JetBrains.IntelliJ-Support]: Changing IDE default directories used for config, plugins, and caches storage for more details\n\nReinstall PyCharm\n\n\nNeedless to say that the last 2 options should only be attempted as a last resort, and only by experts, as they might mess up other projects and not even fix the problem\n\n\nNot quite related to the question, but posting a PyCharm related investigation from a while ago: [SO]: Run / Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition?.\n",
"This did my head in as well, and turns out, the only thing I needed to do is RESTART Pycharm. Sometimes after you've installed the pip, you can't load it into your project, even if the pip shows as installed in your Settings. Bummer.\n",
"I got this issue when I created the project using Virtualenv.\nSolution suggested by Neeraj Aggarwal worked for me. However, if you do not want to create a new project then the following can resolve the issue.\n\nClose the project\nFind the file <Your Project Folder>/venv/pyvenv.cfg\nOpen this file with any text editor\nSet the include-system-site-packages = true\nSave it\nOpen the project\n\n",
"For Anaconda:\nStart Anaconda Navigator -> Enviroments -> \"Your_Enviroment\" -> Update Index -> Restart IDE.\nSolved it for me.\n",
"If any one faces the same problem that he/she installs the python packages but the PyCharm IDE doesn't shows these packages then following the following steps:\n\nGo to the project in the left side of the PyCharm IDE then\nClick on the venv library then\nOpen the pyvenv.cfg file in any editor then\nChange this piece of code (include-system-site-packages = flase) from false to true\nThen save it and close it and also close then pycharm then\nOpen PyCharm again and your problem is solved.\nThanks\n\n",
"After pip installing everything I needed. I went to the interpreter and re-pointed it back to where it was at already. \n My case: python3.6 in /anaconda3/bin/python using virtualenv...\nAdditionally, before I hit the plus \"+\" sign to install a new package. I had to deselect the conda icon to the right of it. Seems like it would be the opposite, but only then did it recognize the packages I had/needed via query.\n",
"In my case the packages were installed via setup.py + easy_install, and the they ends up in *.egg directories in site_package dir, which can be recognized by python but not pycharm.\nI removed them all then reinstalled with pip install and it works after that, luckily the project I was working on came up with a requirements.txt file, so the command for it was: \npip install -r ./requirement.txt\n",
"On windows I had to cd into the venv folder and then cd into the scripts folder, then pip install module started to work\ncd venv\ncd scripts\npip install module\n\n",
"I just ran into this issue in a brand new install/project, but I'm using the Python plugin for IntelliJ IDEA. It's essentially the same as PyCharm but the project settings are a little different. For me, the project was pointing to the right Python virtual environment but not even built-in modules were being recognized.\nIt turns out the SDK classpath was empty. I added paths for venv/lib/python3.8 and venv/lib/python3.8/site-packages and the issue was resolved. File->Project Structure and under Platform Settings, click SDKs, select your Python SDK, and make sure the class paths are there. \n\n",
"pip install --user discord\n\nabove command solves my problem, just use the \"--user\" flag\n",
"I fixed my particular issue by installing directly to the interpreter. Go to settings and hit the \"+\" below the in-use interpreter then search for the package and install. I believe I'm having the issue in the first place because I didn't set up with my interpreter correctly with my venv (not exactly sure, but this fixed it).\nI was having issues with djangorestframework-simplejwt because it was the first package I hadn't installed to this interpreter from previous projects before starting the current one, but should work for any other package that isn't showing as imported. To reiterate though I think this is a workaround that doesn't solve the setup issue causing this.\n\n",
"instead of running pip install in the terminal -> local use terminal -> command prompt\nsee below image\npycharm_command_prompt_image\n\n",
"If you are having issues with the underlying (i.e. pycharm's languge server) mark everything as root and create a new project. See details: https://stackoverflow.com/a/73418320/1601580 this seems to happy to me only when I install packages as in editable mode with pip (i.e. pip install -e . or conda develop). Details: https://stackoverflow.com/a/73418320/1601580\n",
"--WINDOWS--\nif using Pycharm GUI package installer works fine for installing packages for your virtual environment but you cannot do the same in the terminal,\nthis is because you did not setup virtual env in your terminal, instead, your terminal uses Power Shell which doesn't use your virtual env\nthere should be (venv) before you're command line as shown instead of (PS) \nif you have (PS), this means your terminal is using Power Shell instead of cmd\nto fix this, click on the down arrow and select the command prompt\nselect command prompt\nnow you will get (venv) and just type pip install #package name# and the package will be added to your virtual environment\n",
"i just use the terminal option for output then it works. terminal allows to run python interpreter directly from pycharm platform\n"
] | [
43,
42,
12,
11,
8,
7,
6,
3,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"In your pycharm terminal run pip/pip3 install package_name\n"
] | [
-2
] | [
"anaconda",
"pip",
"pycharm",
"python",
"virtualenv"
] | stackoverflow_0031235376_anaconda_pip_pycharm_python_virtualenv.txt |
Q:
Tkinter: Calling function when button is pressed but i am getting attribute error 'Application' object has no attribute 'hi'
from tkinter import *
import sqlite3

conn = sqlite3.connect('database.db')
c = conn.cursor()

class Application:
    def __init__(self, master):
        self.master = master
        self.left = Frame(master, width=800, height=720, bg='lightgreen')
        self.left.pack(side=LEFT)
        self.right = Frame(master, width=400, height=720, bg='lightblue')
        self.right.pack(side=RIGHT)
        self.heading=Label(self.left,text="HM hospital Appointments",font=('arial 40 bold'),fg='white',bg='lightgreen')
        self.heading.place(x=0,y=0)
        self.name=Label(self.left,text="Patient Name",font=('arial 20 italic'),fg='black',bg='lightgreen')
        self.name.place(x=0,y=100)
        self.age=Label(self.left,text="Age",font=('arial 20 italic'),fg='black',bg='lightgreen')
        self.age.place(x=0,y=150)
        self.gender=Label(self.left,text="Gender",font=('arial 20 italic'),fg='black',bg='lightgreen')
        self.gender.place(x=0,y=200)
        self.location=Label(self.left,text="Location",font=('arial 20 italic'),fg='black',bg='lightgreen')
        self.location.place(x=0,y=250)
        self.time=Label(self.left,text="Appointment Time",font=('arial 20 italic'),fg='black',bg='lightgreen')
        self.time.place(x=0,y=300)
        self.name_ent=Entry(self.left,width=30)
        self.name_ent.place(x=250,y=110)
        self.age_ent=Entry(self.left,width=30)
        self.age_ent.place(x=250,y=160)
        self.gender_ent=Entry(self.left,width=30)
        self.gender_ent.place(x=250,y=210)
        self.location_ent=Entry(self.left,width=30)
        self.location_ent.place(x=250,y=260)
        self.time_ent=Entry(self.left,width=30)
        self.time_ent.place(x=250,y=310)
        self.submit=Button(self.left,text="Add appointment",width=20,height=2,bg='steelblue',command=self.hi)
        self.submit.place(x=300,y=350)

        def hi(self):
            self.val1=self.name_ent.get()

root = Tk()
b = Application(root)
root.geometry("1200x720+0+0")
root.resizable(False, False)
root.mainloop()
I watched many videos to solve this but couldn't find the right answer, so please help me.
A:
To fix the error you are seeing, you need to move the definition of the hi function so that it is defined directly inside the Application class, not inside __init__. As written, hi is only a local function inside __init__, so the instance never gets a hi attribute and self.hi raises AttributeError. Like this:
class Application:
    def __init__(self, master):
        # code for the rest of the __init__ method

        self.submit=Button(self.left,text="Add appointment",width=20,height=2,bg='steelblue',command=self.hi)

    ...
    ...
    def hi(self):
        self.val1=self.name_ent.get()
| Tkinter: Calling function when button is pressed but i am getting attribute error 'Application' object has no attribute 'hi' | from tkinter import *
import sqlite3
conn = sqlite3.connect('database.db')
c = conn.cursor()
class Application:
def __init__(self, master):
self.master = master
self.left = Frame(master, width=800, height=720, bg='lightgreen')
self.left.pack(side=LEFT)
self.right = Frame(master, width=400, height=720, bg='lightblue')
self.right.pack(side=RIGHT)
self.heading=Label(self.left,text="HM hospital Appointments",font=('arial 40 bold'),fg='white',bg='lightgreen')
self.heading.place(x=0,y=0)
self.name=Label(self.left,text="Patient Name",font=('arial 20 italic'),fg='black',bg='lightgreen')
self.name.place(x=0,y=100)
self.age=Label(self.left,text="Age",font=('arial 20 italic'),fg='black',bg='lightgreen')
self.age.place(x=0,y=150)
self.gender=Label(self.left,text="Gender",font=('arial 20 italic'),fg='black',bg='lightgreen')
self.gender.place(x=0,y=200)
self.location=Label(self.left,text="Location",font=('arial 20 italic'),fg='black',bg='lightgreen')
self.location.place(x=0,y=250)
self.time=Label(self.left,text="Appointment Time",font=('arial 20 italic'),fg='black',bg='lightgreen')
self.time.place(x=0,y=300)
self.name_ent=Entry(self.left,width=30)
self.name_ent.place(x=250,y=110)
self.age_ent=Entry(self.left,width=30)
self.age_ent.place(x=250,y=160)
self.gender_ent=Entry(self.left,width=30)
self.gender_ent.place(x=250,y=210)
self.location_ent=Entry(self.left,width=30)
self.location_ent.place(x=250,y=260)
self.time_ent=Entry(self.left,width=30)
self.time_ent.place(x=250,y=310)
self.submit=Button(self.left,text="Add appointment",width=20,height=2,bg='steelblue',command=self.hi)
self.submit.place(x=300,y=350)
def hi(self):
self.val1=self.name_ent.get()
root = Tk()
b = Application(root)
root.geometry("1200x720+0+0")
root.resizable(False, False)
root.mainloop()
I looked many videos to solve it but can't find the right one so please help me.
| [
"To fix the error you are seeing, you need to move the definition of the hi function inside the Application class and not inside __init__, like this:\nclass Application:\n def __init__(self, master):\n # code for the rest of the __init__ method\n\n self.submit=Button(self.left,text=\"Add appointment\",width=20,height=2,bg='steelblue',command=self.hi)\n\n ...\n ...\n def hi(self):\n self.val1=self.name_ent.get()\n\n"
] | [
4
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074673731_python_tkinter.txt |
Q:
Can you explain how the feature is extracted from the following code of CNN
How are the image features extracted by the following convolutional neural network code?
import tensorflow as tf
from tensorflow.keras.utils import img_to_array

df['PubChem_ID'] = df['PubChem_ID'].apply(str)

df_image = []
for i in tqdm(range(df.shape[0])):
    img = image.load_img('/content/drive/MyDrive/3D Conformer/Conformer/'+df['PubChem_ID'][i]+'.png', target_size=(256,256,3))
    img = image.img_to_array(img)
    img = img/255
    df_image.append(img)

X = np.array(df_image)
Each image is converted into a 256 x 256 x 3 matrix with three layers (RGB), where each layer contains 256 x 256 values.
y = np.array(df.drop(['PubChem_ID'],axis=1))
model = Sequential()
model.add(Convolution2D(64, kernel_size=(3, 3),padding='same',input_shape=(256,256,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(29))
model.add(Activation('sigmoid'))
A:
In the given code, a convolutional neural network (CNN) is used to extract image features from a dataset of images. The images in the dataset are first converted to a size of 256 x 256 x 3, where the 3 represents the 3 color channels (red, green, and blue) of the image.
The image features are extracted using the following steps:
The Convolution2D layer applies a set of filters to the input image, each of which is a 3 x 3 matrix of weights. This layer performs a convolution operation on the input image to create a new feature map.
The Activation layer applies a non-linear activation function (in this case, the ReLU function) to the output of the Convolution2D layer. This allows the network to learn more complex patterns in the data.
The MaxPooling2D layer performs a max pooling operation on the output of the Activation layer, which reduces the spatial dimensions of the feature map. This helps to reduce the number of parameters in the model and to prevent overfitting.
The Dropout layer randomly sets a fraction of the output values to zero, which helps to prevent overfitting by reducing the dependence on any one feature.
The Flatten layer flattens the output of the Dropout layer into a single vector of values. This allows the output to be fed into the next layer of the network.
The Dense layer applies a linear transformation to the flattened feature vector, which produces a 29-dimensional output vector. This layer represents the final set of image features extracted by the network.
The Activation layer applies the sigmoid activation function to the output of the Dense layer, which produces a final output vector of probabilities. This output can be used for classification or other tasks.
Overall, the given code uses a CNN to extract a set of 29 image features from the input images. These features are learned by the network during training and can be used to represent the visual content of the images in a compact and useful form.
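To make that last point concrete, here is a minimal sketch (my own illustration, not part of the question) of how you could read out the 29-dimensional feature vector for each image after training, by building a sub-model that stops at the Dense layer:

from tensorflow.keras.models import Model

# model.layers[-1] is the sigmoid Activation, so model.layers[-2] is the Dense(29) layer
feature_extractor = Model(inputs=model.input, outputs=model.layers[-2].output)
features = feature_extractor.predict(X)  # shape: (num_images, 29)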
| Can you explain how the feature is extracted from the following code of CNN | How the Image Features are extracted from the following convolutional neural network code
import tensorflow as tf
from tensorflow.keras.utils import img_to_array
df['PubChem_ID'] = df['PubChem_ID'].apply(str)
df_image = []
for i in tqdm(range(df.shape[0])):
img = image.load_img('/content/drive/MyDrive/3D Conformer/Conformer/'+df['PubChem_ID']
[i]+'.png',target_size=(256,256,3))
img = image.img_to_array(img)
img = img/255
df_image.append(img)
X = np.array(df_image)
The image is converted into the size 256 x 256 x 3 in matrix with three layers (RGB), where each layer contains 256 x 256 values.
y = np.array(df.drop(['PubChem_ID'],axis=1))
model = Sequential()
model.add(Convolution2D(64, kernel_size=(3, 3),padding='same',input_shape=(256,256,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(29))
model.add(Activation('sigmoid'))
| [
"In the given code, a convolutional neural network (CNN) is used to extract image features from a dataset of images. The images in the dataset are first converted to a size of 256 x 256 x 3, where the 3 represents the 3 color channels (red, green, and blue) of the image.\nThe image features are extracted using the following steps:\nThe Convolution2D layer applies a set of filters to the input image, each of which is a 3 x 3 matrix of weights. This layer performs a convolution operation on the input image to create a new feature map.\nThe Activation layer applies a non-linear activation function (in this case, the ReLU function) to the output of the Convolution2D layer. This allows the network to learn more complex patterns in the data.\nThe MaxPooling2D layer performs a max pooling operation on the output of the Activation layer, which reduces the spatial dimensions of the feature map. This helps to reduce the number of parameters in the model and to prevent overfitting.\nThe Dropout layer randomly sets a fraction of the output values to zero, which helps to prevent overfitting by reducing the dependence on any one feature.\nThe Flatten layer flattens the output of the Dropout layer into a single vector of values. This allows the output to be fed into the next layer of the network.\nThe Dense layer applies a linear transformation to the flattened feature vector, which produces a 29-dimensional output vector. This layer represents the final set of image features extracted by the network.\nThe Activation layer applies the sigmoid activation function to the output of the Dense layer, which produces a final output vector of probabilities. This output can be used for classification or other tasks.\nOverall, the given code uses a CNN to extract a set of 29 image features from the input images. These features are learned by the network during training and can be used to represent the visual content of the images in a compact and useful form.\n"
] | [
1
] | [] | [] | [
"conv_neural_network",
"feature_extraction",
"image_preprocessing",
"python"
] | stackoverflow_0074673792_conv_neural_network_feature_extraction_image_preprocessing_python.txt |
Q:
Command does not work during discord bot creation
I typed !mycommand2 [name] [role] because nothing came out when I typed the command !캐릭터생성 [name] [role], but it's still the same. Why? Also, what is the role of description (is it like an annotation, explaining to the developer what the command does without any functional role)? I also wonder about the hidden option of a command.
char: I want to make an instance..... char.py contains the class char.
import discord, asyncio
import char
from discord.ext import commands

intents=discord.Intents.all()
client = discord.Client(intents=intents)
bot = commands.Bot(command_prefix='!',intents=intents,help_command=None)

@client.event
async def on_ready():
    await client.change_presence(status=discord.Status.online, activity=discord.Game("언성듀엣!"))

@bot.command(name="테스트", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help
async def mycommand1(ctx, argument1, argument2):
    await ctx.channel.send ("{} | {}, Hello".format(ctx.author, ctx.author.mention))
    await ctx.author.send ("{} | {}, User, Hello".format(ctx.author, ctx.author.mention))

char_num = 1

@bot.command(name="캐릭터생성", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help
async def mycommand2(ctx, context1, context2):
    global char_num
    globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.author.name)
    char_num+=1
    await ctx.message.channel.send ("done", context1,"!")

client.run('-')
A:
Change the function name to the command name
async def urcommandname(ctx,arg1,arg2):
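A minimal sketch of that suggestion (my own illustration, reusing the bot object from the question): if you omit the name= argument, the coroutine's own name becomes the command name, so the two can never drift apart:

@bot.command(description="테스트용 함수")  # no name= given: the function name is the command name
async def 캐릭터생성(ctx, name, role):
    await ctx.send("done {}!".format(name))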
| Command does not work during discord bot creation | I typed !mycommand2 [name] [role] because it didn't come out even when I typed command !캐릭터생성 [name] [role], but it's still the same. Why? And description's role(Is it like an annotation? Explain to the developer what command this is without a role?) and...I also wonder about command hidden.
char = I want to mack a instance.....char.py has class char.
import discord, asyncio
import char
from discord.ext import commands
intents=discord.Intents.all()
client = discord.Client(intents=intents)
bot = commands.Bot(command_prefix='!',intents=intents,help_command=None)
@client.event
async def on_ready():
await client.change_presence(status=discord.Status.online, activity=discord.Game("언성듀엣!"))
@bot.command(name="테스트", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help
async def mycommand1(ctx, argument1, argument2):
await ctx.channel.send ("{} | {}, Hello".format(ctx.author, ctx.author.mention))
await ctx.author.send ("{} | {}, User, Hello".format(ctx.author, ctx.author.mention))
char_num = 1
@bot.command(name="캐릭터생성", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help
async def mycommand2(ctx, context1, context2):
global char_num
globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.author.name)
char_num+=1
await ctx.message.channel.send ("done", context1,"!")
client.run('-')
| [
"Change the function name to the command name\nasync def urcommandname(ctx,arg1,arg2):\n\n"
] | [
0
] | [] | [] | [
"discord",
"python"
] | stackoverflow_0074673267_discord_python.txt |
Q:
Python(sympy) : How to graph smoothly in 2nd ODE solution with Sympy?
I'm studying structural dynamic analysis.
I solved a problem: 1 degree of freedom.
The question is
m*y'' + c*y' + k*y = 900 sin(5.3x)
m=6938.78, c=5129.907, k=379259, and y is a function of x.
I solved for its response using Python and the SymPy library.
I drew the response with pyplot, but its shape is not smooth, as shown below.
[image: jagged plot of the response]
How can I draw the response smoothly?
I tried to draw it smoothly by substituting each x value into y with numpy, but I could not insert x into sin(5.3x).
from sympy import *
import matplotlib.pyplot as plt
x, y=symbols("x, y")
f=symbols('f',cls=Function)
y=f(x)
eq=Eq( 6938.78*diff(y,x,2) + 5129.907*diff(y,x) + 379259*y-900*sin(5.3*x),0)
eq_done=dsolve(eq,y, ics={ f(0):0, diff(y,x).subs(x,0):0 } )
plot(eq_done.rhs,(x,0,10))
A:
To get a smoother line you can turn off the adaptive algorithm and set the number of points per line:
plot(eq_done.rhs,(x,0,10), adaptive=False, nb_of_points=1000)
Also, the help() function is your friend, as it allows to quickly access the documentation of a particular function. Execute help(plot) to learn more about the plot command.
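Another option (a sketch of my own, using the matplotlib import already present in the question) is to lambdify the symbolic solution and evaluate it on a dense numpy grid yourself, which gives full control over the sampling:

import numpy as np

# Turn the symbolic solution into a fast numerical function of x,
# then evaluate it on 1000 points and plot with matplotlib.
f_num = lambdify(x, eq_done.rhs, modules='numpy')
xs = np.linspace(0, 10, 1000)
plt.plot(xs, f_num(xs))
plt.show()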
| Python(sympy) : How to graph smoothly in 2nd ODE solution with Sympy? | I'm studing about structural dynamic analysis.
I solved a problem : 1 degree of freedom
The question is
m*y'' + cy' + ky = 900 sin(5.3x)
m=6938.78, c=5129.907, k=379259, y is the function of x
I solved it's response using by Python and Sympy library.
I drew the response by pyplot. But it's shape is not smooth like below
enter image description here
How can I draw the respone smoothly?
I tried to draw smoothly by substituting each x to y by numpy, but could not insert x into sin(5.3x).
from sympy import *
import matplotlib.pyplot as plt
x, y=symbols("x, y")
f=symbols('f',cls=Function)
y=f(x)
eq=Eq( 6938.78*diff(y,x,2) + 5129.907*diff(y,x) + 379259*y-900*sin(5.3*x),0)
eq_done=dsolve(eq,y, ics={ f(0):0, diff(y,x).subs(x,0):0 } )
plot(eq_done.rhs,(x,0,10))
| [
"To get a smoother line you can turn off the adaptive algorithm and set the number of points per line:\nplot(eq_done.rhs,(x,0,10), adaptive=False, nb_of_points=1000)\n\nAlso, the help() function is your friend, as it allows to quickly access the documentation of a particular function. Execute help(plot) to learn more about the plot command.\n"
] | [
0
] | [] | [] | [
"graphing",
"python",
"sympy"
] | stackoverflow_0074664776_graphing_python_sympy.txt |
Q:
Visual Studio Code does not detect Virtual Environments
Visual Studio Code does not detect virtual environments. I run vscode in the folder where the venv folder is located; when I try to select the kernel in vscode I can see the main environment and one located elsewhere on the disk.
Jupyter running in vscode also doesn't see this environment. I have installed ipykernel in this environment. I tried to reinstall vscode and the Python extension.
I tried to set the path in settings.json inside .vscode:
{
"python.pythonPath": ".\\venv\\Scripts\\python.exe"
}
Windows 10
Python 3.6.7 (64-bit)
VSCode 1.54.3
A:
In VSCode open your command palette — Ctrl+Shift+P by default
Look for Python: Select Interpreter
In Select Interpreter choose Enter interpreter path... and then Find...
Navigate to your venv folder — eg, ~/pyenvs/myenv/ or \Users\Foo\Bar\PyEnvs\MyEnv\
In the virtual environment folder choose <your-venv-name>/bin/python or <your-venv-name>/bin/python3
The issue is that VSCode's Python extension by default uses the main python or python3 program while venv effectively creates a "new" python/python3 executable (that is kind of the point of venv) so the extension does not have access to anything (available modules, namespaces, etc) that you have installed through a venv since the venv specific installations are not available to the main Python interpreter (again, this is by design—like how applications installed in a VM are not available to the host OS)
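A quick sanity check (a generic sketch, not from the original answer): with the venv activated in a terminal, print the exact interpreter path, and hand that same path to VSCode in the steps above:

python -c "import sys; print(sys.executable)"
# e.g. \Users\Foo\Bar\PyEnvs\MyEnv\Scripts\python.exe on Windows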
A:
1. In VSCode open your command palette — Ctrl+Shift+P by default
2. Look for Python: Select Interpreter
3. In Select Interpreter choose Enter interpreter path... and then Find...
4. Locate the env folder, open the Scripts folder, and choose python or python3
windows - venv
A:
OK, I found a solution.
Firstly uninstall Visual Studio Code. Go to C:\Users\Your_profile and delete the folders related to Visual Studio Code that start with a period. Then turn on showing hidden folders and go to C:\Users\Your_profile\AppData. Type vscode in the file finder and remove all foders and files related to Visual Studio Code. Finally, install Visual Studio Code and enjoy the virtual environments. :)
A:
VS Code: Python Interpreter can't find my venv
The only solution I found was to delete the venv and recreate it. I followed these steps but I'll provide a brief summary for Windows:
Activate your virtualenv. Go to the parent folder where your Virtual Environment is located and run venv\scripts\activate. Keep in mind that the first name "venv" can vary.
Create a requirements.txt file: pip freeze > requirements.txt
deactivate to exit the venv
rm venv to delete the venv
py -m venv venv to create a new one
pip install -r requirements.txt to install the requirements.
This worked for me. I didn't delete the old one, but created a new venv with python -m venv /path/newVenv in the ~/Envs folder, C:\Users\Admin\Envs. Maybe VS Code is searching in the ~/Envs folder, or it needs to be added to the python.path in View -> Command Palette -> >Preferences: Open User Settings.
A:
None of the suggestions on this thread worked for me. That said, I don't think the issue lies with VS Code, it's venv. I wound up installing PyCharm to fix this. After you’ve downloaded:
PyCharm > Preferences > search “interpreter” > Project: Python Interpreter > Click ‘+’ > in Virtualenv Environment > New environment (should automatically populate everything for a new env). Select OK, OK, OK.
In the bottom left, you’ll see Git | TODO | Problems | Terminal…etc. Click “Terminal” and you should see your environment already activated. From there, pip3 install your dependencies. Close PyCharm.
Go back to VS Code, open your project, and follow the suggestions above to select the Virtualenv (mine was 'venv': venv) as your interpreter.
Finally resolved.
A:
If you're a Linux user and you've used this or something similar to create your virtual environment:
python3 -m venv venv
and you cannot get the debug to work, remove your venv and create it from the VS Code terminal (press Ctrl + backtick to open it).
When you create it from the VS Code terminal, VS Code will ask if you want to use this new environment it amazingly detected for this workspace; say yes.
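A sketch of those steps (assuming the environment lives in a folder named venv at the workspace root):

rm -rf venv
python3 -m venv venv   # run this from the VS Code terminal so it offers to select the new env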
A:
Part of the confusion here may stem from UI behavior that is at odds with the VScode documentation. The docs state:
When you create a new virtual environment, a prompt will be displayed
to allow you to select it for the workspace.
That didn't happen in my case (VScode 1.66.2 running on Windows 10 with Remote - WSL plugin version 0.66.2). I followed the steps outlined here; I did not see the pop-up described by the VScode docs but clicking on the Python interpreter version in the status bar showed that VScode had automatically selected the interpreter installed in the virtual environment. Furthermore, I did observe that VScode was sourcing .venv/bin/activate as described in the post linked above
Run the code by clicking the play button, note the .venv and source
“/Users/jemurray/Google
Drive/scripts/personalPython/helloworld/.venv/bin/activate” in the
terminal shows the script is activated and running in the virtual
environment
A:
I was having the same error in my scripts with a virtual environment called "venv". Searching the Visual Studio Code documentation, I found that the virtual environment name starts with a dot "." in their examples, but they never point this out. I then created my virtual environment as ".venv" and that fixed the error:
https://code.visualstudio.com/docs/python/environments#_create-a-virtual-environment
A:
In my own case, I was trying to activate the venv in Windows PowerShell while the venv was created in WSL. So I had to recreate the venv with PowerShell, albeit with a different environment name, and reinstall the requirements.
A:
Here's the answer. Add this to your user and/or workspace settings.json file:
"python.defaultInterpreterPath": "${env:VIRTUAL_ENV}".
Then the first time you launch a workspace from an active virtual environment, vscode will set the interpreter correctly. Thereafter it will use whatever interpreter was set the last time the workspace was closed. As long as you don't manually change it, you're set. For existing workspaces, just manually set the interpreter and vscode will always use the interpreter from the prior session. It will never use anything in settings.json (or .env or .venv) except the first time a workspace is launched (and in that case, I think it only uses the settings.json name-value pair shown above).
That will work as-is for virtualenvs managed by pyenv-virtualenv (or virtualenvwrapper). Should work for regular virtualenv too. For conda, replace VIRTUAL_ENV with whatever it uses, assuming it sets a similar variable. Just activate something and type env to see all the environment variables.
This is the solution as long as you create a virtualenv, then launch a workspace for the first time, and the association between the workspace and virtualenv does not change. Unfortunately, it appears you have to set the interpreter manually if the association changes, but you only have to do it once.
The official explanation is here, specifically where it says the interpreter is stored internally i.e. not in any configuration file exposed to the user:
A:
This issue in VS code was fixed for me my simply using Command Prompt in VS code instead of PowerShell as the Terminal
A:
"python.venvPath" is the command to provide the venv path.
In VScode settings.json add
"python.terminal.activateEnvironment": true,
"python.venvPath": "Add_Venv_DirectoryPath_here",
A:
After some searching I found the following property in the VS Code settings, which fixed the problem for me: Python: Env File, whose default value is ${workspaceFolder}/.env.
Usually I call my venv folder .venv, so I changed the setting to be
${workspaceFolder}/.venv.
Now the venv Python version appears in the select interpreter option.
vs code venv file property
A:
I have a similar problem and found a very easy and simple solution. I am using a Mac and this is how it works.
I structured my development folder like this: "Users/my_user_name/Dev/venv"
I created multiple virtual environments at the same level as "venv". The problem is that I filled out "python.venvPath" with "Users/my_user_name/Dev/venv1", i.e. one of the virtual environments, which prevented VS Code from detecting my other virtual environments. So the fix is very simple: just change the value of "python.venvPath" from "Users/my_user_name/Dev/venv1" to "Users/my_user_name/Dev/" and voila, it detects all of my virtual environments.
I hope this answer helps whoever is having a similar problem.
Jupyter running in vscode also doesn't see this environment. I have installed ipykernel in this environment. I tried to reinstall vscode and python extension.
I tried to set the path in settings.json inside .vscode:
{
"python.pythonPath": ".\\venv\\Scripts\\python.exe"
}
Windows 10
Python 3.6.7 (64-bit)
VSCode 1.54.3
| [
"\nIn VSCode open your command palette — Ctrl+Shift+P by default\n\nLook for Python: Select Interpreter\n\nIn Select Interpreter choose Enter interpreter path... and then Find...\n\nNavigate to your venv folder — eg, ~/pyenvs/myenv/ or \\Users\\Foo\\Bar\\PyEnvs\\MyEnv\\\n\nIn the virtual environment folder choose <your-venv-name>/bin/python or <your-venv-name>/bin/python3\n\n\n\nThe issue is that VSCode's Python extension by default uses the main python or python3 program while venv effectively creates a \"new\" python/python3 executable (that is kind of the point of venv) so the extension does not have access to anything (available modules, namespaces, etc) that you have installed through a venv since the venv specific installations are not available to the main Python interpreter (again, this is by design—like how applications installed in a VM are not available to the host OS)\n",
"1.In VSCode open your command palette — Ctrl+Shift+P by default\n2.Look for Python: Select Interpreter\n3.In Select Interpreter choose Enter interpreter path... and then Find...\n4.Locate env folder, open Scripts folder , and choose python or python3\n\nwindows - venv\n\n",
"OK, I found a solution.\nFirstly uninstall Visual Studio Code. Go to C:\\Users\\Your_profile and delete the folders related to Visual Studio Code that start with a period. Then turn on showing hidden folders and go to C:\\Users\\Your_profile\\AppData. Type vscode in the file finder and remove all foders and files related to Visual Studio Code. Finally, install Visual Studio Code and enjoy the virtual environments. :)\n",
"VS Code: Python Interpreter can't find my venv\n\nThe only solution I found was to delete the venv and recreate it. I followed these steps but I'll provide a brief summary for Windows:\n\nActivate your virtualenv. Go to the parent folder where your Virtual Environment is located and run venv\\scripts\\activate. Keep in mind that the first name \"venv\" can vary.\nCreate a requirements.txt file. pip freeze requirements.txt\ndeactivate to exit the venv\nrm venv to delete the venv\npy -m venv venv to create a new one\npip install -r requirements.txt to install the requirements.\n\n\nThis worked for me, I didn't delete the old, but created a new python -m venv /path/newVenv in the ~/Envs folder, C:\\Users\\Admin\\Envs. Maybe VS Code is searching in the ~/Envs folder, or it needs to be added to the python.path in the View -> Command Pallete -> >Preferences: Open User Settings.\n",
"None of the suggestions on this thread worked for me. That said, I don't think the issue lies with VS Code, it's venv. I wound up installing PyCharm to fix this. After you’ve downloaded:\nPyCharm > Preferences > search “interpreter” > Project: Python Interpreter > Click ‘+’ > in Virtualenv Environment > New environment (should automatically populate everything for a new env). Select OK, OK, OK.\nIn the bottom left, you’ll see Git | TODO | Problems | Terminal…etc. Click “Terminal” and you should see your environment already activated. From there, pip3 install your dependencies. Close PyCharm.\nGo back to VS Code, open your project, and follow the suggestions above to select the Virtualenv (mine was 'venv': venv) as your interpreter.\nFinally resolved.\n",
"If you're a Linux user, and you've used this or similaar to create your virtual environment:\npython3 -m venv venv\n\nand you cannot get the debug to work, remove your venv and create it from the VS Code terminal (click Ctrl + back-tick to open).\nWhen you create it from the VS Code terminal, VS Code will ask if you want to use this new environment it amazingly detected for this workspace, say yes.\n",
"Part of the confusion here may stem from UI behavior that is at odds with the VScode documentation. The docs state:\n\nWhen you create a new virtual environment, a prompt will be displayed\nto allow you to select it for the workspace.\n\nThat didn't happen in my case (VScode 1.66.2 running on Windows 10 with Remote - WSL plugin version 0.66.2). I followed the steps outlined here; I did not see the pop-up described by the VScode docs but clicking on the Python interpreter version in the status bar showed that VScode had automatically selected the interpreter installed in the virtual environment. Furthermore, I did observe that VScode was sourcing .venv/bin/activate as described in the post linked above\n\nRun the code by clicking the play button, note the .venv and source\n“/Users/jemurray/Google\nDrive/scripts/personalPython/helloworld/.venv/bin/activate” in the\nterminal shows the script is activated and running in the virtual\nenvironment\n\n",
"I was having the same error in my scripts with a virtual environment called \"venv\", so searching the Visual Studio documentation I found that the virtual environment starts with a dot \".\" but they never mentioned this, then I created my virtual environment \".venv\" and that fixes the error:\nhttps://code.visualstudio.com/docs/python/environments#_create-a-virtual-environment\n",
"In my own case, I was trying to activate the venv in Windows PowerShell while the venv was created in wsl. So, I had to recreate the venv with PowerShell albeit with different environment name and reinstall the requirements.\n",
"Here's the answer. Add this to your user and/or workspace settings.json file:\n\"python.defaultInterpreterPath\": \"${env:VIRTUAL_ENV}\".\nThen the first time you launch a workspace from an active virtual environment, vscode will set the interpreter correctly. Thereafter it will use whatever interpreter was set the last time the workspace was closed. As long as you don't manually change it, you're set. For existing workspaces, just manually set the interpreter and vscode will always use the interpreter from the prior session. It will never use anything in settings.json (or .env or .venv) except the first time a workspace is launched (and in that case, I think it only uses the settings.json name-value pair shown above).\nThat will work as-is for virtualenvs managed by pyenv-virtualenv (or virtualenvwrapper). Should work for regular virtualenv too. For conda, replace VIRTUAL_ENV with whatever it uses, assuming it sets a similar variable. Just activate something and type env to see all the environment variables.\nThis is the solution as long as you create a virtualenv, then launch a workspace for the first time, and the association between the workspace and virtualenv does not change. Unfortunately, it appears you have to set the interpreter manually if the association changes, but you only have to do it once.\nThe official explanation is here, specifically where it says the interpreter is stored internally i.e. not in any configuration file exposed to the user:\n\n",
"This issue in VS code was fixed for me my simply using Command Prompt in VS code instead of PowerShell as the Terminal\n\n",
"\"python.venvPath\" is the command to provide the venv path.\nIn VScode settings.json add\n \"python.terminal.activateEnvironment\": true,\n\n \"python.venvPath\": \"Add_Venv_DirectoryPath_here\",\n\n",
"After some search I found the next property in the vs-code settings which fix the problem for me: Python: Env File, where the default value is ${workspaceFolder}/.env.\nUsually I call my venv folder .venv so I fixed the settings to be\n${workspaceFolder}/.venv.\nNow the venv python version appeared in the select interpreter option.\nvs code venv file property\n",
"I have similar problem, and found a very easy and simple solution. I am using a mac and this is how it works.\nI structured my development folder like this: \"Users/my_user_name/Dev/venv\"\nI created multiple virtual environments at the same level on the \"venv\". The problem is I fill out the \"python.venvPath\" with \"Users/my_user_name/Dev/venv1\" or one of the virtual environment. This prevent VS Code form detecting my other virtual environment. So the fix is very simple, just change the value of \"python.venvPath\" from \"Users/my_user_name/Dev/venv1\" to this \"Users/my_user_name/Dev/\" and voila, it detects all of my virtual environment.\nI hope this answer helps whoever having similar problem.\n"
] | [
36,
5,
2,
2,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"jupyter",
"python",
"virtual_environment",
"visual_studio_code"
] | stackoverflow_0066869413_jupyter_python_virtual_environment_visual_studio_code.txt |
Q:
I get error "unmatched '}'" when I scrape website (korter.az)
I want to crawl all advertisements but the output is "unmatched '}'". Is there any easy way to do it? I tried BeautifulSoup before, but I think it's not the correct way to do it, or I'm using it the wrong way.
How can I scrape all '199 yeni tikili binalar' from the website.
from ast import literal_eval
from bs4 import BeautifulSoup as bs
import requests
import re
import json
import requests
import pandas as pd
from ast import literal_eval
url = "https://korter.az/yasayis-kompleksleri-baku"
html_doc = requests.get(url).text
data = re.search(r'2804\.jpg"\}\}\}\],(".*")', html_doc).group(1)
data = json.loads(literal_eval(data))
df = pd.DataFrame(data)
df.to_excel('korter.xlsx', index=False)
A:
The site has an API which can be accessed with requests.
The URL of the API is: "https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ"
Full Code
import requests
import math
import pandas as pd
def roundup(x):
return int(math.ceil(x / 20.0)) * 20
# Getting the number of results
url1 = f"https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ"
r = requests.get(url1)
no_of_outcomes = r.json()["totalBuildingsCount"]
# since the total is 199, round up to a multiple of 20 because the API only returns 20 results per page
no_of_outcomes = roundup(no_of_outcomes)
# Getting Sub Url from each Page by looping.
result_url = []
previous_subdata = []
for k in range(1, int(no_of_outcomes/20)+1):
url = f"https://korter.az/api/building/listing?mainGeoObjectId=1&page={k}&lang=az-AZ&locale=az-AZ"
r = requests.get(url)
subdata = r.json()["buildings"]
for i in subdata:
suburl = "https://korter.az"+i["url"]
result_url.append(suburl)
print(len(result_url))
df = pd.DataFrame(result_url)
print(df)
Output
199
0
0 https://korter.az/toca-residence-baki
1 https://korter.az/malibu-residence-baki
2 https://korter.az/zirve-park-baki
3 https://korter.az/melissa-park-baki
4 https://korter.az/white-hotel-baki
.. ...
194 https://korter.az/yasham-boulevard-baki
195 https://korter.az/koroglu-baki
196 https://korter.az/luxor-palace-baki
197 https://korter.az/shirvanshahlar-residence-baki
198 https://korter.az/baki-baglari-baki
[199 rows x 1 columns]
Hope this helps. Happy Coding :)
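If you also want the Excel file the question was aiming for, a minimal follow-up sketch (assuming the df built above, and that openpyxl is installed for the Excel writer):
df.columns = ["url"]  # give the single column a readable name
df.to_excel("korter.xlsx", index=False)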
| I get error "unmatched '}'" when I scrape website(korter.az) | I want to crawl all advertisements but output is "unmatched '}'". Is there any easy way to do it? I tried Beautifulsoup before but I think It's not correct way to do it or I'm using it wrong way.
How can I scrape all '199 yeni tikili binalar' from the website.
from ast import literal_eval
from bs4 import BeautifulSoup as bs
import requests
import re
import json
import requests
import pandas as pd
from ast import literal_eval
url = "https://korter.az/yasayis-kompleksleri-baku"
html_doc = requests.get(url).text
data = re.search(r'2804\.jpg"\}\}\}\],(".*")', html_doc).group(1)
data = json.loads(literal_eval(data))
df = pd.DataFrame(data)
df.to_excel('korter.xlsx', index=False)
| [
"The site has an api which can be accessed by request\nUrl of the API is : \"https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ\"\nFull Code\nimport requests\nimport math\nimport pandas as pd\n\n\ndef roundup(x):\n return int(math.ceil(x / 20.0)) * 20\n\n\n# Gettig no of results\nurl1 = f\"https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ\"\nr = requests.get(url1)\nno_of_outcomes = r.json()[\"totalBuildingsCount\"]\n# since the data is 199 i am rounding up to 20 since i will divide no of outcomes by 20 as the api only provides with 20 results at a time\nno_of_outcomes = roundup(no_of_outcomes)\n\n# Getting Sub Url from each Page by looping.\n\nresult_url = []\nprevious_subdata = []\n\nfor k in range(1, int(no_of_outcomes/20)+1):\n url = f\"https://korter.az/api/building/listing?mainGeoObjectId=1&page={k}&lang=az-AZ&locale=az-AZ\"\n r = requests.get(url)\n subdata = r.json()[\"buildings\"]\n for i in subdata:\n suburl = \"https://korter.az\"+i[\"url\"]\n result_url.append(suburl)\n\n\nprint(len(result_url))\ndf = pd.DataFrame(result_url)\nprint(df)\n\nOutput\n199\n 0\n0 https://korter.az/toca-residence-baki\n1 https://korter.az/malibu-residence-baki\n2 https://korter.az/zirve-park-baki\n3 https://korter.az/melissa-park-baki\n4 https://korter.az/white-hotel-baki\n.. ...\n194 https://korter.az/yasham-boulevard-baki\n195 https://korter.az/koroglu-baki\n196 https://korter.az/luxor-palace-baki\n197 https://korter.az/shirvanshahlar-residence-baki\n198 https://korter.az/baki-baglari-baki\n\n[199 rows x 1 columns]\n\nHope this helps. Happy Coding :)\n"
] | [
0
] | [] | [] | [
"python",
"python_re",
"web_scraping"
] | stackoverflow_0074673490_python_python_re_web_scraping.txt |
Q:
How to create a random list that satisfies a condition (in one try)?
I have written the following code to generate a random list. I want the list to have elements between 0 and 500, but the summation of all elements should not exceed 1300. I don't know how to continue my code to do that. I have written other code; for example, to create a list of random vectors and then pick among those that satisfy the condition. But here I want to create such a list in one try.
nv = 5
bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]
var =[]
for j in range(nv):
var.append(random.uniform(bounds[j][0], bounds[j][1]))
summ = sum(var)
if summ > 1300:
????
A:
Don't append until after you've validated the value.
Use while len() < maxLen so that you can handle repeat attempts.
You don't really need nv since len(bounds) dictates the final value of len(var).
len(var) is also the next index of the var list that is unused so you can use that to keep track of where you are in bounds.
A running sum is more efficient than using sum() on every check. (Though on small lists, it's not going to make a noticeable difference.)
The * in the .uniform() call splits a list into individual arguments. (Asterisks in Python: what they are and how to use them seems like a good tutorial on the subject.)
import random
bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]
var = []
runningSum = 0
while len(var) < len(bounds):
sample = random.uniform(*bounds[len(var)])
if runningSum + sample < 1300:
runningSum += sample
var.append(sample)
print(repr(var))
A:
Without the aid of numpy you could do this:
from random import uniform
def func1():
LIMIT = 1_300
bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]
while sum(result := [uniform(lo, hi) for lo, hi in bounds]) > LIMIT:
pass
return result
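Note that the := walrus operator in that loop needs Python 3.8+. A sketch of an equivalent rejection loop for older versions:
from random import uniform

def func1():
    LIMIT = 1_300
    bounds = [(0, 500)] * 5
    while True:
        # resample the whole list until the sum satisfies the constraint
        result = [uniform(lo, hi) for lo, hi in bounds]
        if sum(result) <= LIMIT:
            return result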
 | How to create a random list that satisfies a condition (in one try)? | I have written the following code to generate a random list. I want the list to have elements between 0 and 500, but the summation of all elements should not exceed 1300. I don't know how to continue my code to do that. I have written other code; for example, to create a list of random vectors and then pick among those that satisfy the condition. But here I want to create such a list in one try.
nv = 5
bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]
var =[]
for j in range(nv):
var.append(random.uniform(bounds[j][0], bounds[j][1]))
summ = sum(var)
if summ > 1300:
????
| [
"Don't append until after you've validated the value.\nUse while len() < maxLen so that you can handle repeat attempts.\nYou don't really need nv since len(bounds) dictates the final value of len(var).\nlen(var) is also the next index of the var list that is unused so you can use that to keep track of where you are in bounds.\nA running sum is more efficient than using sum() on every check. (Though on small lists, it's not going to make a noticeable difference.)\nThe * in the .uniform() call splits a list into individual arguments. (Asterisks in Python: what they are and how to use them seems like a good tutorial on the subject.)\nimport random\n\nbounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]\nvar = []\nrunningSum = 0\nwhile len(var) < len(bounds):\n sample = random.uniform(*bounds[len(var)])\n if runningSum + sample < 1300:\n runningSum += sample\n var.append(sample)\n\nprint(repr(var))\n\n",
"Without the aid of numpy you could do this:\nfrom random import uniform\n\ndef func1():\n LIMIT = 1_300\n bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]\n\n while sum(result := [uniform(lo, hi) for lo, hi in bounds]) > LIMIT:\n pass\n\n return result\n\n"
] | [
1,
0
] | [] | [] | [
"list",
"numpy",
"python",
"random"
] | stackoverflow_0074673377_list_numpy_python_random.txt |
Q:
Calling mean() Function Without Removing Non-Numeric Columns In Dataframe
I have the following dataframe:
import pandas as pd
fertilityRates = pd.read_csv('fertility_rate.csv')
fertilityRatesRowCount = len(fertilityRates.axes[0])
fertilityRates.head(fertilityRatesRowCount)
I have found a way to find the mean for each row over columns 1960-1969, but would like to do so without removing the column called "Country".
This is the output after I execute the following commands:
Mean1960To1970 = fertilityRates.iloc[:, 1:11].mean(axis=1)
Mean1960To1970
A:
You can use pandas.DataFrame.loc to select a range of years (e.g. "1960":"1968" means from 1960 to 1968, inclusive).
Try this :
Mean1960To1968 = (
fertilityRates[["Country"]]
.assign(Mean= fertilityRates.loc[:, "1960":"1968"].mean(axis=1))
)
# Output :
print(Mean1960To1968)
Country Mean
0 _World 5.004444
1 Afghanistan 7.450000
2 Albania 5.913333
3 Algeria 7.635556
4 Angola 7.030000
5 Antigua and Barbuda 4.223333
6 Arab World 7.023333
7 Argentina 3.073333
8 Armenia 4.133333
9 Aruba 4.044444
10 Australia 3.167778
11 Austria 2.715556
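If the goal is simply to average while skipping non-numeric columns (a sketch, not restricted to a specific year range), pandas.DataFrame.mean can do that directly:
fertilityRates["Mean"] = fertilityRates.mean(axis=1, numeric_only=True)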
| Calling mean() Function Without Removing Non-Numeric Columns In Dataframe | I have the following dataframe:
import pandas as pd
fertilityRates = pd.read_csv('fertility_rate.csv')
fertilityRatesRowCount = len(fertilityRates.axes[0])
fertilityRates.head(fertilityRatesRowCount)
I have found a way to find the mean for each row over columns 1960-1969, but would like to do so without removing the column called "Country".
This is the output after I execute the following commands:
Mean1960To1970 = fertilityRates.iloc[:, 1:11].mean(axis=1)
Mean1960To1970
| [
"You can use pandas.DataFrame.loc to select a range of years (e.g \"1960\":\"1968\" means from 1960 to 1968).\nTry this :\nMean1960To1968 = (\n fertilityRates[[\"Country\"]]\n .assign(Mean= fertilityRates.loc[:, \"1960\":\"1968\"].mean(axis=1))\n )\n\n# Output :\nprint(Mean1960To1968)\n\n Country Mean\n0 _World 5.004444\n1 Afghanistan 7.450000\n2 Albania 5.913333\n3 Algeria 7.635556\n4 Angola 7.030000\n5 Antigua and Barbuda 4.223333\n6 Arab World 7.023333\n7 Argentina 3.073333\n8 Armenia 4.133333\n9 Aruba 4.044444\n10 Australia 3.167778\n11 Austria 2.715556\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074673594_dataframe_pandas_python.txt |
Q:
Cython Buffer types only allowed as function local variables
I created a function that takes X, y, and batch size as input and yields mini-batches as output, using Cython to speed up the process.
import numpy as np
cimport cython
cimport numpy as np
ctypedef np.float64_t DTYPE_t
@cython.boundscheck(False)
def create_mini_batches(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] y, int batch_size):
cdef int m
cdef double num_of_batch
cdef np.ndarray[DTYPE_t, ndim=2] shuffle_X
cdef np.ndarray[DTYPE_t, ndim=2] shuffle_y
cdef int permutation
X, y = X.T, y.T
m = X.shape[0]
num_of_batch = m // batch_size
permutation = list(np.random.permutation(m))
shuffle_X = X[permutation, :]
shuffle_y = y[permutation, :]
for t in range(num_of_batch):
mini_x = shuffle_X[t * batch_size: (t + 1) * batch_size, :]
mini_y = shuffle_y[t * batch_size: (t + 1) * batch_size, :]
yield (mini_x.T, mini_y.T)
if m % batch_size != 0:
mini_x = shuffle_X[m // batch_size * batch_size: , :]
mini_y = shuffle_y[m // batch_size * batch_size: , :]
yield (mini_x.T, mini_y.T)
When I compile the program with python setup.py build_ext --inplace, the following error shows up.
@cython.boundscheck(False)
def create_mini_batches(np.ndarray\[DTYPE_t, ndim=2\] X, np.ndarray\[DTYPE_t, ndim=2\] y, int batch_size):
^
test.pyx:8:24: Buffer types only allowed as function local variables
Can someone help me solve the error and explain why it is an error?
A:
It's a slightly confusing error message in this case, but you're getting it because this is a generator rather than a plain function. This means that Cython has to create an internal data structure to hold the generator state while it works.
Typed Numpy array variables (e.g. np.ndarray[DTYPE_t, ndim=2]) were implemented in a way where it's very hard to handle their reference counting correctly. Therefore Cython can only handle them as variables in a regular function. It cannot store them in a class, and thus cannot use them in a generator.
To solve it you either need to drop the typing, or you should switch to the more recent typed memoryviews, which were designed better and so don't have this limitation.
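For illustration, a minimal sketch of the memoryview-based signature (my sketch, assuming float64 input to match the question's DTYPE_t):
import numpy as np
cimport cython

@cython.boundscheck(False)
def create_mini_batches(double[:, :] X, double[:, :] y, int batch_size):
    # typed memoryviews can be stored in the generator's state object,
    # unlike np.ndarray[...] buffer variables
    cdef Py_ssize_t m = X.shape[1]
    cdef Py_ssize_t t
    for t in range(m // batch_size):
        # slicing a memoryview is supported; convert back with np.asarray if needed
        yield (np.asarray(X[:, t * batch_size: (t + 1) * batch_size]),
               np.asarray(y[:, t * batch_size: (t + 1) * batch_size]))

Note that fancy indexing (e.g. X[permutation, :]) is not supported on memoryviews, so the shuffling step should still be done on the NumPy arrays before taking the views.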
 | Cython Buffer types only allowed as function local variables | I created a function that takes X, y, and batch size as input and yields mini-batches as output, using Cython to speed up the process.
import numpy as np
cimport cython
cimport numpy as np
ctypedef np.float64_t DTYPE_t
@cython.boundscheck(False)
def create_mini_batches(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] y, int batch_size):
cdef int m
cdef double num_of_batch
cdef np.ndarray[DTYPE_t, ndim=2] shuffle_X
cdef np.ndarray[DTYPE_t, ndim=2] shuffle_y
cdef int permutation
X, y = X.T, y.T
m = X.shape[0]
num_of_batch = m // batch_size
permutation = list(np.random.permutation(m))
shuffle_X = X[permutation, :]
shuffle_y = y[permutation, :]
for t in range(num_of_batch):
mini_x = shuffle_X[t * batch_size: (t + 1) * batch_size, :]
mini_y = shuffle_y[t * batch_size: (t + 1) * batch_size, :]
yield (mini_x.T, mini_y.T)
if m % batch_size != 0:
mini_x = shuffle_X[m // batch_size * batch_size: , :]
mini_y = shuffle_y[m // batch_size * batch_size: , :]
yield (mini_x.T, mini_y.T)
When I compile the program with python setup.py build_ext --inplace, the following error shows up.
@cython.boundscheck(False)
def create_mini_batches(np.ndarray\[DTYPE_t, ndim=2\] X, np.ndarray\[DTYPE_t, ndim=2\] y, int batch_size):
^
test.pyx:8:24: Buffer types only allowed as function local variables
Can someone help me solve the error and explain why it is an error?
| [
"It's a sightly confusing error message in this case but you're getting it because it's a generator rather than a function. This means that Cython has to create an internal data structure to hold the generator state while it works.\nTyped Numpy array variables (e.g. np.ndarray[DTYPE_t, ndim=2]) were implemented in a way where it's very hard to handle their reference counting correctly. Therefore Cython can only handle them as variables in a regular function. It cannot store them in a class, and thus cannot use them in a generator.\nTo solve it your either need to drop the typing, or you should switch to the more recent typed memoryviews which were designed better so don't have this limitation.\n"
] | [
0
] | [] | [] | [
"cython",
"numpy",
"numpy_ndarray",
"python"
] | stackoverflow_0074673759_cython_numpy_numpy_ndarray_python.txt |
Q:
How to create 3D array with filled value along one dimension?
It's easy to create a 2D array with filled values:
import numpy as np
np.full((5, 3), [1])
np.full((5, 3), [1, 2, 3])
Then, I want to create a 3D array with the same value along the last two dimensions:
import numpy as np
np.full((2, 3, 1), [[1], [2]])
'''
# preferred result
[[[1],
[1],
[1]]
[[2],
[2],
[2]]]
'''
However, I got this error:
ValueError: could not broadcast input array from the shape (2,1) into shape (2,3,1)
Does anyone know the correct way to use np.full() for 3D array?
A:
In order to broadcast the values to the desired shape, you need the fill value in shape (2, 1, 1), which broadcasts against the output shape (2, 3, 1):
np.full((2, 3, 1), [[[1]], [[2]]])
output:
array([[[1],
[1],
[1]],
[[2],
[2],
[2]]])
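Equivalently, a small sketch that builds the broadcastable shape from a flat array instead of writing the nested literal by hand:
import numpy as np

vals = np.array([1, 2]).reshape(2, 1, 1)   # shape (2, 1, 1)
arr = np.full((2, 3, 1), vals)             # fill value broadcasts to (2, 3, 1)
# or, without np.full:
arr2 = np.broadcast_to(vals, (2, 3, 1)).copy()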
| How to create 3D array with filled value along one dimension? | It's easy to create a 2D array with filled values:
import numpy as np
np.full((5, 3), [1])
np.full((5, 3), [1, 2, 3])
Then, I want to create a 3D array with the same value along the last two dimensions:
import numpy as np
np.full((2, 3, 1), [[1], [2]])
'''
# preferred result
[[[1],
[1],
[1]]
[[2],
[2],
[2]]]
'''
However, I got this error:
ValueError: could not broadcast input array from the shape (2,1) into shape (2,3,1)
Does anyone know the correct way to use np.full() for 3D array?
| [
"In order to boardcast the value to the desired shape, you require the value in shape (2, 1, 1) to match with the input shape (2, 3, 1)\nnp.full((2, 3, 1), [[[1]], [[2]]])\n\noutput:\narray([[[1],\n [1],\n [1]],\n\n [[2],\n [2],\n [2]]])\n\n"
] | [
0
] | [] | [] | [
"arrays",
"numpy",
"numpy_ndarray",
"python"
] | stackoverflow_0074673888_arrays_numpy_numpy_ndarray_python.txt |
Q:
Why nested When().Then() is slower than Left Join in Rust Polars?
In Rust Polars(might apply to python pandas as well) assigning values in a new column with a complex logic involving values of other columns can be achieved in two ways. The default way is using a nested WhenThen expression. Another way to achieve same thing is with LeftJoin. Naturally I would expect When Then to be much faster than Join, but it is not the case. In this example, When Then is 6 times slower than Join. Is that actually expected? Am I using When Then wrong?
In this example the goal is to assign weights/multipliers column based on three other columns: country, city and bucket.
use std::collections::HashMap;
use polars::prelude::*;
use rand::{distributions::Uniform, Rng}; // 0.6.5
pub fn bench() {
// PREPARATION
// This MAP is to be used for Left Join
let mut weights = df![
"country"=>vec!["UK"; 5],
"city"=>vec!["London"; 5],
"bucket" => ["1","2","3","4","5"],
"weights" => [0.1, 0.2, 0.3, 0.4, 0.5]
].unwrap().lazy();
weights = weights.with_column(concat_lst([col("weights")]).alias("weights"));
// This MAP to be used in When.Then
let weight_map = bucket_weight_map(&[0.1, 0.2, 0.3, 0.4, 0.5], 1);
// Generate the DataSet itself
let mut rng = rand::thread_rng();
let range = Uniform::new(1, 5);
let b: Vec<String> = (0..10_000_000).map(|_| rng.sample(&range).to_string()).collect();
let rc = vec!["UK"; 10_000_000];
let rf = vec!["London"; 10_000_000];
let val = vec![1; 10_000_000];
let frame = df!(
"country" => rc,
"city" => rf,
"bucket" => b,
"val" => val,
).unwrap().lazy();
// Test with Left Join
use std::time::Instant;
let now = Instant::now();
let r = frame.clone()
.join(weights, [col("country"), col("city"), col("bucket")], [col("country"), col("city"), col("bucket")], JoinType::Left)
.collect().unwrap();
let elapsed = now.elapsed();
println!("Left Join took: {:.2?}", elapsed);
// Test with nested When Then
let now = Instant::now();
let r1 = frame.clone().with_column(
when(col("country").eq(lit("UK")))
.then(
when(col("city").eq(lit("London")))
.then(rf_rw_map(col("bucket"),weight_map,NULL.lit()))
.otherwise(NULL.lit())
)
.otherwise(NULL.lit())
)
.collect().unwrap();
let elapsed = now.elapsed();
println!("Chained When Then: {:.2?}", elapsed);
// Check results are identical
dbg!(r.tail(Some(10)));
dbg!(r1.tail(Some(10)));
}
/// All this does is build a chained When().Then().Otherwise()
fn rf_rw_map(col: Expr, map: HashMap<String, Expr>, other: Expr) -> Expr {
// buf is a placeholder
let mut it = map.into_iter();
let (k, v) = it.next().unwrap(); //The map will have at least one value
let mut buf = when(lit::<bool>(false)) // buffer WhenThen
.then(lit::<f64>(0.).list()) // buffer WhenThen, needed to "chain on to"
.when(col.clone().eq(lit(k)))
.then(v);
for (k, v) in it {
buf = buf
.when(col.clone().eq(lit(k)))
.then(v);
}
buf.otherwise(other)
}
fn bucket_weight_map(arr: &[f64], ntenors: u8) -> HashMap<String, Expr> {
let mut bucket_weights: HashMap<String, Expr> = HashMap::default();
for (i, n) in arr.iter().enumerate() {
let j = i + 1;
bucket_weights.insert(
format!["{j}"],
Series::from_vec("weight", vec![*n; ntenors as usize])
.lit()
.list(),
);
}
bucket_weights
}
The result is surprising to me: Left Join took: 561.26ms vs Chained When Then: 3.22s
Thoughts?
UPDATE
This does not make much difference. Nested WhenThen is still over 3s
// Test with nested When Then
let now = Instant::now();
let r1 = frame.clone().with_column(
when(col("country").eq(lit("UK")).and(col("city").eq(lit("London"))))
.then(rf_rw_map(col("bucket"),weight_map,NULL.lit()))
.otherwise(NULL.lit())
)
.collect().unwrap();
let elapsed = now.elapsed();
println!("Chained When Then: {:.2?}", elapsed);
A:
It's difficult to say for certain without more context, but the difference in performance between using a nested When().Then() expression and a LeftJoin in Rust Polars may be due to the implementation of each method. LeftJoin is likely more optimized for this kind of operation than a nested When().Then() expression, so it may be faster in general. Additionally, using LeftJoin may allow the program to take advantage of parallelization, which can improve performance. It's also possible that the specific inputs to the two methods in the example are causing the LeftJoin to be faster.
A:
The joins are one of the most optimized algorithms in Polars. A left join will be executed fully in parallel and has many performance-related fast paths. If you want to combine data based on equality, you should almost always choose a join.
| Why nested When().Then() is slower than Left Join in Rust Polars? | In Rust Polars(might apply to python pandas as well) assigning values in a new column with a complex logic involving values of other columns can be achieved in two ways. The default way is using a nested WhenThen expression. Another way to achieve same thing is with LeftJoin. Naturally I would expect When Then to be much faster than Join, but it is not the case. In this example, When Then is 6 times slower than Join. Is that actually expected? Am I using When Then wrong?
In this example the goal is to assign weights/multipliers column based on three other columns: country, city and bucket.
use std::collections::HashMap;
use polars::prelude::*;
use rand::{distributions::Uniform, Rng}; // 0.6.5
pub fn bench() {
// PREPARATION
// This MAP is to be used for Left Join
let mut weights = df![
"country"=>vec!["UK"; 5],
"city"=>vec!["London"; 5],
"bucket" => ["1","2","3","4","5"],
"weights" => [0.1, 0.2, 0.3, 0.4, 0.5]
].unwrap().lazy();
weights = weights.with_column(concat_lst([col("weights")]).alias("weights"));
// This MAP to be used in When.Then
let weight_map = bucket_weight_map(&[0.1, 0.2, 0.3, 0.4, 0.5], 1);
// Generate the DataSet itself
let mut rng = rand::thread_rng();
let range = Uniform::new(1, 5);
let b: Vec<String> = (0..10_000_000).map(|_| rng.sample(&range).to_string()).collect();
let rc = vec!["UK"; 10_000_000];
let rf = vec!["London"; 10_000_000];
let val = vec![1; 10_000_000];
let frame = df!(
"country" => rc,
"city" => rf,
"bucket" => b,
"val" => val,
).unwrap().lazy();
// Test with Left Join
use std::time::Instant;
let now = Instant::now();
let r = frame.clone()
.join(weights, [col("country"), col("city"), col("bucket")], [col("country"), col("city"), col("bucket")], JoinType::Left)
.collect().unwrap();
let elapsed = now.elapsed();
println!("Left Join took: {:.2?}", elapsed);
// Test with nested When Then
let now = Instant::now();
let r1 = frame.clone().with_column(
when(col("country").eq(lit("UK")))
.then(
when(col("city").eq(lit("London")))
.then(rf_rw_map(col("bucket"),weight_map,NULL.lit()))
.otherwise(NULL.lit())
)
.otherwise(NULL.lit())
)
.collect().unwrap();
let elapsed = now.elapsed();
println!("Chained When Then: {:.2?}", elapsed);
// Check results are identical
dbg!(r.tail(Some(10)));
dbg!(r1.tail(Some(10)));
}
/// All this does is build a chained When().Then().Otherwise()
fn rf_rw_map(col: Expr, map: HashMap<String, Expr>, other: Expr) -> Expr {
// buf is a placeholder
let mut it = map.into_iter();
let (k, v) = it.next().unwrap(); //The map will have at least one value
let mut buf = when(lit::<bool>(false)) // buffer WhenThen
.then(lit::<f64>(0.).list()) // buffer WhenThen, needed to "chain on to"
.when(col.clone().eq(lit(k)))
.then(v);
for (k, v) in it {
buf = buf
.when(col.clone().eq(lit(k)))
.then(v);
}
buf.otherwise(other)
}
fn bucket_weight_map(arr: &[f64], ntenors: u8) -> HashMap<String, Expr> {
let mut bucket_weights: HashMap<String, Expr> = HashMap::default();
for (i, n) in arr.iter().enumerate() {
let j = i + 1;
bucket_weights.insert(
format!["{j}"],
Series::from_vec("weight", vec![*n; ntenors as usize])
.lit()
.list(),
);
}
bucket_weights
}
The result is surprising to me: Left Join took: 561.26ms vs Chained When Then: 3.22s
Thoughts?
UPDATE
This does not make much difference. Nested WhenThen is still over 3s
// Test with nested When Then
let now = Instant::now();
let r1 = frame.clone().with_column(
when(col("country").eq(lit("UK")).and(col("city").eq(lit("London"))))
.then(rf_rw_map(col("bucket"),weight_map,NULL.lit()))
.otherwise(NULL.lit())
)
.collect().unwrap();
let elapsed = now.elapsed();
println!("Chained When Then: {:.2?}", elapsed);
| [
"It's difficult to say for certain without more context, but the difference in performance between using a nested When().Then() expression and a LeftJoin in Rust Polars may be due to the implementation of each method. LeftJoin is likely more optimized for this kind of operation than a nested When().Then() expression, so it may be faster in general. Additionally, using LeftJoin may allow the program to take advantage of parallelization, which can improve performance. It's also possible that the specific inputs to the two methods in the example are causing the LeftJoin to be faster.\n",
"The joins are one of the most optimized algorithms in polars. A left join will be executed fully in parallel and has many performance related fast paths. If you want to combine data based on equality, you should almost always choose a join.\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"python_polars",
"rust"
] | stackoverflow_0074671361_dataframe_pandas_python_python_polars_rust.txt |
Q:
I cannot get over 50% accuracy on my test data in this simple CNN Tensorflow Keras model for image classification
The code is as follows. I have a highly imbalanced dataset for chest X-rays with heart enlargement. The images are separated into a training folder split into positive-for-cardiomegaly and negative-for-cardiomegaly subfolders (467 positive images and ~20,000 negative). Then I have a testing folder with two subfolders (300 positive, 300 negative). Each time I test, I keep getting 50% accuracy with the eval method below. When I look at the predictions, they are always all one class (normally negative); however, if I give the positive values a very high weight (1000+ compared to 1 for the negatives), the model will flip and say they are all instead positive. This leads me to believe it is overfitting, but all my attempts to resolve this have run into issues.
import pandas as pd
import os
import matplotlib.pyplot as plt
import numpy as np
import skimage as sk
import skimage.io as skio
import skimage.transform as sktr
import skimage.filters as skfl
import skimage.feature as skft
import skimage.color as skcol
import skimage.exposure as skexp
import skimage.morphology as skmr
import skimage.util as skut
import skimage.measure as skme
import sklearn.model_selection as le_ms
import sklearn.decomposition as le_de
import sklearn.discriminant_analysis as le_di
import sklearn.preprocessing as le_pr
import sklearn.linear_model as le_lm
import sklearn.metrics as le_me
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
classNames = ["trainpos","trainneg"]
testclassNames = ["testpos", "test"]
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
'./data/trainup/',
labels='inferred',
label_mode='categorical',
class_names=classNames,
color_mode='grayscale',
batch_size=32,
image_size=(256, 256),
shuffle=True,
seed=123,
validation_split=0.2,
subset="training",
interpolation='gaussian',
follow_links=False,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
'./data/trainup/',
labels='inferred',
label_mode='categorical',
class_names=classNames,
color_mode='grayscale',
batch_size=32,
image_size=(256, 256),
shuffle=True,
seed=23,
validation_split=0.2,
subset="validation",
interpolation='gaussian',
follow_links=False,
)
test_ds = tf.keras.preprocessing.image_dataset_from_directory(
'./data/testup/',
labels='inferred',
label_mode='categorical',
class_names=testclassNames,
color_mode='grayscale',
batch_size=32,
image_size=(256, 256),
shuffle=True,
interpolation='gaussian',
follow_links=False,
)
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(256, 256, 1)),
tf.keras.layers.Conv2D(16, 4, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 4, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(2)
])
opt = keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=opt,
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
class_weight = {0: 29, 1: 1}
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=5,
class_weight=class_weight
)
test_loss, test_accuracy = model.evaluate(test_ds)
print("Test Loss: ", test_loss)
print("Test Accuracy: ", test_accuracy)
19/19 [==============================] - 7s 376ms/step - loss: 3.4121 - accuracy: 0.5000
Test Loss: 3.4121198654174805
Test Accuracy: 0.5
I have tried updating the learning rate to values between 0.1 and 0.00001, adding epochs, removing epochs, and changing to SGD for the optimizer. Attempting to unpack the test_ds after subscripting it gave me the error that it is a BatchDataset and can't be subscripted. This shows me that the test_ds is giving me ~19 tensors of 32 images each, except the last one, which has about 25. I then wanted to predict each of these images individually and get the results, because it looked like it was grouping all 32 (or 25 for the last one) together and predicting based on that, but that led me down rabbit holes that I haven't come out of with results. I've tried many other things I can't fully remember, normally tweaking the model itself or adding data augmentation. (I am using TensorFlow 2.3, as this is for a class with a repeating assignment, so the data augmentation described in the current docs is not all available; mostly just vertical and horizontal changes in this version from what I can tell.)
A:
The best thing to do is to eliminate the imbalance to begin with. You have 467 positive images, which is more than enough for a model to perform on. So randomly select only 467 negative images from the 20,000 available. This is called undersampling and it works well. Another method is to use both undersampling and image augmentation. Example code to do this is shown below, where I limit the number of images in the negative class to 1000, then create 533 augmented images and add them to the positive class directory. NOTE - CAUTION: the code below will delete images from your negative class directory and add augmented images to the positive class directory, so before you run the code you might wish to create backups of these two directories so your original data is recoverable. In the demo code I had 1263 images in the negative class directory and 467 images in the positive class directory. I tested the code and it works as desired. Now if you're running a notebook on Kaggle, the code below will not work because you cannot change the data in the input directories. So in that case you have to copy the input directories to the Kaggle working directory first, then set the paths to those directories.
!pip install -U albumentations
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import random
import cv2
import albumentations as A
from tqdm import tqdm
def get_augmented_image(image): # this function returns an augmented version of the input img
# see albumentations documentation at URL https://albumentations.ai/docs/getting_started/image_augmentation/
# for information on various type of augmentations available these are examples below
width=int(image.shape[1]*.8)
height=int(image.shape[0]*.8)
transform= A.Compose([
A.HorizontalFlip(p=.5),
A.RandomBrightnessContrast(p=.5),
A.RandomGamma(p=.5),
A.RandomCrop(width=width, height=height, p=.25) ])
return transform(image=image)['image']
negative_limit=1000
negative_dir_path=r'C:\Temp\data\trainup\negative'# path to directory holding the negative images
positive_dir_path=r'C:\Temp\data\trainup\positive' # path to directory holding positive images
negative_file_list=os.listdir(negative_dir_path)
positive_file_list=os.listdir(positive_dir_path)
sampled_negative_file_list=np.random.choice(negative_file_list, size=negative_limit, replace=False)
for f in tqdm(negative_file_list, ncols=120, unit='files', colour='blue', desc='deleting excess neg files'): # this for loop leaves only 1000 images in the negative_image_directory
if f not in sampled_negative_file_list:
fpath=os.path.join(negative_dir_path,f)
os.remove(fpath)
# now create augmented images
delta=negative_limit-len(os.listdir(positive_dir_path)) # this is the number of augmented images to create to balance the dataset
sampled_positive_image_list=np.random.choice(positive_file_list, delta, replace=True) # replace=True because delta>number of positive images
i=0
for f in tqdm(sampled_positive_image_list, ncols=120, unit='files', colour='blue',desc='creating augment images'): # this loop creates augmented images and stores them in the positive image directory
fpath=os.path.join(positive_dir_path,f)
img=cv2.imread(fpath)
dest_file_name='aug' +str(i) + '-' + f # create the filename with a unique numeric prefix
dest_path=os.path.join(positive_dir_path, dest_file_name) # store augmented images with a numeric prefix in the filename
augmented_image=get_augmented_image(img)
cv2.imwrite(dest_path, augmented_image)
i +=1
# when these loops are done, the negative_image_directory will have 1000 images
# and the positive_image_directory will also have 1000 images, 533 of which are augmented images
In your code you have
tf.keras.layers.Dense(2)
change to
tf.keras.layers.Dense(2, activation='softmax')
In model.compile, remove from_logits=True (with a softmax output the model emits probabilities, not logits).
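If you would rather keep all 20,000 negatives and reweight instead, here is a sketch (my suggestion, not part of the original answer) that lets scikit-learn compute balanced class weights from the counts in the question:
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# 467 positive vs ~20,000 negative labels, as described in the question
labels = np.array([0] * 467 + [1] * 20000)
weights = compute_class_weight(class_weight='balanced', classes=np.array([0, 1]), y=labels)
class_weight = dict(enumerate(weights))  # pass to model.fit(..., class_weight=class_weight)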
 | I cannot get over 50% accuracy on my test data in this simple CNN Tensorflow Keras model for image classification | The code is as follows. I have a highly imbalanced dataset for chest X-rays with heart enlargement. The images are separated into a training folder split into positive-for-cardiomegaly and negative-for-cardiomegaly subfolders (467 positive images and ~20,000 negative). Then I have a testing folder with two subfolders (300 positive, 300 negative). Each time I test, I keep getting 50% accuracy with the eval method below. When I look at the predictions, they are always all one class (normally negative); however, if I give the positive values a very high weight (1000+ compared to 1 for the negatives), the model will flip and say they are all instead positive. This leads me to believe it is overfitting, but all my attempts to resolve this have run into issues.
import pandas as pd
import os
import matplotlib.pyplot as plt
import numpy as np
import skimage as sk
import skimage.io as skio
import skimage.transform as sktr
import skimage.filters as skfl
import skimage.feature as skft
import skimage.color as skcol
import skimage.exposure as skexp
import skimage.morphology as skmr
import skimage.util as skut
import skimage.measure as skme
import sklearn.model_selection as le_ms
import sklearn.decomposition as le_de
import sklearn.discriminant_analysis as le_di
import sklearn.preprocessing as le_pr
import sklearn.linear_model as le_lm
import sklearn.metrics as le_me
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
classNames = ["trainpos","trainneg"]
testclassNames = ["testpos", "test"]
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
'./data/trainup/',
labels='inferred',
label_mode='categorical',
class_names=classNames,
color_mode='grayscale',
batch_size=32,
image_size=(256, 256),
shuffle=True,
seed=123,
validation_split=0.2,
subset="training",
interpolation='gaussian',
follow_links=False,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
'./data/trainup/',
labels='inferred',
label_mode='categorical',
class_names=classNames,
color_mode='grayscale',
batch_size=32,
image_size=(256, 256),
shuffle=True,
seed=23,
validation_split=0.2,
subset="validation",
interpolation='gaussian',
follow_links=False,
)
test_ds = tf.keras.preprocessing.image_dataset_from_directory(
'./data/testup/',
labels='inferred',
label_mode='categorical',
class_names=testclassNames,
color_mode='grayscale',
batch_size=32,
image_size=(256, 256),
shuffle=True,
interpolation='gaussian',
follow_links=False,
)
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(256, 256, 1)),
tf.keras.layers.Conv2D(16, 4, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 4, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(2)
])
opt = keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=opt,
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
class_weight = {0: 29, 1: 1}
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=5,
class_weight=class_weight
)
test_loss, test_accuracy = model.evaluate(test_ds)
print("Test Loss: ", test_loss)
print("Test Accuracy: ", test_accuracy)
19/19 [==============================] - 7s 376ms/step - loss: 3.4121 - accuracy: 0.5000
Test Loss: 3.4121198654174805
Test Accuracy: 0.5
I have tried updating the learning rate to values between 0.1 and 0.00001, adding epochs, removing epochs, and changing to SGD for the optimizer. Attempting to unpack the test_ds after subscripting it gave me the error that it is a BatchDataset and can't be subscripted. This shows me that the test_ds is giving me ~19 tensors of 32 images each, except the last one, which has about 25. I then wanted to predict each of these images individually and get the results, because it looked like it was grouping all 32 (or 25 for the last one) together and predicting based on that, but that led me down rabbit holes that I haven't come out of with results. I've tried many other things I can't fully remember, normally tweaking the model itself or adding data augmentation. (I am using TensorFlow 2.3, as this is for a class with a repeating assignment, so the data augmentation described in the current docs is not all available; mostly just vertical and horizontal changes in this version from what I can tell.)
| [
"The best thing to do is to eliminate the imbalance to begin with. You have 467 positive images which is more than enough for a model to perform on. So randomly select only 467 negative images from the 20,000 available. This is called under sampling and it works well. Another method is to use both undersampling and image augmentation. Example code to do this is shown below where I limit the number of images in the negative class to 1000, then create 533 augment images and add them to the positive class directory. NOTE- CAUTION the code below will delete images from your negative class directory and add augmented images to the positive class directory so before you run the code you might wish to create backups of these two directories so your original data is recoverable. In the demo code I had 1263 images in the positive directory and 467 images in the positive class directory. I tested the code and it works as desired. Now if your running a notebook on Kagle the code below will not work because you can not change the data in the input directories. So in that case you have to copy the input directories to the kagle working directory first. Then set the paths to those directories.\n!pip install -U albumentations\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nimport os\nimport numpy as np\nimport random\nimport cv2\nimport albumentations as A\nfrom tqdm import tqdm\n\ndef get_augmented_image(image): # this function returns an augmented version of the input img\n # see albumentations documentation at URL https://albumentations.ai/docs/getting_started/image_augmentation/\n # for information on various type of augmentations available these are examples below\n width=int(image.shape[1]*.8)\n height=int(image.shape[0]*.8)\n transform= A.Compose([\n A.HorizontalFlip(p=.5),\n A.RandomBrightnessContrast(p=.5),\n A.RandomGamma(p=.5),\n A.RandomCrop(width=width, height=height, p=.25) ]) \n return transform(image=image)['image']\n\nnegative_limit=1000\nnegative_dir_path=r'C:\\Temp\\data\\trainup\\negative'# path to directory holding the negative images\npositive_dir_path=r'C:\\Temp\\data\\trainup\\positive' # path to directory holding positive images\nnegative_file_list=os.listdir(negative_dir_path)\npositive_file_list=os.listdir(positive_dir_path)\nsampled_negative_file_list=np.random.choice(negative_file_list, size=negative_limit, replace=False) \nfor f in tqdm(negative_file_list, ncols=120, unit='files', colour='blue', desc='deleting excess neg files'): # this for loop leaves only 1000 images in the negative_image_directory\n if f not in sampled_negative_file_list:\n fpath=os.path.join(negative_dir_path,f) \n os.remove(fpath)\n# now create augmented images\ndelta=negative_limit-len(os.listdir(positive_dir_path)) # this is the number of augmented images to create to balance the dataset\nsampled_positive_image_list=np.random.choice(positive_file_list, delta, replace=True) # replace=True because delta>number of positive images\ni=0\nfor f in tqdm(sampled_positive_image_list, ncols=120, unit='files', colour='blue',desc='creating augment images'): # this loop creates augmented images and stores them in the positive image directory\n fpath=os.path.join(positive_dir_path,f)\n img=cv2.imread(fpath)\n dest_file_name='aug' +str(i) + '-' + f # create the filename with a unique numeric prefix\n dest_path=os.path.join(positive_dir_path, dest_file_name) # store augmented images witha numeric prefix in the filename\n 
augmented_image=get_augmented_image(img)\n cv2.imwrite(dest_path, augmented_image)\n i +=1\n# when these loops are done, the negative_image_directory will have 1000 images\n# and the positive_image_directory will also have 1000 images, 533 of which are augmented images\n\nIn your code you have\ntf.keras.layers.Dense(2)\n\nchange to\ntf.keras.layers.Dense(2, activation='softmax')\n\nIn model.compile remove (from_logits=True)\n"
] | [
0
] | [] | [] | [
"conv_neural_network",
"image_classification",
"overfitting_underfitting",
"python",
"tensorflow"
] | stackoverflow_0074672833_conv_neural_network_image_classification_overfitting_underfitting_python_tensorflow.txt |
Q:
auto built cli tool into an object in python
First, sorry for my bad terminology: I am an electrical engineer, so my coding terms may not be accurate, or may even be far off.
We have a CLI in the company, accessed from the Linux terminal: the usual {command.exe} {plugin} {options}, and you get the output on the terminal screen.
In order to unit test the product, we need it wrapped in a Python class, which is returned as an object to the test environment and eventually opens a process that executes that command.
to build the command, we have a dictionary of the plugin, the subplugin, and the option for each cmd:
self.commands = {
"plugin": ['subplugin', 'subsubplugin', '-a', 'flaga', '-b', 'flagb'],...
and we built a function for every command we want, from the plugin list extracted from the dict above
I am looking for a better approach that auto-builds the tool entirely, sort of like what the OS shell does for command prediction.
I am assuming that would involve the setattr built-in on classes and things like that.
at the end of all this, I expect to access the plugin like this: cli.plugin.subplugin.subsubplugin(arg,arg,arg)
and that would generate a command cli, or at least the list above so I could inject it into the existing infra.
can anyone help, please?
Thanks in advance.
I am looking more for guidance than for someone to fix what I tried.
A:
I found my answer; the code below worked for me to achieve what I was looking for.
Thanks to the commenters.
import re
import subprocess
PKG_NAME = "sudo mycli"
PKG_PLUGIN_START = "The following are all installed plugin extensions:" # this is the message before the commands list in the cli help
PKG_PLUGIN_END = f"See 'mycli <plugin> help' for more information on a plugin" # hit is the message after the commands list in the cli help
PKG_CMD_START = "The following are all implemented sub-commands:"
PKG_CMD_END = "See 'mycli help <command>' for more information on a specific command"
PLUGIN_CMD_START = PKG_CMD_START
PLUGIN_CMD_END = "See 'mycli <plugin> help <command>' for more information on a specific command"
def get_help(s):
s += " help"
return subprocess.getoutput([s])
def get_plugin_list(s, start, end):
s = '\n'.join(l.strip() for l in s.splitlines() if l)
res = re.search(f'{start}([\s\S]*){end}', s) # regex that matches everything between both strings
if not res:
raise ValueError("Couldn't find plugin list in string")
return [l.split(' ')[0] for l in res.group(1).strip().splitlines()] # remove the unnecessary text and return the plugins as a list
class CMD():
def __init__(self, name, parent_plugin_name=None, *args):
self.args = args
self.pkg_name = PKG_NAME
self.parent_plugin_name = parent_plugin_name
self.name = name
def __call__(self, *args, **kwargs):
if self.parent_plugin_name:
command = " ".join([self.pkg_name, self.parent_plugin_name])
else:
command = self.pkg_name
command = " ".join([command, self.name, *args, " "])
command += " ".join([f"-{each[0]}={each[1]}" for each in list(kwargs.items())])
return subprocess.getoutput(command)
class Plugin():
def __init__(self, name, parent_pkg_name):
self.name = name
self.parent_pkg_name = PKG_NAME
plugin_cmd_start = PLUGIN_CMD_START
plugin_cmd_end = PLUGIN_CMD_END.replace("<plugin>", self.name)
for cmd in get_plugin_list(get_help(f"{self.parent_pkg_name} {self.name}"), plugin_cmd_start, plugin_cmd_end):
setattr(self, cmd, CMD(cmd, parent_plugin_name=self.name))
class Package():
def __init__(self, name, root=True):
self.name = name
if root:
self.name = "sudo " + self.name
self.command_string = f"{self.name}"
for cmd in get_plugin_list(get_help(self.name), PKG_CMD_START, PKG_CMD_END):
setattr(self, cmd, CMD(cmd))
for plugin in get_plugin_list(get_help(self.name), PKG_PLUGIN_START, PKG_PLUGIN_END):
setattr(self, plugin, Plugin(plugin, parent_pkg_name=self.name))
if __name__ == "__main__":
mycli_tool = Package("mycli")
print()
print(mycli_tool.cmd())
print()
print(mycli_tool.system.get_disk_usage("-x0"))
print()
print(mycli_tool.system.get_disk_usage(x=0))
print()
print(mycli_tool.system.get_disk_usage(json=1))
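One caveat worth adding (my note, not from the original answer): subprocess.getoutput runs the command through the shell, so any untrusted argument should be quoted, for example with shlex:
import shlex

args = ["mycli", "system", "get_disk_usage", "-x=0"]
command = " ".join(shlex.quote(a) for a in args)  # safe to hand to subprocess.getoutput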
 | auto built cli tool into an object in python | First, sorry for my bad terminology: I am an electrical engineer, so my coding terms may not be accurate, or may even be far off.
We have a CLI in the company, accessed from the Linux terminal: the usual {command.exe} {plugin} {options}, and you get the output on the terminal screen.
In order to unit test the product, we need it wrapped in a Python class, which is returned as an object to the test environment and eventually opens a process that executes that command.
to build the command, we have a dictionary of the plugin, the subplugin, and the option for each cmd:
self.commands = {
"plugin": ['subplugin', 'subsubplugin', '-a', 'flaga', '-b', 'flagb'],...
and we built a function for every command we want, from the plugin list extracted from the dict above
I am looking for a better approach that auto-builds the tool entirely, sort of like what the OS shell does for command prediction.
I am assuming that would involve the setattr built-in on classes and things like that.
at the end of all this, I expect to access the plugin like this: cli.plugin.subplugin.subsubplugin(arg,arg,arg)
and that would generate a command cli, or at least the list above so I could inject it into the existing infra.
can anyone help, please?
Thanks in advance.
I am looking more for guidance than for someone to fix what I tried.
| [
"I found my answer, this code worked for me yo achieve what I was looking for.\nthanks for the commenters.\nimport re\nimport subprocess\n\nPKG_NAME = \"sudo mycli\"\nPKG_PLUGIN_START = \"The following are all installed plugin extensions:\" # this is the message before the commands list in the cli help\nPKG_PLUGIN_END = f\"See 'mycli <plugin> help' for more information on a plugin\" # hit is the message after the commands list in the cli help\nPKG_CMD_START = \"The following are all implemented sub-commands:\"\nPKG_CMD_END = \"See 'mycli help <command>' for more information on a specific command\"\nPLUGIN_CMD_START = PKG_CMD_START\nPLUGIN_CMD_END = \"See 'mycli <plugin> help <command>' for more information on a specific command\"\n\n\ndef get_help(s):\n s += \" help\"\n return subprocess.getoutput([s])\n\n\ndef get_plugin_list(s, start, end):\n s = '\\n'.join(l.strip() for l in s.splitlines() if l)\n res = re.search(f'{start}([\\s\\S]*){end}', s) # regex that matches everything between both strings\n\n if not res:\n raise ValueError(\"Couldn't find plugin list in string\")\n\n return [l.split(' ')[0] for l in res.group(1).strip().splitlines()] # remove the unnecessary text and return the plugins as a list\n\n\nclass CMD():\n def __init__(self, name, parent_plugin_name=None, *args):\n self.args = args\n self.pkg_name = PKG_NAME\n self.parent_plugin_name = parent_plugin_name\n self.name = name\n\n def __call__(self, *args, **kwargs):\n if self.parent_plugin_name:\n command = \" \".join([self.pkg_name, self.parent_plugin_name])\n else:\n command = self.pkg_name\n command = \" \".join([command, self.name, *args, \" \"])\n command += \" \".join([f\"-{each[0]}={each[1]}\" for each in list(kwargs.items())])\n return subprocess.getoutput(command)\n\n\nclass Plugin():\n\n def __init__(self, name, parent_pkg_name):\n self.name = name\n self.parent_pkg_name = PKG_NAME\n plugin_cmd_start = PLUGIN_CMD_START\n plugin_cmd_end = PLUGIN_CMD_END.replace(\"<plugin>\", self.name)\n for cmd in get_plugin_list(get_help(f\"{self.parent_pkg_name} {self.name}\"), plugin_cmd_start, plugin_cmd_end):\n setattr(self, cmd, CMD(cmd, parent_plugin_name=self.name))\n\n\nclass Package():\n def __init__(self, name, root=True):\n self.name = name\n if root:\n self.name = \"sudo \" + self.name\n self.command_string = f\"{self.name}\"\n for cmd in get_plugin_list(get_help(self.name), PKG_CMD_START, PKG_CMD_END):\n setattr(self, cmd, CMD(cmd))\n for plugin in get_plugin_list(get_help(self.name), PKG_PLUGIN_START, PKG_PLUGIN_END):\n setattr(self, plugin, Plugin(plugin, parent_pkg_name=self.name))\n\n\nif __name__ == \"__main__\":\n mycli_tool = Package(\"mycli\")\n print()\n print(mycli_tool.cmd())\n print()\n print(mycli_tool.system.get_disk_usage(\"-x0\"))\n print()\n print(mycli_tool.system.get_disk_usage(x=0))\n print()\n print(mycli_tool.system.get_disk_usage(json=1))\n\n"
] | [
0
] | [] | [] | [
"api",
"auto_generate",
"command_line_interface",
"python",
"python_3.x"
] | stackoverflow_0074612528_api_auto_generate_command_line_interface_python_python_3.x.txt |
Q:
Subprocess not opening files
I am writing a program to open other programs for me. os.system() would always freeze my app, so I switched to subprocess. I did some research, and this is how a tutorial told me to open a program. I have only replaced the literal path with my variable, which contains the path. After I run this, only a command prompt window opens and nothing else. How can I fix this?
Code:
from subprocess import Popen
filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe"
Popen(["cmd", "/c", "start", filename1])
A:
You need to create a single string with double quotes around it. In Python terms, you basically want r'"c:\torture\thanks Microsoft"' where the single quotes and the r create a Python string, which contains the file name inside double quotes.
from subprocess import Popen
filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe"
Popen(["cmd", "/c", "start", f'"{filename1}"'])
Quoting with CMD is always bewildering; maybe think about ways you can avoid it (or Windows altogether, if you have a choice).
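For example, on Windows os.startfile sidesteps cmd quoting entirely (a sketch; it launches the file with its associated handler):
import os

filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe"
os.startfile(filename1)  # no shell involved, so no quoting rules to fight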
A:
import subprocess
# a raw string keeps the backslashes from being treated as escape sequences
filename1 = r"C:\Program Files\Google\Chrome\Application\chrome.exe"
subprocess.Popen(filename1)
| Subprocess not opening files | I am writing a program to open other programs for me. os.system() would always freeze my app, so I switched to subprocess. I did some research, and this is how a tutorial told me to open a program. I have only replaced the literal path with my variable, which contains the path. After I run this, only a command prompt window opens and nothing else. How can I fix this?
Code:
from subprocess import Popen
filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe"
Popen(["cmd", "/c", "start", filename1])
| [
"You need to create a single string with double quotes around it. In Python terms, you basically want r'\"c:\\torture\\thanks Microsoft\"' where the single quotes and the r create a Python string, which contains the file name inside double quotes.\nfrom subprocess import Popen\n\nfilename1 = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"\nPopen([\"cmd\", \"/c\", \"start\", f'\"{filename1}\"'])\n\nQuoting with CMD is always bewildering; maybe think about ways you can avoid it (or Windows altogether, if you have a choice).\n",
"import subprocess\nfilename1 = \"C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe\"\nsubprocess.Popen(filename1)\n\n"
] | [
0,
0
] | [] | [] | [
"popen",
"python",
"python_3.x",
"subprocess"
] | stackoverflow_0074181574_popen_python_python_3.x_subprocess.txt |
Q:
How to open jupyter notebook from Windows 10 task bar
Through some wizardry I cannot recall, I managed to install and implement Jupyter Notebook with an icon that opens Jupyter directly in the browser.
I am occasionally asked how I did this. However, and slightly embarrassingly, I cannot remember how I did it and am unable to help. I cannot seem to recreate this Jupyter icon in any other setup.
Also, in attempting to recreate this Icon, I somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt
What is the difference between the two? Which one should I remove?
A:
I somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt
That is standard. The first, Anaconda Prompt, will open the legacy cmd configured for conda. The second will open a PowerShell configured for conda. So just keep both and use the one you are more comfortable with.
How to open jupyter notebook from Windows 10 task bar
Simply search for jupyter in the start menu and select Pin To Taskbar
Creating it manually
In case the above does not work, then you can manually create a shortcut and pin it to the taskbar. For that, we will need two paths, which for me are these:
pathBase=C:\Users\FlyingTeller\miniconda3 #main folder of miniconda (or anaconda)
pathEnv=C:\Users\FlyingTeller\miniconda3\envs\py37 #Folder of the environment where jupyter notebook is installed
Then you do the following steps:
Right Click on Desktop->New->Shortcut, enter as target path:
<pathBase>\python.exe <pathBase>\cwp.py <pathEnv> <pathEnv>\python.exe <pathEnv>\Scripts\jupyter-notebook-script.py "%USERPROFILE%/"
replacing the paths with the ones from above. Save the shortcut and then do Right Click->Properties.
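With the example paths above, the filled-in target would look like this (a sketch; adjust to your own install):
C:\Users\FlyingTeller\miniconda3\python.exe C:\Users\FlyingTeller\miniconda3\cwp.py C:\Users\FlyingTeller\miniconda3\envs\py37 C:\Users\FlyingTeller\miniconda3\envs\py37\python.exe C:\Users\FlyingTeller\miniconda3\envs\py37\Scripts\jupyter-notebook-script.py "%USERPROFILE%/"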
Now you can change the Start In directory to wherever you want the notebook to start. Additionally, you can change the icon to the jupyter icon, which is in
<pathEnv>\Menu
Now you have
A shortcut to start the notebook on your desktop
The possibility to simply do Right Click-> Pin to Taskbar for that Shortcut
A:
Search for Anaconda Navigator on your computer, then right-click and select "Open file location".
[screenshot: Windows search for Anaconda]
In the folder that opens you can find shortcuts of the programs installed via Anaconda. You can copy and paste them anywhere you want.
[screenshot: shortcuts folder]
| How to open jupyter notebook from Windows 10 task bar | Through some wizardry I cannot recall, I managed to install and implement Jupyter Notebook with an icon that opens Jupyter directly in the browser.
I am occasionally asked how I did this. However, and slightly embarrassingly, I cannot remember how I did it and am unable to help. I cannot seem to recreate this Jupyter icon in any other setup.
Also, in attempting to recreate this Icon, I somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt
What is the difference between the two? Which one should I remove?
| [
"\nI somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt\n\nThat is standard. The first Anaconda Prompt, will open the legacy cmd configured for conda. The second will open a powershell configured for conda. SO just keep both and use the one you are more comfortable with.\n\nHow to open jupyter notebook from Windows 10 task bar\n\nSimply search for jupyter in the start menu and select Pin To Taskbar\n\nCreating it manually\nIn case the above does not work, then you can manually create a shortcut and pin it to the taskbar. For that, we will need two paths, which for me are these:\npathBase=C:\\Users\\FlyingTeller\\miniconda3 #main folder of miniconda (or anaconda)\npathEnv=C:\\Users\\FlyingTeller\\miniconda3\\envs\\py37 #Folder of the environment where jupyter notebook is installed\n\nThen you do the following steps:\nRight Click on Desktop->New->Shortcut, enter as target path:\n<pathBase>\\python.exe <pathBase>\\cwp.py <pathEnv> <pathEnv>\\python.exe <pathEnv>\\Scripts\\jupyter-notebook-script.py \"%USERPROFILE%/\n\nreplacing the paths with the ones from above. Save the shortcut and then do Right Click->Properties.\nNow you can change the Start In directory to wherever you want the notebook to start. Additionally, you can change the icon to the jupyter icon, which is in\n<pathEnv>\\Menu\n\nNow you have\n\nA shortcut to start the notebook on your desktop\nThe possibility to simply do Right Click-> Pin to Taskbar for that Shortcut\n\n",
"Search for Anaconda Navigator in your computer then Right Click and Select \"Open file location\".\nwindows search for anaconda\nIn the folder that is opened you can find shortcuts of programs that are installed via anaconda. You can copy and paste them anywhere you want.\nshortcuts folder\n"
] | [
2,
0
] | [] | [] | [
"anaconda",
"jupyter_notebook",
"miniconda",
"powershell",
"python"
] | stackoverflow_0068420377_anaconda_jupyter_notebook_miniconda_powershell_python.txt |
Q:
how to merge two json data by mapping
I have two JSON data sets:
json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]
I want to merge both, and the needed output is:
json_final = [{'purchasedPerson__id': 2, 'credit': 3000, 'debit': 0},
{'purchasedPerson__id': 4, 'credit': 5000, 'debit': 2000},
{'purchasedPerson__id': 1, 'credit': 0, 'debit': 8526}]
How can the above be done?
A:
This is a case where pandas can be very convenient. By converting to dataframes and merging on "purchasedPerson__id", you will get the desired output:
import pandas as pd
json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]
df1 = pd.DataFrame(json_1)
df2 = pd.DataFrame(json_2)
df_out = pd.merge(df1, df2, on="purchasedPerson__id", how="outer").fillna(0)
df_out.to_dict(orient="records")
Output:
[{'purchasedPerson__id': 2, 'credit': 3000.0, 'debit': 0.0}, {'purchasedPerson__id': 4, 'credit': 5000.0, 'debit': 2000.0}, {'purchasedPerson__id': 1, 'credit': 0.0, 'debit': 8526.0}]
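If you prefer integer values (merging with fillna promotes the columns to float), you can cast them back, for example:
df_out[["credit", "debit"]] = df_out[["credit", "debit"]].astype(int)
df_out.to_dict(orient="records")
# [{'purchasedPerson__id': 2, 'credit': 3000, 'debit': 0}, ...]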
A:
json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]
# create a dictionary for the merged data
data = {}
# loop through each JSON and add the data to the dictionary
for j in json_1:
data[j['purchasedPerson__id']] = {'credit': j['credit'], 'debit': 0}
for j in json_2:
if j['purchasedPerson__id'] in data:
data[j['purchasedPerson__id']] = {'credit': data[j['purchasedPerson__id']]['credit'], 'debit': j['debit']}
else:
data[j['purchasedPerson__id']] = {'credit': 0, 'debit': j['debit']}
# convert the dictionary to a list
json_final = []
for key, value in data.items():
json_final.append({'purchasedPerson__id': key, 'credit': value['credit'], 'debit': value['debit']})
print(json_final)
A:
# Initialize the final JSON array
json_final = []
# Loop through the first JSON data set
for item in json_1:
# Initialize the final JSON object for this item
final_item = {'purchasedPerson__id': item['purchasedPerson__id'], 'credit': item['credit'], 'debit': 0}
# Loop through the second JSON data set
for item2 in json_2:
# If the id matches, update the final item with the debit value
if item['purchasedPerson__id'] == item2['purchasedPerson__id']:
final_item['debit'] = item2['debit']
# Add the final item to the final JSON array
json_final.append(final_item)
# Loop through the second JSON data set
for item in json_2:
# Initialize a flag to keep track of whether the item already exists in the final JSON array
exists = False
# Loop through the final JSON array
for final_item in json_final:
# If the id matches, set the exists flag to True
if final_item['purchasedPerson__id'] == item['purchasedPerson__id']:
exists = True
# If the item does not exist in the final JSON array, add it with credit and debit values of 0
if not exists:
json_final.append({'purchasedPerson__id': item['purchasedPerson__id'], 'credit': 0, 'debit': item['debit']})
| how to merge two json data by mapping | I have two JSON data sets:
json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]
I want to merge both, and the needed output is:
json_final = [{'purchasedPerson__id': 2, 'credit': 3000, 'debit': 0},
{'purchasedPerson__id': 4, 'credit': 5000, 'debit': 2000},
{'purchasedPerson__id': 1, 'credit': 0, 'debit': 8526}]
How can the above be done?
| [
"This is a case where pandascan be very convenient. By converting to dataframes and merging on \"purchasedPerson__id\", you will get the desired output:\nimport pandas as pd\n\njson_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]\njson_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]\ndf1 = pd.DataFrame(json_1)\ndf2 = pd.DataFrame(json_2)\n\ndf_out = pd.merge(df1, df2, on=\"purchasedPerson__id\", how=\"outer\").fillna(0)\ndf_out.to_dict(orient=\"records\")\n\nOutput:\n[{'purchasedPerson__id': 2, 'credit': 3000.0, 'debit': 0.0}, {'purchasedPerson__id': 4, 'credit': 5000.0, 'debit': 2000.0}, {'purchasedPerson__id': 1, 'credit': 0.0, 'debit': 8526.0}]\n\n",
"json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]\njson_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]\n\n# create a dictionary for the merged data\ndata = {}\n\n# loop through each JSON and add the data to the dictionary\nfor j in json_1:\n data[j['purchasedPerson__id']] = {'credit': j['credit'], 'debit': 0}\n\nfor j in json_2:\n if j['purchasedPerson__id'] in data:\n data[j['purchasedPerson__id']] = {'credit': data[j['purchasedPerson__id']]['credit'], 'debit': j['debit']}\n else:\n data[j['purchasedPerson__id']] = {'credit': 0, 'debit': j['debit']}\n\n# convert the dictionary to a list\njson_final = []\nfor key, value in data.items():\n json_final.append({'purchasedPerson__id': key, 'credit': value['credit'], 'debit': value['debit']})\n\nprint(json_final)\n\n",
"# Initialize the final JSON array\njson_final = []\n\n# Loop through the first JSON data set\nfor item in json_1:\n # Initialize the final JSON object for this item\n final_item = {'purchasedPerson__id': item['purchasedPerson__id'], 'credit': item['credit'], 'debit': 0}\n # Loop through the second JSON data set\n for item2 in json_2:\n # If the id matches, update the final item with the debit value\n if item['purchasedPerson__id'] == item2['purchasedPerson__id']:\n final_item['debit'] = item2['debit']\n # Add the final item to the final JSON array\n json_final.append(final_item)\n\n# Loop through the second JSON data set\nfor item in json_2:\n # Initialize a flag to keep track of whether the item already exists in the final JSON array\n exists = False\n # Loop through the final JSON array\n for final_item in json_final:\n # If the id matches, set the exists flag to True\n if final_item['purchasedPerson__id'] == item['purchasedPerson__id']:\n exists = True\n # If the item does not exist in the final JSON array, add it with credit and debit values of 0\n if not exists:\n json_final.append({'purchasedPerson__id': item['purchasedPerson__id'], 'credit': 0, 'debit': item['debit']})\n\n"
] | [
1,
1,
0
] | [] | [] | [
"json",
"python",
"python_jsons",
"python_jsonschema"
] | stackoverflow_0074673859_json_python_python_jsons_python_jsonschema.txt |
Q:
How to convolution integration(Duhamel Integration) by python?
Hi, I'm studying structural dynamics.
I want to write code for Duhamel integration, which is a kind of convolution integral.
If the initial conditions are y(0)=0 and y'(0)=0,
Duhamel Integration is like this.
[image: Duhamel integral formula]
Using Ti Nspire
I solved this problem with my TI Nspire software. The result is as follows.
[image: TI Nspire result]
Its response y at t=1 is -0.006238.
Using python(sympy)
I tried to solve this problem using Python (Jupyter Notebook).
But I couldn't solve the problem.
I wrote the code like this.
from sympy import *
t, tau=symbols('t, tau')
m=6938.78
k=379259
wn=sqrt(k/m)
wd=wn*sqrt(1-0.05**2)
eq1=(900*sin(5.3*tau))
eq2=exp(-0.05*wn*(t-tau))
eq3=sin(wd*(t-tau))
y0=1/(m*wd)*integrate(eq1*eq2*eq3,(tau,0,t))
y0
But I couldn't get the result.
[image: sympy output]
Is there another way to solve this problem?
A:
Use the unevaluated Integral and then substitute in a value for t and use the doit method:
...
>>> y0=1/(m*wd)*Integral(eq1*eq2*eq3,(tau,0,t))
>>> y0.subs(t,1).doit()
-0.00623772329557205
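Putting that together with the definitions from the question gives a complete sketch:
from sympy import symbols, sqrt, sin, exp, Integral

t, tau = symbols('t, tau')
m = 6938.78
k = 379259
wn = sqrt(k/m)
wd = wn*sqrt(1 - 0.05**2)

eq1 = 900*sin(5.3*tau)         # forcing term
eq2 = exp(-0.05*wn*(t - tau))  # damping envelope
eq3 = sin(wd*(t - tau))        # damped oscillation

# keep the integral unevaluated, substitute t, then evaluate
y0 = 1/(m*wd)*Integral(eq1*eq2*eq3, (tau, 0, t))
print(y0.subs(t, 1).doit())    # -0.00623772329557205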
| How to convolution integration(Duhamel Integration) by python? | Hi, I'm studying structural dynamics.
I want to write code for Duhamel integration, which is a kind of convolution integral.
If the initial conditions are y(0)=0 and y'(0)=0,
Duhamel Integration is like this.
[image: Duhamel integral formula]
Using Ti Nspire
I solved this problem with my TI Nspire software. The result is as follows.
[image: TI Nspire result]
Its response y at t=1 is -0.006238.
Using python(sympy)
I tried to solve this problem using Python (Jupyter Notebook).
But I couldn't solve the problem.
I wrote the code like this.
from sympy import *
t, tau=symbols('t, tau')
m=6938.78
k=379259
wn=sqrt(k/m)
wd=wn*sqrt(1-0.05**2)
eq1=(900*sin(5.3*tau))
eq2=exp(-0.05*wn*(t-tau))
eq3=sin(wd*(t-tau))
y0=1/(m*wd)*integrate(eq1*eq2*eq3,(tau,0,t))
y0
But I couldn't get the result.
[image: sympy output]
Is there another way to solve this problem?
| [
"Use the unevaluated Integral and then substitute in a value for t and use the doit method:\n...\n>>> y0=1/(m*wd)*Integral(eq1*eq2*eq3,(tau,0,t))\n>>> y0.subs(t,1).doit()\n-0.00623772329557205\n\n"
] | [
1
] | [] | [] | [
"convolution",
"integrate",
"python",
"response",
"sympy"
] | stackoverflow_0074672385_convolution_integrate_python_response_sympy.txt |
Q:
My Django Admin input doesn't allow me to add more than one image
I'm trying to make a Django model with Django REST Framework. I want it to allow me to upload one or more images through the same input.
MODELS:
from django.db import models
from datetime import datetime
from apps.category.models import Category
from django.conf import settings
class Product(models.Model):
code = models.CharField(max_length=255, null=True)
name = models.CharField(max_length=255)
image = models.ImageField(upload_to='photos/%Y/%m/', blank = True, null=True, default='')
description = models.TextField()
caracteristicas = models.JSONField(default=dict)
price = models.DecimalField(max_digits=6, decimal_places=2)
compare_price = models.DecimalField(max_digits=6, decimal_places=2)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
quantity = models.IntegerField(default=0)
sold = models.IntegerField(default=0)
date_created = models.DateTimeField(default=datetime.now)
def __str__(self):
return self.name
class ProductImage(models.Model):
product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name = 'images')
image = models.ImageField(upload_to='photos/%Y/%m/', default="", null=True, blank=True)
SERIALIZER:
from rest_framework import serializers
from .models import Product, ProductImage
class ProductImageSerializer(serializers.ModelSerializer):
class Meta:
model = ProductImage
fields = ["id", "product", "image"]
class ProductSerializer(serializers.ModelSerializer):
images = ProductImageSerializer(many=True, read_only=True)
uploaded_images = serializers.ListField(
child = serializers.ImageField(max_length = 1000000, allow_empty_file = False, use_url = False),
write_only=True
)
class Meta:
model = Product
fields = [
'id',
'code',
'name',
'description',
'caracteristicas',
'price',
'compare_price',
'category',
'quantity',
'sold',
'date_created',
'images',
'uploaded_images'
]
def create(self, validated_data):
uploaded_images = validated_data.pop("uploaded_images")
product = Product.objects.create(**validated_data)
for image in uploaded_images:
newproduct_image = ProductImage.objects.create(product=product, image=image)
return product
I would simply like to know how to make the following input field allow me to upload more than one image:
[Reference image of the input]
Thank you very much.
A:
You didn't post your admin.py, but my guess is that you also need to register your ProductImage model as an inline, since you already use a one-to-many relationship between Product and ProductImage:
In your admin.py:
class ProductImageAdmin(admin.StackedInline):
model = ProductImage
class ProductAdmin(admin.ModelAdmin):
inlines = [ProductImageAdmin]
class Meta:
model = Product
admin.site.register(ProductImage)
admin.site.register(Product, ProductAdmin)
You can also check this SO answer out for more details.
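If you also want several empty upload rows shown by default, the inline's extra attribute controls that (a sketch):
class ProductImageAdmin(admin.StackedInline):
    model = ProductImage
    extra = 3  # show three blank image forms by default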
Hope that helps :)
| My Django Admin input doesn't allow me to add more than one image | I'm trying to make a Django model with Django REST Framework. I want it to allow me to upload one or more images through the same input.
MODELS:
from django.db import models
from datetime import datetime
from apps.category.models import Category
from django.conf import settings
class Product(models.Model):
code = models.CharField(max_length=255, null=True)
name = models.CharField(max_length=255)
image = models.ImageField(upload_to='photos/%Y/%m/', blank = True, null=True, default='')
description = models.TextField()
caracteristicas = models.JSONField(default=dict)
price = models.DecimalField(max_digits=6, decimal_places=2)
compare_price = models.DecimalField(max_digits=6, decimal_places=2)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
quantity = models.IntegerField(default=0)
sold = models.IntegerField(default=0)
date_created = models.DateTimeField(default=datetime.now)
def __str__(self):
return self.name
class ProductImage(models.Model):
product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name = 'images')
image = models.ImageField(upload_to='photos/%Y/%m/', default="", null=True, blank=True)
SERIALIZER:
from rest_framework import serializers
from .models import Product, ProductImage
class ProductImageSerializer(serializers.ModelSerializer):
class Meta:
model = ProductImage
fields = ["id", "product", "image"]
class ProductSerializer(serializers.ModelSerializer):
images = ProductImageSerializer(many=True, read_only=True)
uploaded_images = serializers.ListField(
child = serializers.ImageField(max_length = 1000000, allow_empty_file = False, use_url = False),
write_only=True
)
class Meta:
model = Product
fields = [
'id',
'code',
'name',
'description',
'caracteristicas',
'price',
'compare_price',
'category',
'quantity',
'sold',
'date_created',
'images',
'uploaded_images'
]
def create(self, validated_data):
uploaded_images = validated_data.pop("uploaded_images")
product = Product.objects.create(**validated_data)
for image in uploaded_images:
newproduct_image = ProductImage.objects.create(product=product, image=image)
return product
I would simply like to know how to make the following input field allow me to upload more than one image:
[Reference image of the input]
Thank you very much.
| [
"You didn't post your admin.py but my guess is that you also need to register your ProductImage model as an inlines since you already use a One2Many relationship between Product and ProductImage:\nIn your admin.py:\nclass ProductImageAdmin(admin.StackedInline):\n model = ProductImage\n\nclass ProductAdmin(admin.ModelAdmin):\n inlines = [ProductImageAdmin]\n\n class Meta:\n model = Product\n\n\nadmin.site.register(ProductImage)\nadmin.site.register(Product, ProductAdmin)\n\nYou can also check this SO answer out for more details.\nHope that helps :)\n"
] | [
0
] | [] | [] | [
"backend",
"django",
"django_admin",
"django_rest_framework",
"python"
] | stackoverflow_0074672857_backend_django_django_admin_django_rest_framework_python.txt |
Q:
Difficulty importing ThemedTK from ttkthemes
I'm trying to import ThemedTK from ttkthemes in Python3 but am getting the following error message:
line 4, in
from ttkthemes import Themed_TK
ImportError: cannot import name 'Themed_TK' from 'ttkthemes'
Any ideas?
from tkinter import filedialog
from tkinter import ttk
from ttkthemes import ThemedTK
from reportlab.lib.units import mm
from draw import bellGen
root = ThemedTK()
A:
Apparently it's ThemedTk. With lowercase "k".
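A minimal sketch (the "arc" theme here is just an example; any theme shipped with ttkthemes works):
from ttkthemes import ThemedTk

root = ThemedTk(theme="arc")
root.mainloop()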
| Difficulty importing ThemedTK from ttkthemes | I'm trying to import ThemedTK from ttkthemes in Python3 but am getting the following error message:
line 4, in
from ttkthemes import Themed_TK
ImportError: cannot import name 'Themed_TK' from 'ttkthemes'
Any ideas?
from tkinter import filedialog
from tkinter import ttk
from ttkthemes import ThemedTK
from reportlab.lib.units import mm
from draw import bellGen
root = ThemedTK()
| [
"Apparently it's ThemedTk. With lowercase \"k\".\n"
] | [
0
] | [] | [] | [
"python",
"ttk"
] | stackoverflow_0068376097_python_ttk.txt |
Q:
Dynamically create matrix from a vectors in numpy
I'm trying to create a matrix of shape Nx3 where N is not known at first.
This is what I'm basically trying to do:
F = np.array([[],[],[]])
for contact in contacts:
xp,yp,theta = contact
# Create vectors for points and normal
P = [xp, yp, 0]
N = [np.cos(theta), np.sin(theta), 0]
# Calculate vector product
cross_PN = np.cross(P,N)
# f = [mz, fx, fi]
mz = cross_PN[2]
fx = N[0]
fy = N[1]
f = np.array([mz, fx, fy])
F = np.vstack([F, f])
But this code doesn't work.
I can do a similar thing in MATLAB very easily, but that is not the case in Python using numpy.
Any help is greatly appreciated.
Thank you
I would like to create a matrix by adding new rows, but in the beginning the matrix is empty.
That is why I receive the error:
"along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3"
A:
The error you are seeing is caused by trying to stack empty arrays together using np.vstack(). When you create an empty array with np.array([[],[],[]]), the resulting array has shape (3, 0), which means that it has 3 rows but no columns. When you then try to stack this empty array with a length-3 row using np.vstack(), the shapes (3, 0) and (1, 3) cannot be joined, because the arrays disagree along dimension 1, and this is why you are seeing the error "along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3".
To fix this issue, you can initialize the F array with the correct number of rows and columns before you start the loop. For example, you can create an empty array with shape (0, 3) like this:
F = np.empty((0, 3))
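From there each new row can be stacked as in the question, although collecting the rows in a plain list and converting once at the end is usually simpler and faster (a sketch):
rows = []
for contact in contacts:
    xp, yp, theta = contact
    P = [xp, yp, 0]
    N = [np.cos(theta), np.sin(theta), 0]
    mz = np.cross(P, N)[2]         # z-component of the cross product
    rows.append([mz, N[0], N[1]])  # f = [mz, fx, fy]
F = np.array(rows)                 # shape (len(contacts), 3)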
| Dynamically create matrix from a vectors in numpy | I'm trying to create a matrix of shape Nx3 where N is not known at first.
This is what I'm basically trying to do:
F = np.array([[],[],[]])
for contact in contacts:
xp,yp,theta = contact
# Create vectors for points and normal
P = [xp, yp, 0]
N = [np.cos(theta), np.sin(theta), 0]
# Calculate vector product
cross_PN = np.cross(P,N)
# f = [mz, fx, fi]
mz = cross_PN[2]
fx = N[0]
fy = N[1]
f = np.array([mz, fx, fy])
F = np.vstack([F, f])
But this code doesn't work.
I can do a similar thing in MATLAB very easily, but that is not the case in Python using numpy.
Any help is greatly appreciated.
Thank you
I would like to create a matrix by adding new rows, but in the beginning the matrix is empty.
That is why I receive the error:
"along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3"
| [
"The error you are seeing is caused by trying to stack empty arrays together using np.vstack(). When you create an empty array with np.array([[],[],[]]), the resulting array has shape (3, 0), which means that it has 3 rows but no columns. When you try to stack this empty array with another array using np.vstack(), the resulting array has shape (3, 0), which means that it still has 3 rows but no columns, and this is why you are seeing the error \"along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3\".\nTo fix this issue, you can initialize the F array with the correct number of rows and columns before you start the loop. For example, you can create an empty array with shape (0, 3) like this:\nF = np.empty((0, 3))\n\n"
] | [
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074673656_numpy_python.txt |
Q:
How to send a DM to a user using just their user id
I would like to send a dm to a user just by using their user id that I copied from their profile.
This is the code that I made, but it didn't work.
@client.command()
async def dm(userID, *, message):
user = client.get_user(userID)
await user.send(message)
This is the error that appeared:
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: AttributeError: 'NoneType' object has no attribute 'send'
A:
All you have to do is change the userID argument to user: discord.User. That argument will accept user mentions (@user), usernames (user), and ids (904360748455698502). The full code would now be:
@client.command()
async def dm(user: discord.User, *, message):
channel = await user.create_dm()
await channel.send(message)
A:
Your code is partially correct. However, from the discord.py API reference, a User object is not messageable, i.e. you cannot use the send() function directly on the User itself.
To solve this problem, we need to first create a DMChannel with the user, and then send a message into the DMChannel.
Here is the working code:
@client.command()
async def dm(userID: int, *, message):
user = client.get_user(userID)
    dmChannel = await user.create_dm()  # create_dm is a coroutine
    await dmChannel.send(message)
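Note that client.get_user only looks in the bot's cache and returns None for unknown users, which is exactly what triggers the AttributeError in the question. Fetching from the API instead avoids this (a sketch):
user = await client.fetch_user(userID)  # API call instead of cache lookup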
A:
You can convert the user id to a user object, then create a DM and send the message as follows:
@client.command()
async def dm(ctx,user:discord.User, *, message = None):
    if message is None:
        await ctx.send("Enter the message to be sent")
        return
try:
channel = await user.create_dm()
await channel.send(message)
except discord.Forbidden:
await ctx.send("could not send the message")
You should use a try block, because some users (like me :)) do not allow DMs.
As the docs say, it raises Forbidden when the bot doesn't have the required permissions.
| How to send a DM to a user using just their user id | I would like to send a dm to a user just by using their user id that I copied from their profile.
This is the code that I made, but it didn't work.
@client.command()
async def dm(userID, *, message):
user = client.get_user(userID)
await user.send(message)
This is the error that appeared:
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: AttributeError: 'NoneType' object has no attribute 'send'
| [
"All you have to do is change the userID argument to user: discord.User. That argument will accept user mentions (@user), usernames (user), and ids (904360748455698502). The full code would now be:\n@client.command()\nasync def dm(user: discord.User, *, message):\n channel = await user.create_dm()\n await channel.send(message)\n\n",
"Your code is partially correct. However, from the discord.py API reference, a User object is not messageable, i.e. you cannot use the send() function directly on the User itself.\nTo solve this problem, we need to first create a DMChannel with the user, and then send a message into the DMChannel.\nHere is the working code:\n@client.command()\nasync def dm(userID: int, *, message):\n user = client.get_user(userID)\n dmChannel = await user.create_dm()\n await dmChannel.send(message)\n\n",
"You can convert the user id to a user object, then create a DM and send the message as follows:\n@client.command()\nasync def dm(ctx,user:discord.User, *, message = None):\n if message is None:\n await ctx.send(\"Enter the message to be sent\")\n return\n try:\n channel = await user.create_dm()\n await channel.send(message)\n except discord.Forbidden:\n await ctx.send(\"could not send the message\")\n\nYou should use a try block, because some users (like me :)) do not allow DMs.\nAs the docs say, it raises Forbidden when the bot doesn't have the required permissions.\n"
] | [
0,
0,
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074636410_discord_discord.py_python.txt |
Q:
How to add new key value to yaml without overwriting it in python?
I have a small Python script which is responsible for updating my YAML file by adding new records:
data = yaml.load(file)
data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022')
data['WIN']['Machine'] = dict(vs='vs2022')
yaml.dump(data, file)
Every time when I run above script I will get updated yaml file like below:
WIN:
Machine:
vs: vs2022
My desired output to have both my key: value pairs
WIN:
Machine:
node_labels: +> tfs vs2022
vs: vs2022
I'm wondering why the line data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022') is overwritten by the next line. How can I add several key: value pairs to the Machine section?
A:
This is not a YAML related problem, but a conceptual problem in your non-yaml related Python code.
By assigning a dict as value to the key Machine, you set that value. By assigning
another dict to the key, you overwrite that value completely, erasing the previous key-value pair.
If you simplify your code:
data = dict(Machine=None)
data['Machine'] = dict(node_labels='+> tfs vs2022')
print('data 1', data)
data['Machine'] = dict(vs='vs2022')
print('data 2', data)
As you can see after the second assignment, the key node_labels is no longer available.
data 1 {'Machine': {'node_labels': '+> tfs vs2022'}}
data 2 {'Machine': {'vs': 'vs2022'}}
There are several ways to solve this. You can either assign a value to a key in the first dict:
data = dict(Machine=None)
data['Machine'] = added_dict = dict(node_labels='+> tfs vs2022')
print('data 1', data)
added_dict['vs'] ='vs2022'
print('data 2', data)
Now you have both keys in the second output:
data 1 {'Machine': {'node_labels': '+> tfs vs2022'}}
data 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}
If you don't already know whether there is a dict you can add a key to, you might want to use .setdefault,
either with key-value assignment, and/or by using .update (useful for updating multiple keys in one go):
data = dict()
data.setdefault('Machine', {})['node_labels'] = '+> tfs vs2022'
print('data 1', data)
data.setdefault('Machine', {}).update(dict(vs='vs2022'))
print('data 2', data)
data 1 {'Machine': {'node_labels': '+> tfs vs2022'}}
data 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}
Of course you can put node_labels and vs in one dict and assign, but that would overwrite any existing key-values loaded
from YAML. So the use of .update is IMO better:
import sys
from pathlib import Path
import ruamel.yaml
file_in = Path('input.yaml')
# key in YAML mapping with null value
file_in.write_text("""\
WIN:
""")
yaml = ruamel.yaml.YAML()
data = yaml.load(file_in)
if data['WIN'] is None:
data['WIN'] = {}
data['WIN'].setdefault('Machine', {}).update(dict(node_labels='+> tfs vs2022'))
data['WIN'].setdefault('Machine', {}).update(dict(vs='vs2022'))
yaml.dump(data, sys.stdout)
which gives your expected result:
WIN:
Machine:
node_labels: +> tfs vs2022
vs: vs2022
| How to add new key value to yaml without overwriting it in python? | I have a small Python script which is responsible for updating my YAML file by adding new records:
data = yaml.load(file)
data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022')
data['WIN']['Machine'] = dict(vs='vs2022')
yaml.dump(data, file)
Every time when I run above script I will get updated yaml file like below:
WIN:
Machine:
vs: vs2022
My desired output to have both my key: value pairs
WIN:
Machine:
node_labels: +> tfs vs2022
vs: vs2022
I'm wondering why the line data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022') is overwritten by the next line. How can I add several key: value pairs to the Machine section?
| [
"This is not a YAML related problem, but a conceptual problem in your non-yaml related Python code.\nBy assigning a dict as value to the key Machine, you set that value. By assigning\nanother dict to the key, you overwrite that value completely, erasing the previous key-value pair.\nIf you simplify your code:\ndata = dict(Machine=None)\ndata['Machine'] = dict(node_labels='+> tfs vs2022')\nprint('data 1', data)\ndata['Machine'] = dict(vs='vs2022')\nprint('data 2', data)\n\nAs you can see after the second assignment, the key node_labels is no longer available.\ndata 1 {'Machine': {'node_labels': '+> tfs vs2022'}}\ndata 2 {'Machine': {'vs': 'vs2022'}}\n\nThere are several ways to solve this. You can either assign a value to a key in the first dict:\ndata = dict(Machine=None)\ndata['Machine'] = added_dict = dict(node_labels='+> tfs vs2022')\nprint('data 1', data)\nadded_dict['vs'] ='vs2022'\nprint('data 2', data)\n\nNow you have both keys in the second output:\ndata 1 {'Machine': {'node_labels': '+> tfs vs2022'}}\ndata 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}\n\nIf you don't already know there is a dict where you can add a key to, you might to use .setdefault,\neither using key-value assigment, and/or by using .update (useful for updating multiple keys in one go):\ndata = dict()\ndata.setdefault('Machine', {})['node_labels'] = '+> tfs vs2022'\nprint('data 1', data)\ndata.setdefault('Machine', {}).update(dict(vs='vs2022'))\nprint('data 2', data)\n\ndata 1 {'Machine': {'node_labels': '+> tfs vs2022'}}\ndata 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}\n\nOf course you can put node_labels and vs in one dict and assign, but that would overwrite any existing key-values loaded\nfrom YAML. So the use of .update is IMO better:\nimport sys\nfrom pathlib import Path\nimport ruamel.yaml\n\nfile_in = Path('input.yaml')\n# key in YAML mapping with null value\nfile_in.write_text(\"\"\"\\\nWIN:\n\"\"\")\n \nyaml = ruamel.yaml.YAML()\ndata = yaml.load(file_in)\nif data['WIN'] is None:\n data['WIN'] = {}\ndata['WIN'].setdefault('Machine', {}).update(dict(node_labels='+> tfs vs2022'))\ndata['WIN'].setdefault('Machine', {}).update(dict(vs='vs2022'))\nyaml.dump(data, sys.stdout)\n\nwhich gives your expected result:\nWIN:\n Machine:\n node_labels: +> tfs vs2022\n vs: vs2022\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"yaml"
] | stackoverflow_0074669180_python_python_3.x_yaml.txt |
Q:
Is there any possibility to speed the nested for loop in pandas dataframe?
Is there any way to speed up the nested for loop in a pandas DataFrame? I have tried itertuples instead of iterrows, but the resulting speed was still not good enough. How can I use list comprehension and vectorization in this code?
lst3 = []
for i,j in enumerate(df2.itertuples()):
Tagging1=False
#if ("Con" in str(j["CMS Classification"])):
#print(j)
if ("Con" in j._9):
for k,l in enumerate(df3.itertuples()):
#print(1)
if(str(j._8) in str(l._1)):
print(2)
Tagging1=True
lst3.append(str(l._11))
continue
elif("Amen" in str(j["CMS Classification"])):
for m,n in enumerate(df3.itertuples(index= False)):
print(n)
if(str(j['Tagged ID']) in str(n['Amendment ID'])):
Tagging1=True
#print(n['Amend Vetting Year Qtr'])
lst3.append(str(n['Amend Vetting Year Qtr']))
continue
if(Tagging1==False):
lst3.append("")
df2['Contract/ Amend Start Qtr']=lst3
A:
Yes, it is possible to improve the performance of the nested for loop in your code by using vectorized operations and list comprehension in Pandas. Instead of using for loops to iterate over the rows of the DataFrame, you can use the apply() method and a lambda function to apply a function to each row of the DataFrame, which can be much faster than using a for loop.
Here is an example of how you can use the apply() method and a lambda function to vectorize the nested for loop in your code:
def get_start_qtr(row):
Tagging1 = False
if "Con" in row["CMS Classification"]:
matches = df3[df3["Tagged ID"].str.contains(row["Tagged ID"])]
if matches.shape[0] > 0:
Tagging1 = True
return matches.iloc[0]["Contract Vetting Year Qtr"]
elif "Amen" in row["CMS Classification"]:
matches = df3[df3["Amendment ID"].str.contains(row["Tagged ID"])]
if matches.shape[0] > 0:
Tagging1 = True
return matches.iloc[0]["Amend Vetting Year Qtr"]
if Tagging1 == False:
return ""
df2["Contract/ Amend Start Qtr"] = df2.apply(lambda row: get_start_qtr(row), axis=1)
In this example, the get_start_qtr() function takes a row of the df2 DataFrame as input and returns the corresponding value from the df3 DataFrame based on the values in the CMS Classification and Tagged ID columns. The apply() method is used to apply the get_start_qtr() function to each row of the df2 DataFrame, and the resulting values are assigned to the new 'Contract/ Amend Start Qtr' column.
| Is there any possibility to speed the nested for loop in pandas dataframe? | Is there any way to speed up the nested for loop in a pandas DataFrame? I have tried itertuples instead of iterrows, but the resulting speed was still not good enough. How can I use list comprehension and vectorization in this code?
lst3 = []
for i,j in enumerate(df2.itertuples()):
Tagging1=False
#if ("Con" in str(j["CMS Classification"])):
#print(j)
if ("Con" in j._9):
for k,l in enumerate(df3.itertuples()):
#print(1)
if(str(j._8) in str(l._1)):
print(2)
Tagging1=True
lst3.append(str(l._11))
continue
elif("Amen" in str(j["CMS Classification"])):
for m,n in enumerate(df3.itertuples(index= False)):
print(n)
if(str(j['Tagged ID']) in str(n['Amendment ID'])):
Tagging1=True
#print(n['Amend Vetting Year Qtr'])
lst3.append(str(n['Amend Vetting Year Qtr']))
continue
if(Tagging1==False):
lst3.append("")
df2['Contract/ Amend Start Qtr']=lst3
| [
"Yes, it is possible to improve the performance of the nested for loop in your code by using vectorized operations and list comprehension in Pandas. Instead of using for loops to iterate over the rows of the DataFrame, you can use the apply() method and a lambda function to apply a function to each row of the DataFrame, which can be much faster than using a for loop.\nHere is an example of how you can use the apply() method and a lambda function to vectorize the nested for loop in your code:\ndef get_start_qtr(row):\n Tagging1 = False\n if \"Con\" in row[\"CMS Classification\"]:\n matches = df3[df3[\"Tagged ID\"].str.contains(row[\"Tagged ID\"])]\n if matches.shape[0] > 0:\n Tagging1 = True\n return matches.iloc[0][\"Contract Vetting Year Qtr\"]\n elif \"Amen\" in row[\"CMS Classification\"]:\n matches = df3[df3[\"Amendment ID\"].str.contains(row[\"Tagged ID\"])]\n if matches.shape[0] > 0:\n Tagging1 = True\n return matches.iloc[0][\"Amend Vetting Year Qtr\"]\n if Tagging1 == False:\n return \"\"\n\ndf2[\"Contract/ Amend Start Qtr\"] = df2.apply(lambda row: get_start_qtr(row), axis=1)\n\nIn this example, the get_start_qtr() function takes a row of the df2 DataFrame as input and returns the corresponding value from the df3 DataFrame based on the values in the CMS Classification and Tagged ID columns. The apply() method is used to apply the get_start_qtr() function to each row of the df2 DataFrame, and the resulting values\n"
] | [
0
] | [] | [] | [
"list",
"numpy",
"pandas",
"python"
] | stackoverflow_0074673201_list_numpy_pandas_python.txt |
Q:
Python function to get the t-statistic
I am looking for a Python function (or to write my own if there is not one) to get the t-statistic in order to use in a confidence interval calculation.
I have found tables that give answers for various probabilities / degrees of freedom like this one, but I would like to be able to calculate this for any given probability. For anyone not already familiar with this: degrees of freedom is the number of data points (n) in your sample minus 1, and the numbers in the column headings at the top are probabilities (p). For example, a 2-tailed significance level of 0.05 is used if you are looking up the t-score for the calculation with 95% confidence that, if you repeated n tests, the result would fall within the mean +/- the confidence interval.
I have looked into using various functions within scipy.stats, but none that I can see seem to allow for the simple inputs I described above.
Excel has a simple implementation of this e.g. to get the t-score for a sample of 1000, where I need to be 95% confident I would use: =TINV(0.05,999) and get the score ~1.96
Here is the code that I have used to implement confidence intervals so far, as you can see I am using a very crude way of getting the t-score at present (just allowing a few values for perc_conf and warning that it is not accurate for samples < 1000):
# -*- coding: utf-8 -*-
from __future__ import division
import math
def mean(lst):
# μ = 1/N Σ(xi)
return sum(lst) / float(len(lst))
def variance(lst):
"""
Uses standard variance formula (sum of each (data point - mean) squared)
all divided by number of data points
"""
# σ² = 1/N Σ((xi-μ)²)
mu = mean(lst)
return 1.0/len(lst) * sum([(i-mu)**2 for i in lst])
def conf_int(lst, perc_conf=95):
"""
Confidence interval - given a list of values compute the square root of
the variance of the list (v) divided by the number of entries (n)
multiplied by a constant factor of (c). This means that I can
be confident of a result +/- this amount from the mean.
The constant factor can be looked up from a table, for 95% confidence
on a reasonable size sample (>=500) 1.96 is used.
"""
if perc_conf == 95:
c = 1.96
elif perc_conf == 90:
c = 1.64
elif perc_conf == 99:
c = 2.58
else:
c = 1.96
print 'Only 90, 95 or 99 % are allowed for, using default 95%'
n, v = len(lst), variance(lst)
if n < 1000:
print 'WARNING: constant factor may not be accurate for n < ~1000'
return math.sqrt(v/n) * c
Here is an example call for the above code:
# Example: 1000 coin tosses on a fair coin. What is the range that I can be 95%
# confident the result will fall within.
# list of 1000 perfectly distributed...
perc_conf_req = 95
n, p = 1000, 0.5 # sample_size, probability of heads for each coin
l = [0 for i in range(int(n*(1-p)))] + [1 for j in range(int(n*p))]
exp_heads = mean(l) * len(l)
c_int = conf_int(l, perc_conf_req)
print 'I can be '+str(perc_conf_req)+'% confident that the result of '+str(n)+ \
' coin flips will be within +/- '+str(round(c_int*100,2))+'% of '+\
str(int(exp_heads))
x = round(n*c_int,0)
print 'i.e. between '+str(int(exp_heads-x))+' and '+str(int(exp_heads+x))+\
' heads (assuming a probability of '+str(p)+' for each flip).'
The output for this is:
I can be 95% confident that the result of 1000 coin flips will be
within +/- 3.1% of 500 i.e. between 469 and 531 heads (assuming a
probability of 0.5 for each flip).
I also looked into calculating the t-distribution for a range and then returning the t-score that got the probability closest to that required, but I had issues implementing the formula. Let me know if this is relevant and you want to see the code, but I have assumed not as there is probably an easier way.
A:
Have you tried scipy?
You will need to install the scipy library...more about installing it here: http://www.scipy.org/install.html
Once installed, you can replicate the Excel functionality like such:
from scipy import stats
#Student, n=999, p<0.05, 2-tail
#equivalent to Excel TINV(0.05,999)
print stats.t.ppf(1-0.025, 999)
#Student, n=999, p<0.05, single tail
#equivalent to Excel TINV(2*0.05,999)
print stats.t.ppf(1-0.05, 999)
You can also read about installing the library here: how to install scipy for python?
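With this, the hard-coded constants in the question's conf_int can be replaced (a sketch, reusing variance() from the question):
from scipy import stats
import math

def conf_int(lst, perc_conf=95):
    n = len(lst)
    alpha = 1 - perc_conf / 100.0
    c = stats.t.ppf(1 - alpha / 2, n - 1)  # two-tailed critical value
    return math.sqrt(variance(lst) / n) * c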
A:
Try the following code:
from scipy import stats
#Student, n=22, 2-tail
#stats.t.ppf(1-0.025, df)
# df=n-1=22-1=21
print (stats.t.ppf(1-0.025, 21))
A:
You can try this code:
# for small samples (<50) we use t-statistics
# n = 9, degree of freedom = 9-1 = 8
# for 99% confidence interval, alpha = 1% = 0.01 and alpha/2 = 0.005
from scipy import stats
ci = 99
n = 9
t = stats.t.ppf(1- ((100-ci)/2/100), n-1) # 99% CI, t8,0.005
print(t) # 3.36
A:
scipy.stats.t has another method isf that directly returns the quantile that corresponds to the upper tail probability alpha. This is an implementation of the inverse survival function and returns the exact same value as t.ppf(1-alpha, dof).
from scipy import stats
alpha, dof = 0.05, 999
stats.t.isf(alpha, dof)
# 1.6463803454275356
For two-tailed, halve alpha:
stats.t.isf(alpha/2, dof)
# 1.962341461133449
| Python function to get the t-statistic | I am looking for a Python function (or to write my own if there is not one) to get the t-statistic in order to use in a confidence interval calculation.
I have found tables that give answers for various probabilities / degrees of freedom like this one, but I would like to be able to calculate this for any given probability. For anyone not already familiar with this: degrees of freedom is the number of data points (n) in your sample minus 1, and the numbers in the column headings at the top are probabilities (p). For example, a 2-tailed significance level of 0.05 is used if you are looking up the t-score for the calculation with 95% confidence that, if you repeated n tests, the result would fall within the mean +/- the confidence interval.
I have looked into using various functions within scipy.stats, but none that I can see seem to allow for the simple inputs I described above.
Excel has a simple implementation of this e.g. to get the t-score for a sample of 1000, where I need to be 95% confident I would use: =TINV(0.05,999) and get the score ~1.96
Here is the code that I have used to implement confidence intervals so far, as you can see I am using a very crude way of getting the t-score at present (just allowing a few values for perc_conf and warning that it is not accurate for samples < 1000):
# -*- coding: utf-8 -*-
from __future__ import division
import math
def mean(lst):
# μ = 1/N Σ(xi)
return sum(lst) / float(len(lst))
def variance(lst):
"""
Uses standard variance formula (sum of each (data point - mean) squared)
all divided by number of data points
"""
# σ² = 1/N Σ((xi-μ)²)
mu = mean(lst)
return 1.0/len(lst) * sum([(i-mu)**2 for i in lst])
def conf_int(lst, perc_conf=95):
"""
Confidence interval - given a list of values compute the square root of
the variance of the list (v) divided by the number of entries (n)
multiplied by a constant factor of (c). This means that I can
be confident of a result +/- this amount from the mean.
The constant factor can be looked up from a table, for 95% confidence
on a reasonable size sample (>=500) 1.96 is used.
"""
if perc_conf == 95:
c = 1.96
elif perc_conf == 90:
c = 1.64
elif perc_conf == 99:
c = 2.58
else:
c = 1.96
print 'Only 90, 95 or 99 % are allowed for, using default 95%'
n, v = len(lst), variance(lst)
if n < 1000:
print 'WARNING: constant factor may not be accurate for n < ~1000'
return math.sqrt(v/n) * c
Here is an example call for the above code:
# Example: 1000 coin tosses on a fair coin. What is the range that I can be 95%
# confident the result will fall within.
# list of 1000 perfectly distributed...
perc_conf_req = 95
n, p = 1000, 0.5 # sample_size, probability of heads for each coin
l = [0 for i in range(int(n*(1-p)))] + [1 for j in range(int(n*p))]
exp_heads = mean(l) * len(l)
c_int = conf_int(l, perc_conf_req)
print 'I can be '+str(perc_conf_req)+'% confident that the result of '+str(n)+ \
' coin flips will be within +/- '+str(round(c_int*100,2))+'% of '+\
str(int(exp_heads))
x = round(n*c_int,0)
print 'i.e. between '+str(int(exp_heads-x))+' and '+str(int(exp_heads+x))+\
' heads (assuming a probability of '+str(p)+' for each flip).'
The output for this is:
I can be 95% confident that the result of 1000 coin flips will be
within +/- 3.1% of 500 i.e. between 469 and 531 heads (assuming a
probability of 0.5 for each flip).
I also looked into calculating the t-distribution for a range and then returning the t-score that got the probability closest to that required, but I had issues implementing the formula. Let me know if this is relevant and you want to see the code, but I have assumed not as there is probably an easier way.
| [
"Have you tried scipy?\nYou will need to installl the scipy library...more about installing it here: http://www.scipy.org/install.html\nOnce installed, you can replicate the Excel functionality like such:\nfrom scipy import stats\n#Studnt, n=999, p<0.05, 2-tail\n#equivalent to Excel TINV(0.05,999)\nprint stats.t.ppf(1-0.025, 999)\n\n#Studnt, n=999, p<0.05%, Single tail\n#equivalent to Excel TINV(2*0.05,999)\nprint stats.t.ppf(1-0.05, 999)\n\nYou can also read about installing the library here: how to install scipy for python?\n",
"Try the following code:\nfrom scipy import stats\n#Studnt, n=22, 2-tail\n#stats.t.ppf(1-0.025, df)\n# df=n-1=22-1=21\nprint (stats.t.ppf(1-0.025, 21))\n\n",
"You can try this code:\n# for small samples (<50) we use t-statistics\n# n = 9, degree of freedom = 9-1 = 8\n# for 99% confidence interval, alpha = 1% = 0.01 and alpha/2 = 0.005\nfrom scipy import stats\n\nci = 99\nn = 9\nt = stats.t.ppf(1- ((100-ci)/2/100), n-1) # 99% CI, t8,0.005\nprint(t) # 3.36\n\n",
"scipy.stats.t has another method isf that directly returns the quantile that corresponds to the upper tail probability alpha. This is an implementation of the inverse survival function and returns the exact same value as t.ppf(1-alpha, dof).\nfrom scipy import stats\nalpha, dof = 0.05, 999\n\nstats.t.isf(alpha, dof) \n# 1.6463803454275356\n\nFor two-tailed, halve alpha:\nstats.t.isf(alpha/2, dof)\n# 1.962341461133449\n\n"
] | [
60,
3,
0,
0
] | [] | [] | [
"confidence_interval",
"python",
"python_2.7",
"statistics"
] | stackoverflow_0019339305_confidence_interval_python_python_2.7_statistics.txt |
Q:
How to Change the Format of a DateTimeField Object when it is Displayed in HTML through Ajax?
models.py
class Log(models.Model):
source = models.CharField(max_length=1000, default='')
date = models.DateTimeField(default=datetime.now, blank = True)
views.py
The objects in the Log model are filtered so that only those with source names that match a specific account name are considered. The values of these valid objects will then be listed and returned using a JsonResponse.
def backlog_list(request):
account_name = request.POST['account_name']
access_log = Log.objects.filter(source=account_name)
return JsonResponse({"access_log":list(access_log.values())})
dashboard.html
This Ajax script is the one that brings the account name back to views.py. If there are no valid objects, the HTML will be empty; otherwise, it will display them like this.
<h3>You scanned the QR code during these times.</h3>
<div id="display">
</div>
<script>
$(document).ready(function(){
setInterval(function(){
$.ajax({
type: 'POST',
url : "/backlog_list",
data:{
account_name:$('#account_name').val(),
csrfmiddlewaretoken:$('input[name=csrfmiddlewaretoken]').val(),
},
success: function(response){
console.log(response);
$("#display").empty();
for (var key in response.access_log)
{
var temp="<div class='container darker'><span class='time-left'>"+response.access_log[key].date+"</span></div>";
$("#display").append(temp);
}
},
error: function(response){
alert('An error occurred')
}
});
},1000);
})
</script>
My goal is to have the Date and time displayed like "Jan. 10, 2000, 9:30:20 A.M."
I've tried changing the format directly in models.py by adding "strftime", but the error response is triggered.
A:
You're trying to format the date in the HTML by appending it to a string. Unfortunately, this won't work because the date value will be treated as a string and not as a date object.
To format the date in the desired way, you will need to convert it to a date object in JavaScript and then use a date formatting function to convert it to the desired string format.
Here is an example of how you could do this:
// Parse the date value from the response into a date object
var date = new Date(response.access_log[key].date);
// Use the toLocaleDateString() function to format the date as "Jan. 10, 2000"
var dateString = date.toLocaleDateString('en-US', {
month: 'short',
day: 'numeric',
year: 'numeric'
});
// Use the toLocaleTimeString() function to format the time as "9:30:20 A.M."
var timeString = date.toLocaleTimeString('en-US', {
hour: 'numeric',
minute: 'numeric',
second: 'numeric',
hour12: true
});
// Append the formatted date and time to the HTML
var temp="<div class='container darker'><span class='time-left'>" + dateString + ", " + timeString + "</span></div>";
$("#display").append(temp);
You can read more about the toLocaleDateString() and toLocaleTimeString() functions in the JavaScript documentation:
toLocaleDateString()
toLocaleTimeString()
A:
One way to set the format you need is via JavaScript; Tharun posted an example in his answer.
Alternatively, you can specify the format you need in views.py:
def backlog_list(request):
...
dates = [
val.strftime('%b. %d, %Y, %I:%M:%S %p')
for val in access_log.values_list("date", flat=True)
]
return JsonResponse({"access_log":[{"date": d} for d in dates]})
Format string reference - https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes
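One caveat: %I is zero-padded (09:30 rather than 9:30). Dropping the padding is platform-specific, e.g. (a sketch):
val.strftime('%b. %d, %Y, %-I:%M:%S %p')  # '%-I' on Linux/macOS, '%#I' on Windows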
| How to Change the Format of a DateTimeField Object when it is Displayed in HTML through Ajax? | models.py
class Log(models.Model):
source = models.CharField(max_length=1000, default='')
date = models.DateTimeField(default=datetime.now, blank = True)
views.py
The objects in the Log model are filtered so that only those with source names that match a specific account name are considered. The values of these valid objects will then be listed and returned using a JsonResponse.
def backlog_list(request):
account_name = request.POST['account_name']
access_log = Log.objects.filter(source=account_name)
return JsonResponse({"access_log":list(access_log.values())})
dashboard.html
This Ajax script is the one that brings the account name back to views.py. If there are no valid objects, the HTML will be empty; otherwise, it will display them like this.
<h3>You scanned the QR code during these times.</h3>
<div id="display">
</div>
<script>
$(document).ready(function(){
setInterval(function(){
$.ajax({
type: 'POST',
url : "/backlog_list",
data:{
account_name:$('#account_name').val(),
csrfmiddlewaretoken:$('input[name=csrfmiddlewaretoken]').val(),
},
success: function(response){
console.log(response);
$("#display").empty();
for (var key in response.access_log)
{
var temp="<div class='container darker'><span class='time-left'>"+response.access_log[key].date+"</span></div>";
$("#display").append(temp);
}
},
error: function(response){
alert('An error occurred')
}
});
},1000);
})
</script>
My goal is to have the Date and time displayed like "Jan. 10, 2000, 9:30:20 A.M."
I've tried changing the format directly from the models.py by adding "strftime" but the error response is triggered.
| [
"You're trying to format the date in the HTML by appending it to a string. Unfortunately, this won't work because the date value will be treated as a string and not as a date object.\nTo format the date in the desired way, you will need to convert it to a date object in JavaScript and then use a date formatting function to convert it to the desired string format.\nHere is an example of how you could do this:\n// Parse the date value from the response into a date object\nvar date = new Date(response.access_log[key].date);\n\n// Use the toLocaleDateString() function to format the date as \"Jan. 10, 2000\"\nvar dateString = date.toLocaleDateString('en-US', {\n month: 'short',\n day: 'numeric',\n year: 'numeric'\n});\n\n// Use the toLocaleTimeString() function to format the time as \"9:30:20 A.M.\"\nvar timeString = date.toLocaleTimeString('en-US', {\n hour: 'numeric',\n minute: 'numeric',\n second: 'numeric',\n hour12: true\n});\n\n// Append the formatted date and time to the HTML\nvar temp=\"<div class='container darker'><span class='time-left'>\" + dateString + \", \" + timeString + \"</span></div>\";\n$(\"#display\").append(temp);\n\n\nYou can read more about the toLocaleDateString() and toLocaleTimeString() functions in the JavaScript documentation:\n\ntoLocaleDateString()\ntoLocaleTimeString()\n\n",
"One way to set the format you need is via Javascript, Tharun posted an example in his answer.\nAlternatively, you can specify the format you need in views.py:\ndef backlog_list(request): \n ...\n dates = [\n val.strftime('%b. %d, %Y, %I:%M:%S %p') \n for val in access_log.values_list(\"date\", flat=True)\n ]\n return JsonResponse({\"access_log\":[{\"date\": d} for d in dates]})\n\nFormat string reference - https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes\n"
] | [
0,
0
] | [] | [] | [
"ajax",
"datetime",
"django",
"python"
] | stackoverflow_0074673906_ajax_datetime_django_python.txt |
Q:
How can I remove the values on top of the grouped bars with the bar_plot using axes.bar in matplotlib?
I want to remove the percentage values on top of each plot, or possibly round them
width = 0.2
x = np.arange(len(labels))
fig2,ax = plt.subplots()
rects1 = ax.bar(x - width/2, precision_data, width, label='precision',color ='firebrick')
rects2 = ax.bar(x + width/2 , recall_data, width, label='recall',color = 'royalblue')
ax.set_ylabel('Score %')
ax.set_title('precision-recall average classifiers scores')
ax.set_xticks(x, labels)
ax.legend()
ax.bar_label(rects1)
ax.bar_label(rects2)`
A:
To remove the text on top of your bars, simply comment out ax.bar_label(rects1) and ax.bar_label(rects2).
To round the labels, you may use the fmt argument on the bar containers: ax.bar_label(rects1, fmt='%.2f')
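A minimal runnable sketch of the rounding variant (the labels and score values here are made up for illustration):
import numpy as np
import matplotlib.pyplot as plt

labels = ['clf A', 'clf B']
precision_data = [0.8123, 0.7654]
recall_data = [0.7012, 0.6899]

x = np.arange(len(labels))
width = 0.2
fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, precision_data, width, label='precision')
rects2 = ax.bar(x + width/2, recall_data, width, label='recall')
ax.set_xticks(x, labels)
ax.legend()

# Value labels rounded to two decimals instead of full precision:
ax.bar_label(rects1, fmt='%.2f')
ax.bar_label(rects2, fmt='%.2f')
plt.show()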
| How can I remove the values on top of the grouped bars with the bar_plot using axes.bar in matplotlib? |
I want to remove the percentage values on top of each plot, or possibly round them
width = 0.2
x = np.arange(len(labels))
fig2,ax = plt.subplots()
rects1 = ax.bar(x - width/2, precision_data, width, label='precision',color ='firebrick')
rects2 = ax.bar(x + width/2 , recall_data, width, label='recall',color = 'royalblue')
ax.set_ylabel('Score %')
ax.set_title('precision-recall average classifiers scores')
ax.set_xticks(x, labels)
ax.legend()
ax.bar_label(rects1)
ax.bar_label(rects2)`
| [
"\nTo remove the text on top of your bars, simply comment out ax.bar_label(rects1) and ax.bar_label(rects2):\n\nTo round the labels, you may use the fmt argument: ax.bar_label(labels, fmt='%.2f')\n\n\n"
] | [
1
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0074674124_matplotlib_python.txt |
Q:
Why getting this selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element
I know answers to this same question have already been posted, but I tried them and they are not working for me, because there have also been some updates to the Selenium code since then.
I am getting this error: selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <div class="up-typeahead-fake" data-test="up-c-typeahead-input-fake">...</div> is not clickable at point (838, 0). Other element would receive the click: <div class="up-modal-header">...</div>, when trying to send my search keyword to the input labeled "Skills Search"
in the advanced-search pop-up form.
Here is the URL: https://www.upwork.com/nx/jobs/search/modals/advanced-search?sort=recency&pageTitle=Advanced%20Search&_navType=modal&_modalInfo=%5B%7B%22navType%22%3A%22modal%22,%22title%22%3A%22Advanced%20Search%22,%22modalId%22%3A%221670133126002%22,%22channelName%22%3A%22advanced-search-modal%22%7D%5D
Here is my code:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.common.proxy import Proxy, ProxyType
import time
from fake_useragent import UserAgent
import pyttsx3
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
def main():
options = Options()
service = Service('F:\\work\\chromedriver_win32\\chromedriver.exe')
options.add_argument("start-maximized")
options.add_argument('--disable-blink-features=AutomationControlled') #Adding the argument
options.add_experimental_option("excludeSwitches",["enable-automation"])#Disable chrome contrlled message (Exclude the collection of enable-automation switches)
options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension
options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension
prefs = {"credentials_enable_service": False,
"profile.password_manager_enabled": False}
options.add_experimental_option("prefs", prefs)
ua = UserAgent()
userAgent = ua.random
options.add_argument(f'user-agent={userAgent}')
driver = webdriver.Chrome(service=service , options=options)
url = 'https://www.upwork.com/nx/jobs/search/?sort=recency'
driver.get(url)
time.sleep(7)
advsearch = driver.find_element(By.XPATH,'//button[contains(@title,"Advanced Search")]')
advsearch.click()
time.sleep(10)
skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"up-typeahead")]')))
skill.click()
time.sleep(10)
keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
for i in range(len(keys)):
skill.send_keys(Keys[i],Keys.ENTER)
time.sleep (2)
main()
I tried to send keys to the input field, but it gives me the ElementClickInterceptedException error. I tried old answers from previous Stack Overflow questions related to this error, but they are not working for me, because there have also been some updates to the Selenium code.
A:
That error indicates that you have to click using JS execution like:
import time
skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"up-typeahead")]')))
    driver.execute_script("arguments[0].click();", skill)
time.sleep(1)
A:
By clicking on "Advanced search" button an advanced search modal dialog is opened. So, when this dialog is opened you can not insert your search inputs into the regular search input, only in that modal dialog input. Then you need to close the button on that dialog to perform the search.
The following code is working:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.upwork.com/nx/jobs/search/?sort=recency"
driver.get(url)
keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()
for i in range(len(keys)):
wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click()
advanced_search_input = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-and_terms"]')))
advanced_search_input.clear()
advanced_search_input.send_keys(keys[i])
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-search-btn"]'))).click()
Also, when using Selenium you should never use JavaScript clicks until you have no alternatives since Selenium imitates human GUI actions while JavaScript clicks can perform clicks on invisible, covered elements etc.
In this case, when the dialog is opened as a user you can not click on elements covered by that dialog. So, when performing GUI testing with Selenium (this is what Selenium for) you should not perform force clicks on such elements with the use of JavaScript.
| Why getting this selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element | I know already upload answer to this same question but I try them they are not working for me because there is also some some update in selenium code too.
Getting this Error selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <div class="up-typeahead-fake" data-test="up-c-typeahead-input-fake">...</div> is not clickable at point (838, 0). Other element would receive the click: <div class="up-modal-header">...</div> , When trying to send my searching keyword in this input with labeled "Skills Search"
in advance searching pop-pup form.
Here is the URL: https://www.upwork.com/nx/jobs/search/modals/advanced-search?sort=recency&pageTitle=Advanced%20Search&_navType=modal&_modalInfo=%5B%7B%22navType%22%3A%22modal%22,%22title%22%3A%22Advanced%20Search%22,%22modalId%22%3A%221670133126002%22,%22channelName%22%3A%22advanced-search-modal%22%7D%5D
Here is my code:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.common.proxy import Proxy, ProxyType
import time
from fake_useragent import UserAgent
import pyttsx3
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
def main():
options = Options()
service = Service('F:\\work\\chromedriver_win32\\chromedriver.exe')
options.add_argument("start-maximized")
options.add_argument('--disable-blink-features=AutomationControlled') #Adding the argument
options.add_experimental_option("excludeSwitches",["enable-automation"])#Disable chrome contrlled message (Exclude the collection of enable-automation switches)
options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension
options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension
prefs = {"credentials_enable_service": False,
"profile.password_manager_enabled": False}
options.add_experimental_option("prefs", prefs)
ua = UserAgent()
userAgent = ua.random
options.add_argument(f'user-agent={userAgent}')
driver = webdriver.Chrome(service=service , options=options)
url = 'https://www.upwork.com/nx/jobs/search/?sort=recency'
driver.get(url)
time.sleep(7)
advsearch = driver.find_element(By.XPATH,'//button[contains(@title,"Advanced Search")]')
advsearch.click()
time.sleep(10)
skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"up-typeahead")]')))
skill.click()
time.sleep(10)
keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
for i in range(len(keys)):
skill.send_keys(Keys[i],Keys.ENTER)
time.sleep (2)
main()
I try to send keys to the input field but its give me Error .ElementClickInterceptedException , I try old answer from stack previous question answer related to this error but they are not working for me because there is also some some update in selenium code too.
| [
"That error indicates that you have to click using JS execution like:\n import time\n\n skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,\"up-typeahead\")]')))\n driver.execute_script(\"arguments[0].click();\" ,skill)\n time.sleep(1)\n\n",
"By clicking on \"Advanced search\" button an advanced search modal dialog is opened. So, when this dialog is opened you can not insert your search inputs into the regular search input, only in that modal dialog input. Then you need to close the button on that dialog to perform the search.\nThe following code is working:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.upwork.com/nx/jobs/search/?sort=recency\"\ndriver.get(url)\n\nkeys = [\"Web Scraping\",\"Selenium WebDriver\", \"Data Scraping\", \"selenium\", \"Web Crawling\", \"Beautiful Soup\", \"Scrapy\", \"Data Extraction\", \"Automation\"]\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))\ntime.sleep(5)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()\nfor i in range(len(keys)):\n wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,\"Advanced Search\")]'))).click()\n advanced_search_input = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-and_terms\"]')))\n advanced_search_input.clear()\n advanced_search_input.send_keys(keys[i])\n wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-search-btn\"]'))).click()\n\nAlso, when using Selenium you should never use JavaScript clicks until you have no alternatives since Selenium imitates human GUI actions while JavaScript clicks can perform clicks on invisible, covered elements etc.\nIn this case, when the dialog is opened as a user you can not click on elements covered by that dialog. So, when performing GUI testing with Selenium (this is what Selenium for) you should not perform force clicks on such elements with the use of JavaScript.\n"
] | [
1,
0
] | [] | [] | [
"automation",
"python",
"selenium",
"selenium_webdriver",
"xpath"
] | stackoverflow_0074673772_automation_python_selenium_selenium_webdriver_xpath.txt |
Q:
Python: How to print a looping nested list as a Matrix
I want to print a matrix of p*p (where p is an input taken from the user).
The matrix should be in a format of [m,n], i.e. [[[3,0],[3,1],[3,2],[3,3]],[[2,0],[2,1],[2,2],[2,3]],...] and so on.
a = int(input())
l1 = []
for i in range(a):
l1.append([])
for j in range(a):
l1[i] = [j,i]
print(l1)
I tried using this code and realized it is wrong; what can I do to achieve the desired output?
A:
# Take input from the user
p = int(input())
# Create an empty list
l1 = []
# Iterate over the range 0 to p
for i in range(p):
# Create a new empty sublist for each iteration of the outer loop
l1.append([])
# Iterate over the range 0 to p
for j in range(p):
# Append the values [j, i] to the sublist
l1[i].append([j, i])
# Print the matrix
print(l1)
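For example, entering 2 at the prompt prints [[[0, 0], [1, 0]], [[0, 1], [1, 1]]].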
| Python: How to print a looping nested list as a Matrix | I want to print a matrix of p*p (where p is an input taken from the user).
The matrix should be in a format of [m,n] i.e [[[3,0],[3,1],[3,2],[3,3]],[2,0],[2,1],[2,2],[2,3]]... and so on.
a = int(input())
l1 = []
for i in range(a):
l1.append([])
for j in range(a):
l1[i] = [j,i]
print(l1)
I tried using this code and realized it is wrong, what can I do to achieve the desired output.
| [
"# Take input from the user\np = int(input())\n\n# Create an empty list\nl1 = []\n\n# Iterate over the range 0 to p\nfor i in range(p):\n # Create a new empty sublist for each iteration of the outer loop\n l1.append([])\n\n # Iterate over the range 0 to p\n for j in range(p):\n # Append the values [j, i] to the sublist\n l1[i].append([j, i])\n\n# Print the matrix\nprint(l1)\n\n"
] | [
0
] | [] | [] | [
"list",
"loops",
"matrix",
"python"
] | stackoverflow_0074672951_list_loops_matrix_python.txt |
Q:
How to use multiprocessing in Python for for loop?
I'm new to Python and multiprocessing. I would like to speed up my current code, as it takes around 8 minutes for 80 images; I only show 1 image in this code for reference purposes. I learned that multiprocessing can help with this and gave it a try, but somehow it is not working as I expected.
import numpy as np
import cv2
import time
import os
import multiprocessing
img = cv2.imread("C://Users/jason/Desktop/test.bmp")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV)
x1 = []
y1 = []
def verticle(mask, y, x):
vertiPixel = 0
while(y < mask.shape[0]):
if (y + 1) == mask.shape[0]:
break
else:
if(mask[y + 1][x] == 255):
vertiPixel += 1
y += 1
else:
break
y1.append(vertiPixel)
def horizontal(mask, y, x):
horiPixel = 0
while(x < mask.shape[1]):
if (x + 1) == mask.shape[1]:
break
else:
if(mask[y][x + 1] == 255):
horiPixel += 1
x += 1
else:
break
x1.append(horiPixel)
def mask(mask):
for y in range (mask.shape[0]):
for x in range (mask.shape[1]):
if(mask[y][x] == 255):
verticle(mask, y, x)
horizontal(mask, y, x)
mask(blackMask)
print(np.average(x1), np.average(y1))
This is what I tried on my side. It's not working: I added the Pool class for multiprocessing, but I am getting None results.
import numpy as np
import cv2
import time
import os
from multiprocessing import Pool
folderDir = "C://Users/ruler/Desktop/testseg/"
total = []
with open('readme.txt', 'w') as f:
count = 0
for allImages in os.listdir(folderDir):
if (allImages.startswith('TRAIN_SET') and allImages.endswith(".bmp")):
img = cv2.imread(os.path.join(folderDir, allImages))
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV)
x1 = []
y1 = []
def verticle(mask, y, x):
vertiPixel = 0
while(y < mask.shape[0]):
if (y + 1) == mask.shape[0]:
break
else:
if(mask[y + 1][x] == 255):
vertiPixel += 1
y += 1
else:
break
y1.append(vertiPixel)
def horizontal(mask, y, x):
horiPixel = 0
while(x < mask.shape[1]):
if (x + 1) == mask.shape[1]:
break
else:
if(mask[y][x + 1] == 255):
horiPixel += 1
x += 1
else:
break
x1.append(horiPixel)
def mask(mask):
for y in range (mask.shape[0]):
for x in range (mask.shape[1]):
if(mask[y][x] == 255):
verticle(mask, y, x)
horizontal(mask, y, x)
equation(y,x)
def equation(y,x):
a = np.average(y) * (9.9 / 305)
c = np.average(x) * (9.9 / 305)
final = (a + c) / 2
total.append(final)
if __name__ == "__main__":
pool = Pool(8)
print(pool.map(mask, [blackMask] * 3))
pool.close()
A:
To use multiprocessing to speed up your code, you can use the Pool class from the multiprocessing module. The Pool class allows you to run multiple processes in parallel, which can help speed up your code.
To use the Pool class, you need to first create a Pool object and then use the map method to apply a function to each element in a list in parallel. For example, to use the Pool class to speed up your code, you could do the following:
# Import the Pool class from the multiprocessing module
from multiprocessing import Pool
# Create a Pool object with the desired number of processes
pool = Pool(8)
# Use the map method to apply the mask function to each element in a list in parallel
pool.map(mask, [blackMask] * 80)
# Close the pool when finished
pool.close()
This will create a Pool object with 8 processes, and then apply the mask function to 80 copies of the blackMask image in parallel. This should speed up your code by running multiple processes in parallel.
However, note that using multiprocessing can be complex and may not always result in significant speedups, especially for relatively small and simple tasks like the one in your code. It may be worth trying to optimize your code in other ways before resorting to multiprocessing.
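If you do use multiprocessing here, a more natural unit of parallelism is one image file per worker process. Below is a minimal sketch of that pattern; the folder path and filename filter are taken from the question, but the per-pixel run-length logic is replaced by a simplified stand-in metric. Note that each worker must return its result: appending to a module-level list from a child process does not affect the parent's copy.
import os
from multiprocessing import Pool

import cv2
import numpy as np

FOLDER = "C://Users/ruler/Desktop/testseg/"

def process_image(filename):
    # Each worker loads and thresholds its own image.
    img = cv2.imread(os.path.join(FOLDER, filename))
    gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, black_mask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV)
    # Simplified stand-in for the per-pixel measurement:
    ys, xs = np.nonzero(black_mask == 255)
    return filename, ys.mean(), xs.mean()

if __name__ == "__main__":
    names = [f for f in os.listdir(FOLDER)
             if f.startswith("TRAIN_SET") and f.endswith(".bmp")]
    with Pool(8) as pool:
        for name, avg_y, avg_x in pool.map(process_image, names):
            print(name, avg_y, avg_x)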
| How to use multiprocessing in Python for for loop? | I'm new to Python and multiprocessing, I would like to speed up my current code processing speed as it takes around 8 mins for 80 images. I only show 1 image for this code for reference purpose. I got into know that multiprocessing helps on this and gave it a try but somehow not working as what I expected.
import numpy as np
import cv2
import time
import os
import multiprocessing
img = cv2.imread("C://Users/jason/Desktop/test.bmp")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV)
x1 = []
y1 = []
def verticle(mask, y, x):
vertiPixel = 0
while(y < mask.shape[0]):
if (y + 1) == mask.shape[0]:
break
else:
if(mask[y + 1][x] == 255):
vertiPixel += 1
y += 1
else:
break
y1.append(vertiPixel)
def horizontal(mask, y, x):
horiPixel = 0
while(x < mask.shape[1]):
if (x + 1) == mask.shape[1]:
break
else:
if(mask[y][x + 1] == 255):
horiPixel += 1
x += 1
else:
break
x1.append(horiPixel)
def mask(mask):
for y in range (mask.shape[0]):
for x in range (mask.shape[1]):
if(mask[y][x] == 255):
verticle(mask, y, x)
horizontal(mask, y, x)
mask(blackMask)
print(np.average(x1), np.average(y1))
This is what I tried to work on my side. Although it's not working, added pool class for multiprocessing but getting None result.
import numpy as np
import cv2
import time
import os
from multiprocessing import Pool
folderDir = "C://Users/ruler/Desktop/testseg/"
total = []
with open('readme.txt', 'w') as f:
count = 0
for allImages in os.listdir(folderDir):
if (allImages.startswith('TRAIN_SET') and allImages.endswith(".bmp")):
img = cv2.imread(os.path.join(folderDir, allImages))
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV)
x1 = []
y1 = []
def verticle(mask, y, x):
vertiPixel = 0
while(y < mask.shape[0]):
if (y + 1) == mask.shape[0]:
break
else:
if(mask[y + 1][x] == 255):
vertiPixel += 1
y += 1
else:
break
y1.append(vertiPixel)
def horizontal(mask, y, x):
horiPixel = 0
while(x < mask.shape[1]):
if (x + 1) == mask.shape[1]:
break
else:
if(mask[y][x + 1] == 255):
horiPixel += 1
x += 1
else:
break
x1.append(horiPixel)
def mask(mask):
for y in range (mask.shape[0]):
for x in range (mask.shape[1]):
if(mask[y][x] == 255):
verticle(mask, y, x)
horizontal(mask, y, x)
equation(y,x)
def equation(y,x):
a = np.average(y) * (9.9 / 305)
c = np.average(x) * (9.9 / 305)
final = (a + c) / 2
total.append(final)
if __name__ == "__main__":
pool = Pool(8)
print(pool.map(mask, [blackMask] * 3))
pool.close()
| [
"To use multiprocessing to speed up your code, you can use the Pool class from the multiprocessing module. The Pool class allows you to run multiple processes in parallel, which can help speed up your code.\nTo use the Pool class, you need to first create a Pool object and then use the map method to apply a function to each element in a list in parallel. For example, to use the Pool class to speed up your code, you could do the following:\n# Import the Pool class from the multiprocessing module\nfrom multiprocessing import Pool\n\n# Create a Pool object with the desired number of processes\npool = Pool(8)\n\n# Use the map method to apply the mask function to each element in a list in parallel\npool.map(mask, [blackMask] * 80)\n\n# Close the pool when finished\npool.close()\n\nThis will create a Pool object with 8 processes, and then apply the mask function to 80 copies of the blackMask image in parallel. This should speed up your code by running multiple processes in parallel.\nHowever, note that using multiprocessing can be complex and may not always result in significant speedups, especially for relatively small and simple tasks like the one in your code. It may be worth trying to optimize your code in other ways before resorting to multiprocessing.\n"
] | [
1
] | [] | [] | [
"multiprocessing",
"python",
"python_3.x",
"python_multiprocessing"
] | stackoverflow_0074674131_multiprocessing_python_python_3.x_python_multiprocessing.txt |
Q:
Parser unrecognized arguments
I accept a file path as an argument for my .huy-file Python editor, but when I convert it to an EXE and run it, it says:
Editor.exe: error: unrecognized arguments: C:\Users\Doan 1\Desktop\test.huy
but when i run the python file:
Editor.py -f "C:\Users\Doan 1\Desktop\test.huy"
it works
How do I fix this?
this was the parser part:
#get arguments
parser = argparse.ArgumentParser(description='test')
parser.add_argument('-f', metavar='FILE')
args = parser.parse_args()
location = str(args)[13:-2]
if location and location != 'on':
load(location)
A:
To fix this issue, you need to pass the -f flag and the file path to the EXE file when you run it from the command line, just like you do when running the Python file.
Here is an example of how you can run the EXE file and pass the required arguments:
Editor.exe -f "C:\Users\Doan 1\Desktop\test.huy"
Make sure to include the -f flag and the file path in quotes if the file path contains spaces. This should allow the EXE file to parse the arguments and access the file at the specified location.
Here is the updated code for the parser part of your script:
#get arguments
parser = argparse.ArgumentParser(description='test')
parser.add_argument('-f', metavar='FILE')
args = parser.parse_args()
location = args.f
if location and location != 'on':
load(location)
I have removed the str() and slicing operations from the location variable, and instead directly accessed the f attribute of the args object. This should fix the issue and allow the EXE file to parse the arguments correctly.
| Parser unrecognized arguments | I accept a file path as an argument for my .huy file type python editor but when i change to exe and run it it says:
Editor.exe: error: unrecognized arguments: C:\Users\Doan 1\Desktop\test.huy
but when i run the python file:
Editor.py -f "C:\Users\Doan 1\Desktop\test.huy"
it works
how do i fix this?
this was the parser part:
#get arguments
parser = argparse.ArgumentParser(description='test')
parser.add_argument('-f', metavar='FILE')
args = parser.parse_args()
location = str(args)[13:-2]
if location and location != 'on':
load(location)
| [
"To fix this issue, you need to pass the -f flag and the file path to the EXE file when you run it from the command line, just like you do when running the Python file.\nHere is an example of how you can run the EXE file and pass the required arguments:\nEditor.exe -f \"C:\\Users\\Doan 1\\Desktop\\test.huy\"\n\nMake sure to include the -f flag and the file path in quotes if the file path contains spaces. This should allow the EXE file to parse the arguments and access the file at the specified location.\nHere is the updated code for the parser part of your script:\n#get arguments\nparser = argparse.ArgumentParser(description='test')\nparser.add_argument('-f', metavar='FILE')\nargs = parser.parse_args()\nlocation = args.f\nif location and location != 'on':\n load(location)\n\nI have removed the str() and slicing operations from the location variable, and instead directly accessed the f attribute of the args object. This should fix the issue and allow the EXE file to parse the arguments correctly.\n"
] | [
0
] | [] | [] | [
"argparse",
"python",
"python_3.x"
] | stackoverflow_0074672656_argparse_python_python_3.x.txt |
Q:
Discord bot cannot connect a voice channel
I'm trying to make a Discord bot for the first time, but the bot can't connect to the voice channel, and there is no error. Please help me, thanks.
This command runs successfully, but the bot cannot connect to my voice channel when it enters the 'else' branch.
Please help me. Thanks a lot.
`
class music_cog(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command()
async def join(self, ctx):
if not ctx.author.voice:
await ctx.send("You are not in a voice channel")
else:
print('join')
channel = ctx.author.voice.channel
await channel.connect()
`
I tried giving my bot Administrator permission in my server, but it still didn't work.
A:
Simple: all you need to do is install PyNaCl:
pip install PyNaCl
This is the cause of the error you got (the original answer attached a screenshot of the error here).
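For reference, when PyNaCl is missing, discord.py raises RuntimeError: PyNaCl library needed in order to use voice as soon as connect() is called; this is most likely the error shown in that screenshot.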
| Discord bot cannot connect a voice channel | I'm trying to make a discord bot first time, but the bot can't connect to the voice channel without any error, please help me, thanks.
This command worked successfully but the bot cannot connect my voice channel when enter 'else' statement.
Please help me.Thanks a lot.
`
class music_cog(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command()
async def join(self, ctx):
if not ctx.author.voice:
await ctx.send("You are not in a voice channel")
else:
print('join')
channel = ctx.author.voice.channel
await channel.connect()
`
I tried to give my bot administrator in my server, but it still didn't work.
| [
"Simple all u need to do is to download PyNaCl,\npip install PyNaCl\n\nhere is the error that u got\n"
] | [
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074672935_discord_discord.py_python.txt |
Q:
Error while executing bash for loop from python subprocess
I want to run this command (mentioned here) from Python:
ffmpeg -f concat -safe 0 -i <(for f in ./*.wav; do echo "file '$PWD/$f'"; done) -c copy output.wav
But i can't even run this:
subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True)
Error:
Traceback (most recent call last):
File "/media/russich555/hdd/Programming/Freelance/YouDo/21.intercom_record/test.py", line 36, in <module>
pr = subprocess.run(
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 546, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1022, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1899, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'for'
Process finished with exit code 1
Also tried add shell=True:
subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True, shell=True)
stderr output:
i: 1: Syntax error: Bad for loop variable
Also tried pass /bin/bash, because documentation says that shell=True using /bin/sh
subprocess.run('/bin/bash for i in {1..3}; do echo $i; done'.split(), capture_output=True)
stderr output:
/bin/bash: for: No such file or catalog
A:
There are two errors here, or really, three:
You are trying to use shell features without shell=True
You are trying to use Bash features, but the default shell on non-Windows platforms is POSIX sh; you can fix that with executable='/bin/bash' (obviously, adjust the path if necessary).
More fundamentally, though, you want to avoid using a subprocess when Python can perform the loop natively.
from pathlib import Path
import subprocess
subprocess.run(
['ffmpeg', '-f', 'concat', '-safe', '0',
'-i', '/dev/stdin', '-c', 'copy', 'output.wav'],
input="".join(f"file '{x}'\n" for x in Path.cwd().glob("*.wav")),
text=True, capture_output=True)
Relying on /dev/stdin for the input file is somewhat platform-dependent; in the worst case, you'll need to refactor to use a temporary file, or fall back to using the shell after all.
subprocess.run(r"""ffmpeg -f concat -safe 0 -i <(printf "file '%s'\n" $PWD/*.wav) -c copy output.wav""",
shell=True, executable='/bin/bash',
text=True, capture_output=True)
As noted in comments, you should either use shell=True and pass in a single string as the first argument for the shell to parse, or else pass in a list of tokens without shell=True and with no shell features like wildcard expansion, command substitution, variable interpolation, redirection, shell builtins, etc etc.
If you really wanted to explicitly run Bash, the syntax for that would look like
subprocess.run(
['bash', '-c',
r"""ffmpeg -f concat -safe 0 -i <(printf "file '%s'\n" $PWD/*.wav) -c copy output.wav"""],
text=True, capture_output=True)
(The syntax bash for loop etc tries to find a file named for and run it with Bash, passing in loop and etc as arguments.)
It's not clear why you are using capture_output=True here; in order for that to be useful, you need to examine the .stdout (and/or perhaps .stderr) attributes of the object returned by subprocess.run. If you just want to discard the output, use stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
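A small sketch of actually using the captured output (assuming a POSIX system where ls exists):
import subprocess

result = subprocess.run(['ls', '-l'], capture_output=True, text=True)
print(result.returncode)  # 0 on success
print(result.stdout)      # captured standard output as text
print(result.stderr)      # captured error output, if any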
| Error while executing bash for loop from python subprocess | I want to run this command from python mentioned here:
ffmpeg -f concat -safe 0 -i <(for f in ./*.wav; do echo "file '$PWD/$f'"; done) -c copy output.wav
But i can't even run this:
subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True)
Error:
Traceback (most recent call last):
File "/media/russich555/hdd/Programming/Freelance/YouDo/21.intercom_record/test.py", line 36, in <module>
pr = subprocess.run(
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 546, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1022, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1899, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'for'
Process finished with exit code 1
Also tried add shell=True:
subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True, shell=True)
stderr output:
i: 1: Syntax error: Bad for loop variable
Also tried pass /bin/bash, because documentation says that shell=True using /bin/sh
subprocess.run('/bin/bash for i in {1..3}; do echo $i; done'.split(), capture_output=True)
stderr output:
/bin/bash: for: No such file or catalog
| [
"There are two errors here, or really, three;\n\nYou are trying to use shell features without shell=True\nYou are trying to use Bash features, but the default shell on non-Windows platforms is POSIX sh; you can fix that with executable='/bin/bash' (obviously, adjust the path if necessary).\n\nMore fundamentally, though, you want to avoid using a subprocess when Python can perform the loop natively.\nfrom pathlib import Path\nimport subprocess\n\nsubprocess.run(\n ['ffmpeg', '-f', 'concat', '-safe', '0',\n '-i', '/dev/stdin', '-c', 'copy', 'output.wav'],\n input=\"\".join(f\"file '{x}'\\n\" for x in Path.cwd().glob(\"*.wav\")),\n text=True, capture_output=True)\n\nRelying on /dev/stdin for the input file is somewhat platform-dependent; in the worst case, you'll need to refactor to use a temporary file, or fall back to using the shell after all.\nsubprocess.run(r\"\"\"ffmpeg -f concat -safe 0 -i <(printf \"file '%s'\\n\" $PWD/*.wav) -c copy output.wav\"\"\",\n shell=True, executable='/bin/bash',\n text=True, capture_output=True)\n\nAs noted in comments, you should either use shell=True and pass in a single string as the first argument for the shell to parse, or else pass in a list of tokens without shell=True and with no shell features like wildcard expansion, command substitution, variable interpolation, redirection, shell builtins, etc etc.\nIf you really wanted to explicitly run Bash, the syntax for that would look like\nsubprocess.run(\n ['bash', '-c',\n r\"\"\"ffmpeg -f concat -safe 0 -i <(printf \"file '%s'\\n\" $PWD/*.wav) -c copy output.wav\"\"\"],\n text=True, capture_output=True)\n\n(The syntax bash for loop etc tries to find a file named for and run it with Bash, passing in loop and etc as arguments.)\nIt's not clear why you are using capture_output=True here; in order for that to be useful, you need to examine the .stdout (and/or perhaps .stderr) attributes of the object returned by subprocess.run. If you just want to discard the output, use stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL\n"
] | [
1
] | [] | [] | [
"bash",
"python",
"subprocess"
] | stackoverflow_0074673644_bash_python_subprocess.txt |
Q:
Python - open file in paint with whitespaces
I am trying to open an image in Paint with Python; however, the path contains a space, and Paint throws an error saying it cannot find the path, because the string has been split at the first space.
Can someone tell me how to solve this without changing the path?
Here is my code:
import subprocess, os
paintImage = "C:\\Users\\Me\MY Images\\image.png"
#get the path of paint:
paintPath = os.path.splitdrive(os.path.expanduser("~"))[0]+r"\WINDOWS\system32\mspaint.exe"
#open the file with paint
subprocess.Popen("%s %s" % (paintPath, paintImage))
However, Paint opens and says that C:\Users\Me\MY contains an invalid path, because the space was not accounted for. I have tried replacing the space with %20, but that does not work.
Thanks
A:
You can rewrite the following line
paintImage = "C:\\Users\\Me\MY Images\\image.png"
to
paintImage = "C:\\Users\\Me\MYImages\\image.png"
MYImages would be the new name of the folder, with no spaces.
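If renaming the folder is not an option (the question explicitly asks how to solve this without changing the path), passing the program and the path as a list lets subprocess keep the space intact, because no string splitting happens at all. A minimal sketch using the paths from the question:
import subprocess

paintPath = r"C:\WINDOWS\system32\mspaint.exe"
paintImage = r"C:\Users\Me\MY Images\image.png"  # path with a space, unchanged

# A list of arguments is passed through as-is; nothing is split on whitespace.
subprocess.Popen([paintPath, paintImage])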
| Python - open file in paint with whitespaces | I am trying to open an image in paint with python, however, the path contains a space, paint throws an error saying it cannot find the path because it has just split the string until the first space.
Can someone tell me how to solve this without changing the path?
Here is my code:
import subprocess, os
paintImage = "C:\\Users\\Me\MY Images\\image.png"
#get the path of paint:
paintPath = os.path.splitdrive(os.path.expanduser("~"))[0]+r"\WINDOWS\system32\mspaint.exe"
#open the file with paint
subprocess.Popen("%s %s" % (paintPath, paintImage))
However, paint opens and says that C:\Users\Me\MY contains an invalid path, because it has not counted the space. I have tried replacing the space with %20, but that does not work.
Thanks
| [
"You can rewrite the following line\npaintImage = \"C:\\\\Users\\\\Me\\MY Images\\\\image.png\" \n\nto\npaintImage = \"C:\\\\Users\\\\Me\\MYImages\\\\image.png\"\n\nMYImages should be the new name of the folder no spaces.\n"
] | [
0
] | [] | [] | [
"image",
"paint",
"python"
] | stackoverflow_0063423058_image_paint_python.txt |
Q:
Error status code 403 even with headers, Python Requests
I am sending a request to some URL. I copied the cURL command into a curl-to-Python tool to get the code, so all the headers are included, but my request is not working: I receive status code 403 when printing, and error code 1020 in the HTML output. The code is
import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.5',
# 'Accept-Encoding': 'gzip, deflate, br',
'DNT': '1',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-User': '?1',
}
response = requests.get('https://v2.gcchmc.org/book-appointment/', headers=headers)
print(response.status_code)
print(response.cookies.get_dict())
with open("test.html",'w') as f:
f.write(response.text)
I also get cookies, but I am not getting the desired response. I know I can do it with Selenium, but I want to know the reason behind this. Thanks in advance.
Note:
I have installed all the libraries required by requests, with the same versions as on the computer where it works, and it is still not working and throwing the 403 error.
A:
It works on my machine, so I am not sure what the problem is.
However, when I want to send a request which does not work, I often check whether it works using Playwright. Playwright uses a browser driver and thus mimics your actual browser when visiting the page. It can be installed using pip install playwright. When you try it for the first time, it may give an error telling you to install the browser drivers; just follow the instructions to do so.
With playwright you can try the following:
from playwright.sync_api import sync_playwright
url = 'https://v2.gcchmc.org/book-appointment/'
ua = (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/69.0.3497.100 Safari/537.36"
)
with sync_playwright() as p:
browser = p.chromium.launch(headless=False)
page = browser.new_page(user_agent=ua)
page.goto(url)
page.wait_for_timeout(1000)
html = page.content()
print(html)
Let me know if this works!
A:
The site is protected by cloudflare which aims to block, among other things, unauthorized data scraping. From What is data scraping?
The process of web scraping is fairly simple, though the
implementation can be complex. Web scraping occurs in 3 steps:
First the piece of code used to pull the information, which we call a scraper bot, sends an HTTP GET request to a specific website.
When the website responds, the scraper parses the HTML document for a specific pattern of data.
Once the data is extracted, it is converted into whatever specific format the scraper bot’s author designed.
You can use urllib instead of requests; it seems to be able to deal with Cloudflare:
import urllib.request

req = urllib.request.Request('https://v2.gcchmc.org/book-appointment/')
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0')
req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8')
req.add_header('Accept-Language', 'en-US,en;q=0.5')

r = urllib.request.urlopen(req).read().decode('utf-8')
with open("test.html", 'w', encoding="utf-8") as f:
    f.write(r)
| Error status code 403 even with headers, Python Requests | I am sending a request to some url. I Copied the curl url to get the code from curl to python tool. So all the headers are included, but my request is not working and I recieve status code 403 on printing and error code 1020 in the html output. The code is
import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.5',
# 'Accept-Encoding': 'gzip, deflate, br',
'DNT': '1',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-User': '?1',
}
response = requests.get('https://v2.gcchmc.org/book-appointment/', headers=headers)
print(response.status_code)
print(response.cookies.get_dict())
with open("test.html",'w') as f:
f.write(response.text)
I also get cookies but not getting the desired response. I know I can do it with selenium but I want to know the reason behind this. Thanks in advance.
Note:
I have installed all the libraries installed with request with same version as computer and still not working and throwing 403 error
| [
"It works on my machine, so I am not sure what the problem is.\nHowever, when I want send a request which does not work, I often try if it works using playwright. Playwright uses a browser driver and thus mimics your actual browser when visiting the page. It can be installed using pip install playwright. When you try it for the first time it may give an error which tells you to install the drivers, just follow the instruction to do so.\nWith playwright you can try the following:\nfrom playwright.sync_api import sync_playwright\n\n\nurl = 'https://v2.gcchmc.org/book-appointment/'\nua = (\n \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) \"\n \"AppleWebKit/537.36 (KHTML, like Gecko) \"\n \"Chrome/69.0.3497.100 Safari/537.36\"\n)\n\nwith sync_playwright() as p:\n browser = p.chromium.launch(headless=False)\n page = browser.new_page(user_agent=ua)\n page.goto(url)\n page.wait_for_timeout(1000)\n \n html = page.content()\n \nprint(html)\n\nLet me know if this works!\n",
"The site is protected by cloudflare which aims to block, among other things, unauthorized data scraping. From What is data scraping?\n\n\nThe process of web scraping is fairly simple, though the\nimplementation can be complex. Web scraping occurs in 3 steps:\n\nFirst the piece of code used to pull the information, which we call a scraper bot, sends an HTTP GET request to a specific website.\nWhen the website responds, the scraper parses the HTML document for a specific pattern of data.\nOnce the data is extracted, it is converted into whatever specific format the scraper bot’s author designed.\n\n\nYou can use urllib instead of requests, it seems to be able to deal with cloudflare\nreq = urllib.request.Request('https://v2.gcchmc.org/book-appointment/')\nreq.add_headers('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0')\nreq.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8')\nreq.add_header('Accept-Language', 'en-US,en;q=0.5')\n\nr = urllib.request.urlopen(req).read().decode('utf-8')\nwith open(\"test.html\", 'w', encoding=\"utf-8\") as f:\n f.write(r)\n\n"
] | [
1,
1
] | [] | [] | [
"python",
"python_requests"
] | stackoverflow_0074446830_python_python_requests.txt |
Q:
No module named 'graphql.type' in Django
I am New in Django and GraphQL, following the the article,
I am using python 3.8 in virtual env and 3.10 in windows, but same error occurs on both side, also tried the this Question, i also heard that GraphQL generate queries, But dont know how to generate it, But this error occurs:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 125, in inner_run autoreload.raise_last_exception()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/__init__.py", line 1, in <module>
from .enums import (
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/enums.py", line 17, in <module>
from graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema
ModuleNotFoundError: No module named 'graphql.type'
A:
You can try the following ways.
First, look for a graphql directory in the project, on the Python path; renaming it will fix the issue.
You can also try these commands:
pip install pip --upgrade
pip install setuptools --upgrade
pip install gql[all]
Hope this helps; if not, please let me know. Thanks.
| No module named 'graphql.type' in Django | I am New in Django and GraphQL, following the the article,
I am using python 3.8 in virtual env and 3.10 in windows, but same error occurs on both side, also tried the this Question, i also heard that GraphQL generate queries, But dont know how to generate it, But this error occurs:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 125, in inner_run autoreload.raise_last_exception()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/__init__.py", line 1, in <module>
from .enums import (
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/enums.py", line 17, in <module>
from graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema
ModuleNotFoundError: No module named 'graphql.type'```
| [
"You can try these following ways,\nOne, you can find graphql directory in the project, on python path. renaming it will fix the issue.\nAnd also you can try these commands,\npip install pip --upgrade\npip install setuptools --upgrade\npip install gql[all]\n\nHope this helps, if not please let know. Thanks\n"
] | [
0
] | [] | [] | [
"ariadne_graphql",
"django",
"graphql",
"python"
] | stackoverflow_0074674006_ariadne_graphql_django_graphql_python.txt |
Q:
Convert bytes to a string
I captured the standard output of an external program into a bytes object:
>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]
>>>
>>> command_stdout
b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n'
I want to convert that to a normal Python string, so that I can print it like this:
>>> print(command_stdout)
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2
How do I convert the bytes object to a str with Python 3?
A:
Decode the bytes object to produce a string:
>>> b"abcde".decode("utf-8")
'abcde'
The above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in!
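If the data is in some other encoding, or may contain bytes that are invalid in the encoding you pick, both cases can be handled explicitly (a small sketch; the byte string here happens to be Latin-1 encoded):
raw = b'caf\xe9'
print(raw.decode('latin-1'))                  # 'café'
print(raw.decode('utf-8', errors='replace'))  # 'caf�' - the bad byte is replaced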
A:
Decode the byte string and turn it in to a character (Unicode) string.
Python 3:
encoding = 'utf-8'
b'hello'.decode(encoding)
or
str(b'hello', encoding)
Python 2:
encoding = 'utf-8'
'hello'.decode(encoding)
or
unicode('hello', encoding)
A:
This joins together a list of bytes into a string:
>>> bytes_data = [112, 52, 52]
>>> "".join(map(chr, bytes_data))
'p44'
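The same list can also be converted by building a bytes object first and decoding it, which avoids the per-character loop (assuming the values form valid text in the chosen encoding):
bytes_data = [112, 52, 52]
print(bytes(bytes_data).decode())  # 'p44'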
A:
If you don't know the encoding, then to read binary input into a string in a Python 3 and Python 2 compatible way, use the ancient MS-DOS CP437 encoding:
PY3K = sys.version_info >= (3, 0)
lines = []
for line in stream:
if not PY3K:
lines.append(line)
else:
lines.append(line.decode('cp437'))
Because encoding is unknown, expect non-English symbols to translate to characters of cp437 (English characters are not translated, because they match in most single byte encodings and UTF-8).
Decoding arbitrary binary input to UTF-8 is unsafe, because you may get this:
>>> b'\x00\x01\xffsd'.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid
start byte
The same applies to latin-1, which was popular (the default?) for Python 2. See the missing points in Codepage Layout - it is where Python chokes with infamous ordinal not in range.
UPDATE 20150604: There are rumors that Python 3 has the surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests, [binary] -> [str] -> [binary], to validate both performance and reliability.
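That round-trip test is easy to sketch on Python 3 (surrogateescape is lossless by design, so the assertion holds for arbitrary input bytes):
data = bytes(range(256))  # arbitrary binary input
text = data.decode('utf-8', 'surrogateescape')
assert text.encode('utf-8', 'surrogateescape') == data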
UPDATE 20170116: Thanks to comment by Nearoo - there is also a possibility to slash escape all unknown bytes with backslashreplace error handler. That works only for Python 3, so even with this workaround you will still get inconsistent output from different Python versions:
PY3K = sys.version_info >= (3, 0)
lines = []
for line in stream:
if not PY3K:
lines.append(line)
else:
lines.append(line.decode('utf-8', 'backslashreplace'))
See Python’s Unicode Support for details.
UPDATE 20170119: I decided to implement slash escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version.
# --- preparation
import codecs
def slashescape(err):
""" codecs error handler. err is UnicodeDecode instance. return
a tuple with a replacement for the unencodable part of the input
and a position where encoding should continue"""
#print err, dir(err), err.start, err.end, err.object[:err.start]
thebyte = err.object[err.start:err.end]
repl = u'\\x'+hex(ord(thebyte))[2:]
return (repl, err.end)
codecs.register_error('slashescape', slashescape)
# --- processing
stream = [b'\x80abc']
lines = []
for line in stream:
lines.append(line.decode('utf-8', 'slashescape'))
A:
In Python 3, the default encoding is "utf-8", so you can directly use:
b'hello'.decode()
which is equivalent to
b'hello'.decode(encoding="utf-8")
On the other hand, in Python 2, encoding defaults to the default string encoding. Thus, you should use:
b'hello'.decode(encoding)
where encoding is the encoding you want.
Note: support for keyword arguments was added in Python 2.7.
A:
I think you actually want this:
>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]
>>> command_text = command_stdout.decode(encoding='windows-1252')
Aaron's answer was correct, except that you need to know which encoding to use. And I believe that Windows uses 'windows-1252'. It will only matter if you have some unusual (non-ASCII) characters in your content, but then it will make a difference.
By the way, the fact that it does matter is the reason that Python moved to using two different types for binary and text data: it can't convert magically between them, because it doesn't know the encoding unless you tell it! The only way YOU would know is to read the Windows documentation (or read it here).
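To make that distinction concrete, here is a small round-trip sketch (an added illustration, not part of the original answer) where the encoding is chosen explicitly in both directions:
>>> text = 'café'
>>> raw = text.encode('windows-1252')   # str -> bytes
>>> raw
b'caf\xe9'
>>> raw.decode('windows-1252')          # bytes -> str, same encoding
'café'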
A:
Since this question is actually asking about subprocess output, you have more direct approaches available. The most modern would be using subprocess.check_output and passing text=True (Python 3.7+) to automatically decode stdout using the system default coding:
text = subprocess.check_output(["ls", "-l"], text=True)
For Python 3.6, Popen accepts an encoding keyword:
>>> from subprocess import Popen, PIPE
>>> text = Popen(['ls', '-l'], stdout=PIPE, encoding='utf-8').communicate()[0]
>>> type(text)
str
>>> print(text)
total 0
-rw-r--r-- 1 wim badger 0 May 31 12:45 some_file.txt
The general answer to the question in the title, if you're not dealing with subprocess output, is to decode bytes to text:
>>> b'abcde'.decode()
'abcde'
With no argument, sys.getdefaultencoding() will be used. If your data is not sys.getdefaultencoding(), then you must specify the encoding explicitly in the decode call:
>>> b'caf\xe9'.decode('cp1250')
'café'
A:
Set universal_newlines to True, i.e.
command_stdout = Popen(['ls', '-l'], stdout=PIPE, universal_newlines=True).communicate()[0]
A:
To interpret a byte sequence as a text, you have to know the
corresponding character encoding:
unicode_text = bytestring.decode(character_encoding)
Example:
>>> b'\xc2\xb5'.decode('utf-8')
'µ'
ls command may produce output that can't be interpreted as text. File names
on Unix may be any sequence of bytes except slash b'/' and zero
b'\0':
>>> open(bytes(range(0x100)).translate(None, b'\0/'), 'w').close()
Trying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError.
It can be worse. The decoding may fail silently and produce mojibake
if you use a wrong incompatible encoding:
>>> '—'.encode('utf-8').decode('cp1252')
'â€”'
The data is corrupted but your program remains unaware that a failure
has occurred.
In general, what character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. Some outcomes are more likely than others and therefore chardet module exists that can guess the character encoding. A single Python script may use multiple character encodings in different places.
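As a minimal sketch of such guessing (an added illustration; chardet is a third-party package and its guess can be wrong, especially for short inputs):
import chardet

raw = 'déjà vu'.encode('cp1252')
guess = chardet.detect(raw)              # e.g. {'encoding': ..., 'confidence': ...}
if guess['encoding'] is not None:
    text = raw.decode(guess['encoding'])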
ls output can be converted to a Python string using os.fsdecode()
function that succeeds even for undecodable
filenames (it uses
sys.getfilesystemencoding() and surrogateescape error handler on
Unix):
import os
import subprocess
output = os.fsdecode(subprocess.check_output('ls'))
To get the original bytes, you could use os.fsencode().
If you pass universal_newlines=True parameter then subprocess uses
locale.getpreferredencoding(False) to decode bytes e.g., it can be
cp1252 on Windows.
To decode the byte stream on-the-fly, io.TextIOWrapper() could be used, as in the sketch below.
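A sketch of on-the-fly decoding (an added illustration, assuming the child process writes UTF-8):
import io
import subprocess

proc = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
for line in io.TextIOWrapper(proc.stdout, encoding='utf-8'):
    print(line, end='')
proc.wait()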
Different commands may use different character encodings for their
output e.g., dir internal command (cmd) may use cp437. To decode its
output, you could pass the encoding explicitly (Python 3.6+):
output = subprocess.check_output('dir', shell=True, encoding='cp437')
The filenames may differ from os.listdir() (which uses Windows
Unicode API) e.g., '\xb6' can be substituted with '\x14'—Python's
cp437 codec maps b'\x14' to control character U+0014 instead of
U+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string
A:
While @Aaron Maenpaa's answer just works, a user recently asked:
Is there any more simply way? 'fhand.read().decode("ASCII")' [...] It's so long!
You can use:
command_stdout.decode()
decode() has a standard argument:
codecs.decode(obj, encoding='utf-8', errors='strict')
A:
If you should get the following by trying decode():
AttributeError: 'str' object has no attribute 'decode'
You can also specify the encoding type straight in a cast:
>>> my_byte_str
b'Hello World'
>>> str(my_byte_str, 'utf-8')
'Hello World'
A:
If you have had this error:
utf-8 codec can't decode byte 0x8a,
then it is better to use the following code to convert bytes to a string:
bytes = b"abcdefg"
string = bytes.decode("utf-8", "ignore")
A:
Bytes
m=b'This is bytes'
Converting to string
Method 1
m.decode("utf-8")
or
m.decode()
Method 2
import codecs
codecs.decode(m,encoding="utf-8")
or
import codecs
codecs.decode(m)
Method 3
str(m,encoding="utf-8")
or
str(m)[2:-1]
Result
'This is bytes'
A:
For Python 3, this is a much safer and Pythonic approach to convert from byte to string:
def byte_to_str(bytes_or_str):
if isinstance(bytes_or_str, bytes): # Check if it's in bytes
print(bytes_or_str.decode('utf-8'))
else:
print("Object not of byte type")
byte_to_str(b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n')
Output:
total 0
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2
A:
When working with data from Windows systems (with \r\n line endings), my answer is
String = Bytes.decode("utf-8").replace("\r\n", "\n")
Why? Try this with a multiline Input.txt:
Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8")
open("Output.txt", "w").write(String)
All your line endings will be doubled (to \r\r\n), leading to extra empty lines. Python's text-read functions usually normalize line endings so that strings use only \n. If you receive binary data from a Windows system, Python does not have a chance to do that. Thus,
Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8").replace("\r\n", "\n")
open("Output.txt", "w").write(String)
will replicate your original file.
A:
We can decode the bytes object to produce a string using bytes.decode(encoding='utf-8', errors='strict').
For documentation see bytes.decode.
Python 3 example:
byte_value = b"abcde"
print("Initial value = {}".format(byte_value))
print("Initial value type = {}".format(type(byte_value)))
string_value = byte_value.decode("utf-8")
# utf-8 is used here because it is a very common encoding, but you need to use the encoding your data is actually in.
print("------------")
print("Converted value = {}".format(string_value))
print("Converted value type = {}".format(type(string_value)))
Output:
Initial value = b'abcde'
Initial value type = <class 'bytes'>
------------
Converted value = abcde
Converted value type = <class 'str'>
Note: In Python 3, by default the encoding type is UTF-8. So, <byte_string>.decode("utf-8") can be also written as <byte_string>.decode()
A:
For your specific case of "run a shell command and get its output as text instead of bytes", on Python 3.7, you should use subprocess.run and pass in text=True (as well as capture_output=True to capture the output)
command_result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
command_result.stdout # is a `str` containing your program's stdout
text used to be called universal_newlines, and was changed (well, aliased) in Python 3.7. If you want to support Python versions before 3.7, pass in universal_newlines=True instead of text=True
A:
From sys — System-specific parameters and functions:
To write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').
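A tiny sketch building on that (an added illustration): pass bytes from stdin to stdout untouched, with no str involved at any point:
import sys

data = sys.stdin.buffer.read()     # raw bytes in
sys.stdout.buffer.write(data)      # raw bytes out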
A:
Try this:
bytes.fromhex('c3a9').decode('utf-8')
A:
Decode with .decode(). This will decode the bytes into a string. Pass in 'utf-8' as the encoding argument.
A:
def toString(string): 
    try:
        return string.decode("utf-8")
    except (UnicodeDecodeError, AttributeError):
        return string
b = b'97.080.500'
s = '97.080.500'
print(toString(b))
print(toString(s))
A:
If you want to convert any bytes, not just string converted to bytes:
import base64
import json

with open("bytesfile", "rb") as infile:
    str1 = base64.b85encode(infile.read())

with open("bytesfile", "rb") as infile:
    str2 = json.dumps(list(infile.read()))
This is not very efficient, however. It will turn a 2 MB picture into 9 MB.
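As an added note (not part of the original answer), the base85 text can be turned back into the original bytes with base64.b85decode:
import base64

text = base64.b85encode(b'\x00\xff\x10').decode('ascii')
original = base64.b85decode(text)
assert original == b'\x00\xff\x10'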
A:
Try using this one; this function will ignore bytes that are not valid in the given character set (such as UTF-8) and return a clean string. It is tested for Python 3.6 and above.
def bin2str(text, encoding = 'utf-8'):
"""Converts a binary to Unicode string by removing all non Unicode char
text: binary string to work on
encoding: output encoding *utf-8"""
return text.decode(encoding, 'ignore')
Here, the function takes the binary and decodes it (converting binary data to characters using the given character set); the ignore argument discards all bytes that do not fit the character set, and the function finally returns your desired string value.
If you are not sure about the encoding, use sys.getdefaultencoding() to get the default encoding of your device.
A:
You can use the decode() method on the bytes object to convert it to a string:
command_stdout = command_stdout.decode()
Then you can print the string as usual:
print(command_stdout)
This will produce the following output:
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2
| Convert bytes to a string | I captured the standard output of an external program into a bytes object:
>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]
>>>
>>> command_stdout
b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n'
I want to convert that to a normal Python string, so that I can print it like this:
>>> print(command_stdout)
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2
How do I convert the bytes object to a str with Python 3?
| [
"Decode the bytes object to produce a string:\n>>> b\"abcde\".decode(\"utf-8\") \n'abcde'\n\nThe above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in!\n",
"Decode the byte string and turn it in to a character (Unicode) string.\n\nPython 3:\nencoding = 'utf-8'\nb'hello'.decode(encoding)\n\nor\nstr(b'hello', encoding)\n\n\nPython 2:\nencoding = 'utf-8'\n'hello'.decode(encoding)\n\nor\nunicode('hello', encoding)\n\n",
"This joins together a list of bytes into a string:\n>>> bytes_data = [112, 52, 52]\n>>> \"\".join(map(chr, bytes_data))\n'p44'\n\n",
"If you don't know the encoding, then to read binary input into string in Python 3 and Python 2 compatible way, use the ancient MS-DOS CP437 encoding:\nPY3K = sys.version_info >= (3, 0)\n\nlines = []\nfor line in stream:\n if not PY3K:\n lines.append(line)\n else:\n lines.append(line.decode('cp437'))\n\nBecause encoding is unknown, expect non-English symbols to translate to characters of cp437 (English characters are not translated, because they match in most single byte encodings and UTF-8).\nDecoding arbitrary binary input to UTF-8 is unsafe, because you may get this:\n>>> b'\\x00\\x01\\xffsd'.decode('utf-8')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid\nstart byte\n\nThe same applies to latin-1, which was popular (the default?) for Python 2. See the missing points in Codepage Layout - it is where Python chokes with infamous ordinal not in range.\nUPDATE 20150604: There are rumors that Python 3 has the surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests, [binary] -> [str] -> [binary], to validate both performance and reliability.\nUPDATE 20170116: Thanks to comment by Nearoo - there is also a possibility to slash escape all unknown bytes with backslashreplace error handler. That works only for Python 3, so even with this workaround you will still get inconsistent output from different Python versions:\nPY3K = sys.version_info >= (3, 0)\n\nlines = []\nfor line in stream:\n if not PY3K:\n lines.append(line)\n else:\n lines.append(line.decode('utf-8', 'backslashreplace'))\n\nSee Python’s Unicode Support for details.\nUPDATE 20170119: I decided to implement slash escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version.\n# --- preparation\n\nimport codecs\n\ndef slashescape(err):\n \"\"\" codecs error handler. err is UnicodeDecode instance. return\n a tuple with a replacement for the unencodable part of the input\n and a position where encoding should continue\"\"\"\n #print err, dir(err), err.start, err.end, err.object[:err.start]\n thebyte = err.object[err.start:err.end]\n repl = u'\\\\x'+hex(ord(thebyte))[2:]\n return (repl, err.end)\n\ncodecs.register_error('slashescape', slashescape)\n\n# --- processing\n\nstream = [b'\\x80abc']\n\nlines = []\nfor line in stream:\n lines.append(line.decode('utf-8', 'slashescape'))\n\n",
"In Python 3, the default encoding is \"utf-8\", so you can directly use:\nb'hello'.decode()\n\nwhich is equivalent to\nb'hello'.decode(encoding=\"utf-8\")\n\nOn the other hand, in Python 2, encoding defaults to the default string encoding. Thus, you should use:\nb'hello'.decode(encoding)\n\nwhere encoding is the encoding you want.\nNote: support for keyword arguments was added in Python 2.7.\n",
"I think you actually want this:\n>>> from subprocess import *\n>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]\n>>> command_text = command_stdout.decode(encoding='windows-1252')\n\nAaron's answer was correct, except that you need to know which encoding to use. And I believe that Windows uses 'windows-1252'. It will only matter if you have some unusual (non-ASCII) characters in your content, but then it will make a difference.\nBy the way, the fact that it does matter is the reason that Python moved to using two different types for binary and text data: it can't convert magically between them, because it doesn't know the encoding unless you tell it! The only way YOU would know is to read the Windows documentation (or read it here).\n",
"Since this question is actually asking about subprocess output, you have more direct approaches available. The most modern would be using subprocess.check_output and passing text=True (Python 3.7+) to automatically decode stdout using the system default coding:\ntext = subprocess.check_output([\"ls\", \"-l\"], text=True)\n\nFor Python 3.6, Popen accepts an encoding keyword:\n>>> from subprocess import Popen, PIPE\n>>> text = Popen(['ls', '-l'], stdout=PIPE, encoding='utf-8').communicate()[0]\n>>> type(text)\nstr\n>>> print(text)\ntotal 0\n-rw-r--r-- 1 wim badger 0 May 31 12:45 some_file.txt\n\nThe general answer to the question in the title, if you're not dealing with subprocess output, is to decode bytes to text:\n>>> b'abcde'.decode()\n'abcde'\n\nWith no argument, sys.getdefaultencoding() will be used. If your data is not sys.getdefaultencoding(), then you must specify the encoding explicitly in the decode call:\n>>> b'caf\\xe9'.decode('cp1250')\n'café'\n\n",
"Set universal_newlines to True, i.e.\ncommand_stdout = Popen(['ls', '-l'], stdout=PIPE, universal_newlines=True).communicate()[0]\n\n",
"To interpret a byte sequence as a text, you have to know the\ncorresponding character encoding:\nunicode_text = bytestring.decode(character_encoding)\n\nExample:\n>>> b'\\xc2\\xb5'.decode('utf-8')\n'µ'\n\nls command may produce output that can't be interpreted as text. File names\non Unix may be any sequence of bytes except slash b'/' and zero\nb'\\0':\n>>> open(bytes(range(0x100)).translate(None, b'\\0/'), 'w').close()\n\nTrying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError.\nIt can be worse. The decoding may fail silently and produce mojibake\nif you use a wrong incompatible encoding:\n>>> '—'.encode('utf-8').decode('cp1252')\n'—'\n\nThe data is corrupted but your program remains unaware that a failure\nhas occurred.\nIn general, what character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. Some outcomes are more likely than others and therefore chardet module exists that can guess the character encoding. A single Python script may use multiple character encodings in different places.\n\nls output can be converted to a Python string using os.fsdecode()\nfunction that succeeds even for undecodable\nfilenames (it uses\nsys.getfilesystemencoding() and surrogateescape error handler on\nUnix):\nimport os\nimport subprocess\n\noutput = os.fsdecode(subprocess.check_output('ls'))\n\nTo get the original bytes, you could use os.fsencode().\nIf you pass universal_newlines=True parameter then subprocess uses\nlocale.getpreferredencoding(False) to decode bytes e.g., it can be\ncp1252 on Windows.\nTo decode the byte stream on-the-fly,\nio.TextIOWrapper()\ncould be used: example.\nDifferent commands may use different character encodings for their\noutput e.g., dir internal command (cmd) may use cp437. To decode its\noutput, you could pass the encoding explicitly (Python 3.6+):\noutput = subprocess.check_output('dir', shell=True, encoding='cp437')\n\nThe filenames may differ from os.listdir() (which uses Windows\nUnicode API) e.g., '\\xb6' can be substituted with '\\x14'—Python's\ncp437 codec maps b'\\x14' to control character U+0014 instead of\nU+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string\n",
"While @Aaron Maenpaa's answer just works, a user recently asked:\n\nIs there any more simply way? 'fhand.read().decode(\"ASCII\")' [...] It's so long!\n\nYou can use:\ncommand_stdout.decode()\n\ndecode() has a standard argument:\n\ncodecs.decode(obj, encoding='utf-8', errors='strict')\n\n",
"If you should get the following by trying decode():\n\nAttributeError: 'str' object has no attribute 'decode'\n\nYou can also specify the encoding type straight in a cast:\n>>> my_byte_str\nb'Hello World'\n\n>>> str(my_byte_str, 'utf-8')\n'Hello World'\n\n",
"If you have had this error:\n\nutf-8 codec can't decode byte 0x8a,\n\nthen it is better to use the following code to convert bytes to a string:\nbytes = b\"abcdefg\"\nstring = bytes.decode(\"utf-8\", \"ignore\") \n\n",
"Bytes\nm=b'This is bytes'\n\nConverting to string\nMethod 1\nm.decode(\"utf-8\")\n\nor\nm.decode()\n\nMethod 2\nimport codecs\ncodecs.decode(m,encoding=\"utf-8\")\n\nor\nimport codecs\ncodecs.decode(m)\n\nMethod 3\nstr(m,encoding=\"utf-8\")\n\nor\nstr(m)[2:-1]\n\nResult\n'This is bytes'\n\n",
"For Python 3, this is a much safer and Pythonic approach to convert from byte to string:\ndef byte_to_str(bytes_or_str):\n if isinstance(bytes_or_str, bytes): # Check if it's in bytes\n print(bytes_or_str.decode('utf-8'))\n else:\n print(\"Object not of byte type\")\n\nbyte_to_str(b'total 0\\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\\n')\n\nOutput:\ntotal 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n\n",
"When working with data from Windows systems (with \\r\\n line endings), my answer is\nString = Bytes.decode(\"utf-8\").replace(\"\\r\\n\", \"\\n\")\n\nWhy? Try this with a multiline Input.txt:\nBytes = open(\"Input.txt\", \"rb\").read()\nString = Bytes.decode(\"utf-8\")\nopen(\"Output.txt\", \"w\").write(String)\n\nAll your line endings will be doubled (to \\r\\r\\n), leading to extra empty lines. Python's text-read functions usually normalize line endings so that strings use only \\n. If you receive binary data from a Windows system, Python does not have a chance to do that. Thus,\nBytes = open(\"Input.txt\", \"rb\").read()\nString = Bytes.decode(\"utf-8\").replace(\"\\r\\n\", \"\\n\")\nopen(\"Output.txt\", \"w\").write(String)\n\nwill replicate your original file.\n",
"We can decode the bytes object to produce a string using bytes.decode(encoding='utf-8', errors='strict').\nFor documentation see bytes.decode.\nPython 3 example:\nbyte_value = b\"abcde\"\nprint(\"Initial value = {}\".format(byte_value))\nprint(\"Initial value type = {}\".format(type(byte_value)))\nstring_value = byte_value.decode(\"utf-8\")\n# utf-8 is used here because it is a very common encoding, but you need to use the encoding your data is actually in.\nprint(\"------------\")\nprint(\"Converted value = {}\".format(string_value))\nprint(\"Converted value type = {}\".format(type(string_value)))\n\nOutput:\nInitial value = b'abcde'\nInitial value type = <class 'bytes'>\n------------\nConverted value = abcde\nConverted value type = <class 'str'>\n\nNote: In Python 3, by default the encoding type is UTF-8. So, <byte_string>.decode(\"utf-8\") can be also written as <byte_string>.decode()\n",
"For your specific case of \"run a shell command and get its output as text instead of bytes\", on Python 3.7, you should use subprocess.run and pass in text=True (as well as capture_output=True to capture the output)\ncommand_result = subprocess.run([\"ls\", \"-l\"], capture_output=True, text=True)\ncommand_result.stdout # is a `str` containing your program's stdout\n\ntext used to be called universal_newlines, and was changed (well, aliased) in Python 3.7. If you want to support Python versions before 3.7, pass in universal_newlines=True instead of text=True\n",
"From sys — System-specific parameters and functions:\nTo write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').\n",
"Try this:\nbytes.fromhex('c3a9').decode('utf-8') \n\n",
"Decode with .decode(). This will decode the string. Pass in 'utf-8') as the value in the inside.\n",
"def toString(string): \n try:\n return v.decode(\"utf-8\")\n except ValueError:\n return string\n\nb = b'97.080.500'\ns = '97.080.500'\nprint(toString(b))\nprint(toString(s))\n\n",
"If you want to convert any bytes, not just string converted to bytes:\nwith open(\"bytesfile\", \"rb\") as infile:\n str = base64.b85encode(imageFile.read())\n\nwith open(\"bytesfile\", \"rb\") as infile:\n str2 = json.dumps(list(infile.read()))\n\nThis is not very efficient, however. It will turn a 2 MB picture into 9 MB.\n",
"Try using this one; this function will ignore all the non-character sets (like UTF-8) binaries and return a clean string. It is tested for Python 3.6 and above.\ndef bin2str(text, encoding = 'utf-8'):\n \"\"\"Converts a binary to Unicode string by removing all non Unicode char\n text: binary string to work on\n encoding: output encoding *utf-8\"\"\"\n\n return text.decode(encoding, 'ignore')\n\nHere, the function will take the binary and decode it (converts binary data to characters using the Python predefined character set and the ignore argument ignores all non-character set data from your binary and finally returns your desired string value.\nIf you are not sure about the encoding, use sys.getdefaultencoding() to get the default encoding of your device.\n",
"You can use the decode() method on the bytes object to convert it to a string:\ncommand_stdout = command_stdout.decode()\nThen you can print the string as usual:\n\n\nprint(command_stdout)\n\nThis will produce the following output:\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n\n"
] | [
5363,
393,
256,
127,
120,
48,
43,
38,
34,
28,
20,
19,
19,
9,
8,
8,
5,
4,
3,
3,
2,
2,
1,
0
] | [] | [] | [
"python",
"python_3.x",
"string"
] | stackoverflow_0000606191_python_python_3.x_string.txt |
Q:
Kivymd APK App (created with Buildozer) closes after opening up
I have created an APK file from Python Kivy & KivyMD, using Buildozer. When I open the app after installing it, it shows the splash image and then closes.
I have checked and found that there seems to be no issue in main.py, as I have correctly listed Kivy & KivyMD in the requirements in the buildozer.spec file (kivy==2.0.0,kivymd==0.104.1).
This is my code..
main.py
import kivymd
from kivymd.app import MDApp
from kivymd.uix.screen import Screen
from kivy.lang import Builder
from kivymd.uix.button import MDRectangleFlatButton, MDFlatButton
from kivymd.uix.dialog import MDDialog
import helper
import model
class DemoApp(MDApp):
def build(self):
self.theme_cls.primary_palette = "Green"
self.screen = Builder.load_string(helper.navigation_helper)
return self.screen
def show_data(self): #(self,obj):
self.abc = model.chat(self.screen.ids.user_name.text)
close_button = MDFlatButton(text='Close', on_release=self.close_dialog)
self.dialog = MDDialog(title='First-aid Suggested..', text=self.abc, size_hint=(0.7, 1), buttons=[close_button])
self.dialog.open()
def close_dialog(self, obj):
self.dialog.dismiss()
DemoApp().run()
model.py
import nltk
# nltk.download('punkt')
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import random
import json
from keras.layers import *
from keras.models import *
with open("intents.json") as file:
data = json.load(file)
words = []
labels = []
docs_x = []
docs_y = []
for intent in data["intents"]:
for pattern in intent["patterns"]:
wrds = nltk.word_tokenize(pattern) # ['What', 'to', 'do', 'if', 'Cuts', '?']
words.extend(wrds)
docs_x.append(wrds) # input data (x)
docs_y.append(intent["tag"]) # corresponding output data (y)
if intent["tag"] not in labels:
labels.append(intent["tag"]) # all possible output data
words = [stemmer.stem(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))
labels = sorted(labels)
training = []
output = []
out_empty = [0 for _ in range(len(labels))] # [1,2,3] [0,0,0]
for x, doc in enumerate(docs_x):
bag = []
wrds = [stemmer.stem(w) for w in doc] # doc = ['What', 'to', 'do', 'if', 'Cuts', '?'] & wrds = ['What', 'to', 'do', 'if', 'Cut', '?']
for w in words:
if w in wrds:
bag.append(1)
else:
bag.append(0)
output_row = out_empty[:]
output_row[labels.index(docs_y[x])] = 1
training.append(bag)
output.append(output_row)
from keras.models import load_model
model = load_model("First_Aid_model.h5")
def bag_of_words(s,words):
bag = [0 for _ in range(len(words))]
s_words = nltk.word_tokenize(s)
s_words = [stemmer.stem(word.lower()) for word in s_words]
for se in s_words:
for i, w in enumerate(words):
if w == se:
bag[i] = 1
return bag
def chat(inp):
results = model.predict([bag_of_words(inp,words)])
result = results[0]
results_index = numpy.argmax(result)
tag = labels[results_index]
if result[results_index] > 0.5:
for tg in data["intents"]:
if tg['tag'] == tag:
responses = tg['responses']
res = random.choice(responses).split('. ')
res = [res[_]+'.' for _ in range(len(res)) if not res[_].endswith('.')]
res = ('\n').join(res)
return(res + "\n")
else:
return("I didnt get that, try again")
helper.py
navigation_helper = """
Screen:
MDNavigationLayout:
ScreenManager:
Screen:
BoxLayout:
orientation: 'vertical'
MDToolbar:
title: "Navigation Drawer"
elevation: 10
left_action_items: [['menu', lambda x: nav_drawer.set_state('toggle')]]
Widget:
MDTextField:
id: user_name
hint_text: "Enter username"
helper_text: "or click on forgot username"
helper_text_mode: "on_focus"
icon_right: "redhat"
icon_right_color: app.theme_cls.primary_color
pos_hint:{'center_x': 0.5, 'center_y': 0.5}
size_hint_x:None
width:300
MDRectangleFlatButton:
text: "Show"
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
on_release: app.show_data()
Widget:
MDNavigationDrawer:
id: nav_drawer
BoxLayout:
orientation: 'vertical'
padding: "8dp"
spacing: "8dp"
Image:
id: avatar
size_hint: (1,1)
source: "Capture.PNG"
MDLabel:
text: "First-aid Bot"
font_style: "Subtitle1"
size_hint_y: None
height: self.texture_size[1]
MDLabel:
text: "[email protected]"
size_hint_y: None
font_style: "Caption"
height: self.texture_size[1]
ScrollView:
MDList:
OneLineIconListItem:
text: "Profile"
IconLeftWidget:
icon: "face-profile"
OneLineIconListItem:
text: "Upload"
IconLeftWidget:
icon: "upload"
OneLineIconListItem:
text: "Logout"
IconLeftWidget:
icon: "logout"
"""
buildozer.spec
[app]
# (str) Title of your application
title = Bot
# (str) Package name
package.name = bot
# (str) Package domain (needed for android/ios packaging)
package.domain = org.bot
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (let empty to not exclude anything)
#source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
#source.exclude_dirs = tests, bin
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3,kivy==2.0.0,kivymd==0.104.1
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, sensorLandscape, portrait or all)
orientation = portrait
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
# change the major version of python used by the app
osx.python_version = 3
# Kivy version to use
osx.kivy_version = 1.9.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 0
# (string) Presplash background color (for new android toolchain)
# Supported formats are: #RRGGBB #AARRGGBB or one of the following names:
# red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray,
# darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy,
# olive, purple, silver, teal.
#android.presplash_color = #FFFFFF
# (list) Permissions
#android.permissions = INTERNET
# (int) Target Android API, should be as high as possible.
#android.api = 27
# (int) Minimum API your APK will support.
#android.minapi = 21
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
#android.ndk = 19b
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
#android.ndk_api = 21
# (bool) Use --private data storage (True) or --dir public storage (False)
#android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
# android.skip_update = False
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
# android.accept_sdk_license = False
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (str) Android app theme, default is ok for Kivy-based app
# android.apptheme = "@android:style/Theme.NoTitleBar"
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) add java compile options
# this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option
# see https://developer.android.com/studio/write/java8-support for further information
# android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8"
# (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies}
# please enclose in double quotes
# e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }"
#android.add_gradle_repositories =
# (list) packaging options to add
# see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html
# can be necessary to solve conflicts in gradle_dependencies
# please enclose in double quotes
# e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'"
#android.add_gradle_repositories =
# (list) Java classes to add as activities to the manifest.
#android.add_activities = com.example.ExampleActivity
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_arm64_v8a = libs/android-v8/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
#
# Python for android (p4a) specific
#
# (str) python-for-android fork to use, defaults to upstream (kivy)
#p4a.fork = kivy
# (str) python-for-android branch to use, defaults to master
#p4a.branch = master
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#p4a.source_dir =
# (str) The directory in which python-for-android should look for your own build recipes (if any)
#p4a.local_recipes =
# (str) Filename to the hook for p4a
#p4a.hook =
# (str) Bootstrap to use for android builds
# p4a.bootstrap = sdl2
# (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask)
#p4a.port =
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# Alternately, specify the URL and branch of a git checkout:
ios.kivy_ios_url = https://github.com/kivy/kivy-ios
ios.kivy_ios_branch = master
# Another platform dependency: ios-deploy
# Uncomment to use a custom checkout
#ios.ios_deploy_dir = ../ios_deploy
# Or specify URL and branch
ios.ios_deploy_url = https://github.com/phonegap/ios-deploy
ios.ios_deploy_branch = 1.7.0
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as a option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug
I believe the issue is that in the model.py file I have imported modules like Keras, NLTK, etc., but I am not listing them in the requirements.
If this is the issue, then please give the complete statement that I should write in the requirements, based on my model.py and the other files.
Please guide me.
A:
If you have some other plugins just add them like this:
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3,kivy==2.0.0,kivymd==0.104.1,pluginname==version
A:
requirements = python3,kivy==2.0.0,kivymd==0.104.1,nltk,numpy,keras
That's all you need
| Kivymd APK App (created with Buildozer) closes after opening up | I have created an APK file from Python Kivy & KivyMD, using Buildozer. When I open the app after installing it, it shows the splash image and then closes.
I have checked and found that there seems to be no issue in main.py, as I have correctly listed Kivy & KivyMD in the requirements in the buildozer.spec file (kivy==2.0.0,kivymd==0.104.1).
This is my code..
main.py
import kivymd
from kivymd.app import MDApp
from kivymd.uix.screen import Screen
from kivy.lang import Builder
from kivymd.uix.button import MDRectangleFlatButton, MDFlatButton
from kivymd.uix.dialog import MDDialog
import helper
import model
class DemoApp(MDApp):
def build(self):
self.theme_cls.primary_palette = "Green"
self.screen = Builder.load_string(helper.navigation_helper)
return self.screen
def show_data(self): #(self,obj):
self.abc = model.chat(self.screen.ids.user_name.text)
close_button = MDFlatButton(text='Close', on_release=self.close_dialog)
self.dialog = MDDialog(title='First-aid Suggested..', text=self.abc, size_hint=(0.7, 1), buttons=[close_button])
self.dialog.open()
def close_dialog(self, obj):
self.dialog.dismiss()
DemoApp().run()
model.py
import nltk
# nltk.download('punkt')
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import random
import json
from keras.layers import *
from keras.models import *
with open("intents.json") as file:
data = json.load(file)
words = []
labels = []
docs_x = []
docs_y = []
for intent in data["intents"]:
for pattern in intent["patterns"]:
wrds = nltk.word_tokenize(pattern) # ['What', 'to', 'do', 'if', 'Cuts', '?']
words.extend(wrds)
docs_x.append(wrds) # input data (x)
docs_y.append(intent["tag"]) # corresponding output data (y)
if intent["tag"] not in labels:
labels.append(intent["tag"]) # all possible output data
words = [stemmer.stem(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))
labels = sorted(labels)
training = []
output = []
out_empty = [0 for _ in range(len(labels))] # [1,2,3] [0,0,0]
for x, doc in enumerate(docs_x):
bag = []
wrds = [stemmer.stem(w) for w in doc] # doc = ['What', 'to', 'do', 'if', 'Cuts', '?'] & wrds = ['What', 'to', 'do', 'if', 'Cut', '?']
for w in words:
if w in wrds:
bag.append(1)
else:
bag.append(0)
output_row = out_empty[:]
output_row[labels.index(docs_y[x])] = 1
training.append(bag)
output.append(output_row)
from keras.models import load_model
model = load_model("First_Aid_model.h5")
def bag_of_words(s,words):
bag = [0 for _ in range(len(words))]
s_words = nltk.word_tokenize(s)
s_words = [stemmer.stem(word.lower()) for word in s_words]
for se in s_words:
for i, w in enumerate(words):
if w == se:
bag[i] = 1
return bag
def chat(inp):
results = model.predict([bag_of_words(inp,words)])
result = results[0]
results_index = numpy.argmax(result)
tag = labels[results_index]
if result[results_index] > 0.5:
for tg in data["intents"]:
if tg['tag'] == tag:
responses = tg['responses']
res = random.choice(responses).split('. ')
res = [res[_]+'.' for _ in range(len(res)) if not res[_].endswith('.')]
res = ('\n').join(res)
return(res + "\n")
else:
return("I didnt get that, try again")
helper.py
navigation_helper = """
Screen:
MDNavigationLayout:
ScreenManager:
Screen:
BoxLayout:
orientation: 'vertical'
MDToolbar:
title: "Navigation Drawer"
elevation: 10
left_action_items: [['menu', lambda x: nav_drawer.set_state('toggle')]]
Widget:
MDTextField:
id: user_name
hint_text: "Enter username"
helper_text: "or click on forgot username"
helper_text_mode: "on_focus"
icon_right: "redhat"
icon_right_color: app.theme_cls.primary_color
pos_hint:{'center_x': 0.5, 'center_y': 0.5}
size_hint_x:None
width:300
MDRectangleFlatButton:
text: "Show"
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
on_release: app.show_data()
Widget:
MDNavigationDrawer:
id: nav_drawer
BoxLayout:
orientation: 'vertical'
padding: "8dp"
spacing: "8dp"
Image:
id: avatar
size_hint: (1,1)
source: "Capture.PNG"
MDLabel:
text: "First-aid Bot"
font_style: "Subtitle1"
size_hint_y: None
height: self.texture_size[1]
MDLabel:
text: "[email protected]"
size_hint_y: None
font_style: "Caption"
height: self.texture_size[1]
ScrollView:
MDList:
OneLineIconListItem:
text: "Profile"
IconLeftWidget:
icon: "face-profile"
OneLineIconListItem:
text: "Upload"
IconLeftWidget:
icon: "upload"
OneLineIconListItem:
text: "Logout"
IconLeftWidget:
icon: "logout"
"""
buildozer.spec
[app]
# (str) Title of your application
title = Bot
# (str) Package name
package.name = bot
# (str) Package domain (needed for android/ios packaging)
package.domain = org.bot
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (let empty to not exclude anything)
#source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
#source.exclude_dirs = tests, bin
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3,kivy==2.0.0,kivymd==0.104.1
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, sensorLandscape, portrait or all)
orientation = portrait
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
# change the major version of python used by the app
osx.python_version = 3
# Kivy version to use
osx.kivy_version = 1.9.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 0
# (string) Presplash background color (for new android toolchain)
# Supported formats are: #RRGGBB #AARRGGBB or one of the following names:
# red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray,
# darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy,
# olive, purple, silver, teal.
#android.presplash_color = #FFFFFF
# (list) Permissions
#android.permissions = INTERNET
# (int) Target Android API, should be as high as possible.
#android.api = 27
# (int) Minimum API your APK will support.
#android.minapi = 21
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
#android.ndk = 19b
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
#android.ndk_api = 21
# (bool) Use --private data storage (True) or --dir public storage (False)
#android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
# android.skip_update = False
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
# android.accept_sdk_license = False
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (str) Android app theme, default is ok for Kivy-based app
# android.apptheme = "@android:style/Theme.NoTitleBar"
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) add java compile options
# this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option
# see https://developer.android.com/studio/write/java8-support for further information
# android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8"
# (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies}
# please enclose in double quotes
# e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }"
#android.add_gradle_repositories =
# (list) packaging options to add
# see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html
# can be necessary to solve conflicts in gradle_dependencies
# please enclose in double quotes
# e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'"
#android.add_gradle_repositories =
# (list) Java classes to add as activities to the manifest.
#android.add_activities = com.example.ExampleActivity
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_arm64_v8a = libs/android-v8/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
#
# Python for android (p4a) specific
#
# (str) python-for-android fork to use, defaults to upstream (kivy)
#p4a.fork = kivy
# (str) python-for-android branch to use, defaults to master
#p4a.branch = master
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#p4a.source_dir =
# (str) The directory in which python-for-android should look for your own build recipes (if any)
#p4a.local_recipes =
# (str) Filename to the hook for p4a
#p4a.hook =
# (str) Bootstrap to use for android builds
# p4a.bootstrap = sdl2
# (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask)
#p4a.port =
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# Alternately, specify the URL and branch of a git checkout:
ios.kivy_ios_url = https://github.com/kivy/kivy-ios
ios.kivy_ios_branch = master
# Another platform dependency: ios-deploy
# Uncomment to use a custom checkout
#ios.ios_deploy_dir = ../ios_deploy
# Or specify URL and branch
ios.ios_deploy_url = https://github.com/phonegap/ios-deploy
ios.ios_deploy_branch = 1.7.0
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as a option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug
I believe the issue is that in the model.py file I have imported modules like Keras, NLTK, etc., but I am not listing them in the requirements.
If this is the issue, then please give the complete statement that I should write in the requirements, based on my model.py and the other files.
Please guide me.
| [
"If you have some other plugins just add them like this:\n# comma separated e.g. requirements = sqlite3,kivy\nrequirements = python3,kivy==2.0.0,kivymd==0.104.1,pluginname==version\n\n",
"requirements = python3,kivy==2.0.0,kivymd==0.104.1,nltk,numpy,keras\nThat's all you need\n"
] | [
1,
0
] | [] | [] | [
"buildozer",
"keras",
"kivy",
"kivymd",
"python"
] | stackoverflow_0069593107_buildozer_keras_kivy_kivymd_python.txt |
Q:
analyze the train-validation accuracy learning curve
I am building a two-layer neural network from scratch on the Fashion MNIST dataset. In between, I am using ReLU as the activation, and on the last layer I am using softmax cross entropy. I am getting the below learning curve between train and validation accuracy, which is obviously wrong. But if you see my loss curve, it's decreasing, yet my model is not learning. I am not able to get my head around where I am going wrong. Could anyone explain these two graphs, like where I could possibly be going wrong?
A:
I don't know exactly what you are doing, and I don't know anything about your architecture, but it's wrong to use ReLU on the last layer.
Usually you leave the last layer as linear (no activation). This will produce the logits that enter the Softmax. The output of the softmax will try to approximate the probability distribution on the classes.
This could be a reason for your results.
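A minimal NumPy sketch of the arrangement described above (illustrative names and shapes, not the asker's actual code): ReLU on the hidden layer, a linear output layer producing logits, and softmax cross-entropy computed from those logits:
import numpy as np

def forward(X, W1, b1, W2, b2):
    h = np.maximum(0, X @ W1 + b1)   # hidden layer with ReLU
    logits = h @ W2 + b2             # last layer: linear, no activation
    return logits

def softmax_cross_entropy(logits, y):
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()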
| analyze the train-validation accuracy learning curve | I am building a two-layer neural network from scratch on the Fashion MNIST dataset. In between, I am using ReLU as the activation, and on the last layer I am using softmax cross entropy. I am getting the below learning curve between train and validation accuracy, which is obviously wrong. But if you see my loss curve, it's decreasing, yet my model is not learning. I am not able to get my head around where I am going wrong. Could anyone explain these two graphs, like where I could possibly be going wrong?
| [
"I don't know exactly what you are doing, and I don't know anything about your architecture, but it's wrong to use ReLU on the last layer.\nUsually you leave the last layer as linear (no activation). This will produce the logits that enter the Softmax. The output of the softmax will try to approximate the probability distribution on the classes.\nThis could be a reason for your results.\n"
] | [
0
] | [] | [] | [
"cross_entropy",
"neural_network",
"numpy",
"python",
"softmax"
] | stackoverflow_0074671726_cross_entropy_neural_network_numpy_python_softmax.txt |
Q:
Django Rest Framework Cannot save a model it tells me the date must be a str
I have this Profile model that also has location attached to it but not trying to save the location now only trying to save the Profile but get an error:
class Profile(models.Model):
# Gender
M = 'M'
F = 'F'
O = 'O'
GENDER = [
(M, "male"),
(F, "female"),
(O, "Other")
]
# Basic information
background = models.FileField(upload_to=background_to, null=True, blank=True)
photo = models.FileField(upload_to=image_to, null=True, blank=True)
slug = AutoSlugField(populate_from=['first_name', 'last_name', 'gender'])
first_name = models.CharField(max_length=100)
middle_name = models.CharField(max_length=100, null=True, blank=True)
last_name = models.CharField(max_length=100)
birthdate = models.DateField()
gender = models.CharField(max_length=1, choices=GENDER, default=None)
bio = models.TextField(max_length=5000, null=True, blank=True)
languages = ArrayField(models.CharField(max_length=30, null=True, blank=True), null=True, blank=True)
# Location information
website = models.URLField(max_length=256, null=True, blank=True)
# owner information
user = models.OneToOneField(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "profile"
verbose_name_plural = "profiles"
db_table = "user_profiles"
def __str__(self):
return self.first_name + ' ' + self.last_name
def get_absolute_url(self):
return self.slug
and this is the view I am using to save the Profile with. I tried sending the data to a serializer first and saving that but the serializer was invalid every time:
class CreateProfileView(APIView):
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
data = dict(request.data)
location = {}
location.update(street=data.pop('street'))
location.update(additional=data.pop('additional'))
location.update(country=data.pop('country'))
location.update(state=data.pop('state'))
location.update(city=data.pop('city'))
location.update(zip=data.pop('zip'))
location.update(phone=data.pop('phone'))
user_id = data.pop('user')
id = int((user_id[0]))
image = data.pop('photo')
user = User.objects.get(pk=id)
print(data['birthdate'])
new_profile = Profile.objects.create(**data, user=user)
# new_location = Location.objects.create(**location, profile=new_profile)
return Response("Profile saved successfully")
and this is the data coming in from the front end:
0: photo → File { name: "tumblr_005ddc5e92b6818f41d4dba4bb08e77e_bbe06c5b_540.jpg", lastModified: 1670127532084, size: 91844, … }
1: first_name → "Calvin"
2: middle_name → "undefined"
3: last_name → "Cani"
4: birthdate → "1971-09-01"
5: gender → "M"
6: bio → "This is general information about me"
7: languages → ""
8: street → "street one"
9: additional → "zwartkop"
10: country → "1"
11: state → "1"
12: city → "1"
13: zip → "0186"
14: phone → "0815252165"
15: website → ""
16: user → "1"
When I try and save a Profile I get the following error I cannot seem to find an answer for:
TypeError: fromisoformat: argument must be str
What is wrong, please, and how do I fix it?
I actually want to validate the data first and then save it; I tried to serialize the data first, but that proved to be fatal, so I took a different approach. I'm new to this and trying to learn how it all fits together. Thanks
A:
The error you are encountering is likely due to the birthdate field in your Profile model being a DateField, but the value you are trying to save is a string. You must convert the string value to a date object before saving it to the birthdate field.
Here is an example of how you can do this:
from datetime import datetime
# Your code here
class CreateProfileView(APIView):
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
data = dict(request.data)
location = {}
location.update(street=data.pop('street'))
location.update(additional=data.pop('additional'))
location.update(country=data.pop('country'))
location.update(state=data.pop('state'))
location.update(city=data.pop('city'))
location.update(zip=data.pop('zip'))
location.update(phone=data.pop('phone'))
user_id = data.pop('user')
id = int((user_id[0]))
image = data.pop('photo')
user = User.objects.get(pk=id)
# Convert the string value to a date object
birthdate_str = data.pop('birthdate')
birthdate = datetime.strptime(birthdate_str, '%Y-%m-%d').date()
new_profile = Profile.objects.create(**data, birthdate=birthdate, user=user)
# new_location = Location.objects.create(**location, profile=new_profile)
return Response("Profile saved successfully")
| Django Rest Framework Cannot save a model it tells me the date must be a str | I have this Profile model that also has location attached to it but not trying to save the location now only trying to save the Profile but get an error:
class Profile(models.Model):
# Gender
M = 'M'
F = 'F'
O = 'O'
GENDER = [
(M, "male"),
(F, "female"),
(O, "Other")
]
# Basic information
background = models.FileField(upload_to=background_to, null=True, blank=True)
photo = models.FileField(upload_to=image_to, null=True, blank=True)
slug = AutoSlugField(populate_from=['first_name', 'last_name', 'gender'])
first_name = models.CharField(max_length=100)
middle_name = models.CharField(max_length=100, null=True, blank=True)
last_name = models.CharField(max_length=100)
birthdate = models.DateField()
gender = models.CharField(max_length=1, choices=GENDER, default=None)
bio = models.TextField(max_length=5000, null=True, blank=True)
languages = ArrayField(models.CharField(max_length=30, null=True, blank=True), null=True, blank=True)
# Location information
website = models.URLField(max_length=256, null=True, blank=True)
# owner information
user = models.OneToOneField(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "profile"
verbose_name_plural = "profiles"
db_table = "user_profiles"
def __str__(self):
return self.first_name + ' ' + self.last_name
def get_absolute_url(self):
return self.slug
and this is the view I am using to save the Profile with. I tried sending the data to a serializer first and saving that but the serializer was invalid every time:
class CreateProfileView(APIView):
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
data = dict(request.data)
location = {}
location.update(street=data.pop('street'))
location.update(additional=data.pop('additional'))
location.update(country=data.pop('country'))
location.update(state=data.pop('state'))
location.update(city=data.pop('city'))
location.update(zip=data.pop('zip'))
location.update(phone=data.pop('phone'))
user_id = data.pop('user')
id = int((user_id[0]))
image = data.pop('photo')
user = User.objects.get(pk=id)
print(data['birthdate'])
new_profile = Profile.objects.create(**data, user=user)
# new_location = Location.objects.create(**location, profile=new_profile)
return Response("Profile saved successfully")
and this is the data coming in from the front end:
0: photo → File { name: "tumblr_005ddc5e92b6818f41d4dba4bb08e77e_bbe06c5b_540.jpg", lastModified: 1670127532084, size: 91844, … }
1: first_name → "Calvin"
2: middle_name → "undefined"
3: last_name → "Cani"
4: birthdate → "1971-09-01"
5: gender → "M"
6: bio → "This is general information about me"
7: languages → ""
8: street → "street one"
9: additional → "zwartkop"
10: country → "1"
11: state → "1"
12: city → "1"
13: zip → "0186"
14: phone → "0815252165"
15: website → ""
16: user → "1"
When I try and save a Profile I get the following error I cannot seem to find an answer for:
TypeError: fromisoformat: argument must be str
What is wrong, please, and how do I fix it?
I actually want to validate the data first and then save it; I tried to serialize the data first, but that proved to be fatal, so I took a different approach. I'm new to this and trying to learn how it all fits together. Thanks
| [
"The error you are encountering is likely due to the birthdate field in your Profile model being a DateField, but the value you are trying to save is a string. You must convert the string value to a date object before saving it to the birthdate field.\nHere is an example of how you can do this:\nfrom datetime import datetime\n\n# Your code here\n\nclass CreateProfileView(APIView):\n permission_classes = [permissions.IsAuthenticated]\n\n def post(self, request):\n data = dict(request.data)\n location = {}\n location.update(street=data.pop('street'))\n location.update(additional=data.pop('additional'))\n location.update(country=data.pop('country'))\n location.update(state=data.pop('state'))\n location.update(city=data.pop('city'))\n location.update(zip=data.pop('zip'))\n location.update(phone=data.pop('phone'))\n user_id = data.pop('user')\n id = int((user_id[0]))\n image = data.pop('photo')\n user = User.objects.get(pk=id)\n\n # Convert the string value to a date object\n birthdate_str = data.pop('birthdate')\n birthdate = datetime.strptime(birthdate_str, '%Y-%m-%d').date()\n\n new_profile = Profile.objects.create(**data, birthdate=birthdate, user=user)\n # new_location = Location.objects.create(**location, profile=new_profile)\n return Response(\"Profile saved successfully\")\n\n\n"
] | [
1
] | [] | [] | [
"django",
"django_rest_framework",
"python"
] | stackoverflow_0074674389_django_django_rest_framework_python.txt |
Q:
ValueError: could not convert string to float: '"815745789754417152"'
This is the error code:
ValueError Traceback (most recent call last)
Input In [42], in <cell line: 3>()
1 from sklearn.neighbors import KNeighborsClassifier as knn
2 classifier=knn(n_neighbors=5)
----> 3 classifier.fit(X,y)
4 bots = training_data[training_data.bot==1]
5 Nbots = training_data[training_data.bot==0]
After running, it shows this error:
ValueError: could not convert string to float: '"815745789754417152"'
My code is in the attached screenshot (not reproduced here).
A:
The string itself seems to be "815745789754417152". It can't convert " to a numeric value.
You can strip it off by:
string = string[1:-1]
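If the quoted values sit in a pandas column of your training data (the column name id below is only a guess, since the real column is not shown), you could clean the whole column before fitting:
# strip the embedded double quotes from every value, then convert to numeric
training_data['id'] = training_data['id'].str.strip('"').astype('int64')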
| ValueError: could not convert string to float: '"815745789754417152"' | This is the error code:
ValueError Traceback (most recent call last)
Input In [42], in <cell line: 3>()
1 from sklearn.neighbors import KNeighborsClassifier as knn
2 classifier=knn(n_neighbors=5)
----> 3 classifier.fit(X,y)
4 bots = training_data[training_data.bot==1]
5 Nbots = training_data[training_data.bot==0]
After running, it shows this error:
ValueError: could not convert string to float: '"815745789754417152"'
My code is in the attached screenshot (not reproduced here).
| [
"The string itself seems to be \"815745789754417152\". It can't convert \" to a numeric value.\nYou can strip it off by:\nstring = string[1:-1]\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074674430_python.txt |
Q:
How to get the continent given the coordinates (latitude and longitude) in Python?
Is there a method that returns the continent of a given place from its coordinates (without an API key)?
I'm using:
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent='...')
location = geolocator.reverse('51.0456448, 3.7273618')
print(location.address)
print((location.latitude, location.longitude))
print(location.raw)
But it does not return the continent. Even giving a place name and using geolocator.geocode() doesn't work. Besides, even giving a name and using:
import urllib.error, urllib.request, urllib.parse
import json
target = 'http://py4e-data.dr-chuck.net/json?'
local = 'Paris'
url = target + urllib.parse.urlencode({'address': local, 'key' : 42})
data = urllib.request.urlopen(url).read()
js = json.loads(data)
print(json.dumps(js, indent=4))
Doesn't work either.
A:
A bit late, but for future reference and those who could need it, like me recently, here is one way to do it with Wikipedia and the use of Pandas, requests and geopy:
import pandas as pd
import requests
from geopy.geocoders import Nominatim
URLS = {
"Africa": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Africa",
"Asia": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Asia",
"Europe": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Europe",
"North America": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_North_America",
"Ocenia": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Oceania",
"South America": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_South_America",
}
def get_continents_and_countries() -> dict[str, str]:
"""Helper function to get countries and corresponding continents.
Returns:
Dictionary where keys are countries and values are continents.
"""
df_ = pd.concat(
[
pd.DataFrame(
pd.read_html(
requests.get(url).text.replace("<br />", ";"),
match="Flag",
)[0]
.pipe(
lambda df_: df_.rename(
columns={col: i for i, col in enumerate(df_.columns)}
)
)[2]
.str.split(";;")
.apply(lambda x: x[0])
)
.assign(continent=continent)
.rename(columns={2: "country"})
for continent, url in URLS.items()
]
).reset_index(drop=True)
df_["country"] = (
df_["country"]
.str.replace("*", "", regex=False)
.str.split("[")
.apply(lambda x: x[0])
).str.replace("\xa0", "")
return dict(df_.to_dict(orient="split")["data"])
def get_location_of(coo: str, data: dict[str, str]) -> tuple[str, str, str]:
"""Function to get the country of given coordinates.
Args:
coo: coordinates as string ("lat, lon").
data: input dictionary of countries and continents.
Returns:
Tuple of coordinates, country and continent (or Unknown if country not found).
"""
geolocator = Nominatim(user_agent="stackoverflow", timeout=25)
country: str = (
geolocator.reverse(coo, language="en-US").raw["display_name"].split(", ")[-1]
)
return (coo, country, data.get(country, "Unknown"))
Finally:
continents_and_countries = get_continents_and_countries()
print(get_location_of("51.0456448, 3.7273618", continents_and_countries))
# Output
('51.0456448, 3.7273618', 'Belgium', 'Europe')
| How to get the continent given the coordinates (latitude and longitude) in Python? | Is there a method that returns the continent of a given place from its coordinates (without an API key)?
I'm using:
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent='...')
location = geolocator.reverse('51.0456448, 3.7273618')
print(location.address)
print((location.latitude, location.longitude))
print(location.raw)
But it does not return the continent. Even giving a place name and using geolocator.geocode() doesn't work. Besides, even giving a name and using:
import urllib.error, urllib.request, urllib.parse
import json
target = 'http://py4e-data.dr-chuck.net/json?'
local = 'Paris'
url = target + urllib.parse.urlencode({'address': local, 'key' : 42})
data = urllib.request.urlopen(url).read()
js = json.loads(data)
print(json.dumps(js, indent=4))
Doesn't work either.
| [
"A bit late, but for future reference and those who could need it, like me recently, here is one way to do it with Wikipedia and the use of Pandas, requests and geopy:\nimport pandas as pd\nimport requests\nfrom geopy.geocoders import Nominatim\n\nURLS = {\n \"Africa\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Africa\",\n \"Asia\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Asia\",\n \"Europe\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Europe\",\n \"North America\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_North_America\",\n \"Ocenia\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Oceania\",\n \"South America\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_South_America\",\n}\n\ndef get_continents_and_countries() -> dict[str, str]:\n \"\"\"Helper function to get countries and corresponding continents.\n\n Returns:\n Dictionary where keys are countries and values are continents.\n\n \"\"\"\n df_ = pd.concat(\n [\n pd.DataFrame(\n pd.read_html(\n requests.get(url).text.replace(\"<br />\", \";\"),\n match=\"Flag\",\n )[0]\n .pipe(\n lambda df_: df_.rename(\n columns={col: i for i, col in enumerate(df_.columns)}\n )\n )[2]\n .str.split(\";;\")\n .apply(lambda x: x[0])\n )\n .assign(continent=continent)\n .rename(columns={2: \"country\"})\n for continent, url in URLS.items()\n ]\n ).reset_index(drop=True)\n df_[\"country\"] = (\n df_[\"country\"]\n .str.replace(\"*\", \"\", regex=False)\n .str.split(\"[\")\n .apply(lambda x: x[0])\n ).str.replace(\"\\xa0\", \"\")\n return dict(df_.to_dict(orient=\"split\")[\"data\"])\n\ndef get_location_of(coo: str, data: dict[str, str]) -> tuple[str, str, str]:\n \"\"\"Function to get the country of given coordinates.\n\n Args:\n coo: coordinates as string (\"lat, lon\").\n data: input dictionary of countries and continents.\n\n Returns:\n Tuple of coordinates, country and continent (or Unknown if country not found).\n\n \"\"\"\n geolocator = Nominatim(user_agent=\"stackoverflow\", timeout=25)\n country: str = (\n geolocator.reverse(coo, language=\"en-US\").raw[\"display_name\"].split(\", \")[-1]\n )\n return (coo, country, data.get(country, \"Unknown\"))\n\nFinally:\ncontinents_and_countries = get_continents_and_countries()\n\nprint(get_location_of(\"51.0456448, 3.7273618\", continents_and_countries))\n\n# Output\n('51.0456448, 3.7273618', 'Belgium', 'Europe')\n\n"
] | [
0
] | [] | [] | [
"coordinates",
"geolocation",
"geopy",
"python",
"python_requests"
] | stackoverflow_0069771711_coordinates_geolocation_geopy_python_python_requests.txt |
Q:
Telegram Inline Bot - Buttons get stuck loading
I am working on a inline telegram bot.
The bot should be invoked from any chat, so I am using the inline method; however, the bot currently uses a conversation flow that requires the conversation to be started with the /start command, which is not what I want.
After calling the bot with the command I set, the user should see message 1, then click on a button which will show a new selection of buttons and another message.
My problem is that the bot now shows the initial message and 2 buttons, but when I click on a button nothing happens. I believe this is due to the ConversationHandler states and how they are set up.
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
Based on this, it is waiting for the /start command to initiate the conv_handler. I want it to start when the user sends @botusername <command I set> in any chat, which is handled in the function inlinequery.
The code:
from datetime import datetime
from uuid import uuid4
import logging
import emojihash
from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
from telegram.ext import (
Updater,
CommandHandler,
CallbackQueryHandler,
ConversationHandler,
CallbackContext,
)
from telegram.ext import InlineQueryHandler, CommandHandler, CallbackContext
from telegram.utils.helpers import escape_markdown
from telegram import InlineQueryResultArticle, ParseMode, InputTextMessageContent, Update
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO
)
logger = logging.getLogger(__name__)
TransactionDateTime: str = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
TransactionNumber: int = 1
TotalTransactions:int = 1
EmojiCode: str = emojihash.eh1("unique password" , 5) #TODO: make more complex
Emojihash: str =emojihash.eh1("unique code",5)
FIRST, SECOND = range(2)
ONE, TWO, THREE, FOUR = range(4)
verified_message_2="message 2"
verified_message_1 = "Message 1 "
def inlinequery(update: Update, context: CallbackContext) -> None:
print("Inline hit!")
# print the inline query from the update.
query = update.inline_query.query
print("len(query):" + str(len(query)))
if len(query) > 0:
print("query[-1] == " "?: " + str(query[-1] == "?"))
print("query[-1] == " + query[-1])
# len(query) > 1 and query[-1] == " "
if len(query) == 0 or query[-1] != ".":
print("Empty query, showing message to seller to type username of buyer")
results = [
InlineQueryResultArticle(
id="Noop",
title="title",
input_message_content=InputTextMessageContent("I don't know how to use this bot yet. I was supposed to type the username but clicked this button anyway. Give me a second to figure this out."),
)
]
update.inline_query.answer(results)
# else if the query ends with a period character:
elif len(query) > 1 and query[-1] == ".":
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def start_over(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
SellerUserName: str = update.inline_query.from_user.username
buyer_username = query
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
SellerUserName: str = update.inline_query.from_user.username
verified_message_1 = f"""message 1 """
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def one(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(THREE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=verified_message_2, reply_markup=reply_markup
)
return FIRST
def two(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton("Yes", callback_data=str(ONE)),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text="You clicked on the wrong code. Do you want to try again?", reply_markup=reply_markup
)
return SECOND
def three(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
SellerUserName: str = update.inline_query.from_user.username
query.answer()
keyboard = [
[
InlineKeyboardButton(text='Yes', url=f'https://t.me/{SellerUserName}'),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
[ InlineKeyboardButton("Read Again", callback_data=str(ONE)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=f"""With this you have confirmed you read the messages above.
Go back to chat with seller?""", reply_markup=reply_markup
)
return SECOND
def end(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
query.edit_message_text(text="Process stopped")
return ConversationHandler.END
def main() -> None:
"""Run the bot."""
updater = Updater("TOKEN")
dispatcher = updater.dispatcher
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
)
# Add ConversationHandler to dispatcher that will be used for handling updates
dispatcher.add_handler(conv_handler)
dispatcher.add_handler(InlineQueryHandler(inlinequery))
# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process receives SIGINT,
# SIGTERM or SIGABRT. This should be used most of the time, since
# start_polling() is non-blocking and will stop the bot gracefully.
updater.idle()
if __name__ == '__main__':
main()
I tried switching out the CommandHandler for an InlineQueryHandler, but that didn't give any results
A:
that requires the conversation to be started by using the /start command which is not what I want.
This is not the case - you can use any handler as entry point.
I tried switching out the Command handler to be a InlineQueryHandler, but that didn't give any results
This is one caveat here: The per_chat setting of ConversationHandler defaults to True, but InlineQuery are not linked to a chat_id. If you set per_chat=False, using an InlineQueryHandler as entry point should work just fine. See also here for more info on what the per_* settings do.
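As a rough sketch of what that could look like with the names from the question (not a drop-in fix, just the shape of it):
conv_handler = ConversationHandler(
    entry_points=[InlineQueryHandler(inlinequery)],
    states={
        FIRST: [CallbackQueryHandler(one, pattern='^' + str(ONE) + '$')],
        SECOND: [CallbackQueryHandler(end, pattern='^' + str(TWO) + '$')],
    },
    fallbacks=[InlineQueryHandler(inlinequery)],
    per_chat=False,  # inline queries carry no chat_id, so track the conversation per user only
)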
Disclaimer: I'm currently the maintainer of python-telegram-bot.
| Telegram Inline Bot - Buttons get stuck loading | I am working on a inline telegram bot.
The bot should be invoked from any chat, so I am using the inline method; however, the bot currently uses a conversation flow that requires the conversation to be started with the /start command, which is not what I want.
After calling the bot with the command I set, the user should see message 1, then click on a button which will show a new selection of buttons and another message.
My problem is that the bot now shows the initial message and 2 buttons, but when I click on a button nothing happens. I believe this is due to the ConversationHandler states and how they are set up.
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
Based on this, it is waiting for the /start command to initiate the conv_handler. I want it to start when the user sends @botusername <command I set> in any chat, which is handled in the function inlinequery.
The code:
from datetime import datetime
from uuid import uuid4
import logging
import emojihash
from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
from telegram.ext import (
Updater,
CommandHandler,
CallbackQueryHandler,
ConversationHandler,
CallbackContext,
)
from telegram.ext import InlineQueryHandler, CommandHandler, CallbackContext
from telegram.utils.helpers import escape_markdown
from telegram import InlineQueryResultArticle, ParseMode, InputTextMessageContent, Update
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO
)
logger = logging.getLogger(__name__)
TransactionDateTime: str = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
TransactionNumber: int = 1
TotalTransactions:int = 1
EmojiCode: str = emojihash.eh1("unique password" , 5) #TODO: make more complex
Emojihash: str =emojihash.eh1("unique code",5)
FIRST, SECOND = range(2)
ONE, TWO, THREE, FOUR = range(4)
verified_message_2="message 2"
verified_message_1 = "Message 1 "
def inlinequery(update: Update, context: CallbackContext) -> None:
print("Inline hit!")
# print the inline query from the update.
query = update.inline_query.query
print("len(query):" + str(len(query)))
if len(query) > 0:
print("query[-1] == " "?: " + str(query[-1] == "?"))
print("query[-1] == " + query[-1])
# len(query) > 1 and query[-1] == " "
if len(query) == 0 or query[-1] != ".":
print("Empty query, showing message to seller to type username of buyer")
results = [
InlineQueryResultArticle(
id="Noop",
title="title",
input_message_content=InputTextMessageContent("I don't know how to use this bot yet. I was supposed to type the username but clicked this button anyway. Give me a second to figure this out."),
)
]
update.inline_query.answer(results)
# else if the query ends with a period character:
elif len(query) > 1 and query[-1] == ".":
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def start_over(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
SellerUserName: str = update.inline_query.from_user.username
buyer_username = query
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
SellerUserName: str = update.inline_query.from_user.username
verified_message_1 = f"""message 1 """
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def one(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(THREE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=verified_message_2, reply_markup=reply_markup
)
return FIRST
def two(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton("Yes", callback_data=str(ONE)),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text="You clicked on the wrong code. Do you want to try again?", reply_markup=reply_markup
)
return SECOND
def three(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
SellerUserName: str = update.inline_query.from_user.username
query.answer()
keyboard = [
[
InlineKeyboardButton(text='Yes', url=f'https://t.me/{SellerUserName}'),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
[ InlineKeyboardButton("Read Again", callback_data=str(ONE)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=f"""With this you have confirmed you read the messages above.
Go back to chat with seller?""", reply_markup=reply_markup
)
return SECOND
def end(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
query.edit_message_text(text="Process stopped")
return ConversationHandler.END
def main() -> None:
"""Run the bot."""
updater = Updater("TOKEN")
dispatcher = updater.dispatcher
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
)
# Add ConversationHandler to dispatcher that will be used for handling updates
dispatcher.add_handler(conv_handler)
dispatcher.add_handler(InlineQueryHandler(inlinequery))
# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process receives SIGINT,
# SIGTERM or SIGABRT. This should be used most of the time, since
# start_polling() is non-blocking and will stop the bot gracefully.
updater.idle()
if __name__ == '__main__':
main()
I tried switching out the CommandHandler for an InlineQueryHandler, but that didn't give any results
| [
"\nthat requires the conversation to be started by using the /start command which is not what I want.\n\nThis is not the case - you can use any handler as entry point.\n\nI tried switching out the Command handler to be a InlineQueryHandler, but that didn't give any results\n\nThis is one caveat here: The per_chat setting of ConversationHandler defaults to True, but InlineQuery are not linked to a chat_id. If you set per_chat=False, using an InlineQueryHandler as entry point should work just fine. See also here for more info on what the per_* settings do.\n\nDisclaimer: I'm currently the maintainer of python-telegram-bot.\n"
] | [
0
] | [] | [] | [
"py_telegram_bot_api",
"python",
"python_telegram_bot"
] | stackoverflow_0074672289_py_telegram_bot_api_python_python_telegram_bot.txt |
Q:
How do I deal with pd.read_html returning HTTPError: HTTP Error 403: Forbidden?
I am trying to copy a table from a website using this code
covid = pd.read_html("https://covid19.ncdc.gov.ng/")[0].head()
and it is returning
HTTPError: HTTP Error 403: Forbidden
A:
You can use requests:
import pandas as pd
import requests
req=requests.get('https://covid19.ncdc.gov.ng/')
covid = pd.read_html(req.text)[0].head()
'''
| | States Affected | No. of Cases (Lab Confirmed) | No. of Cases (on admission) | No. Discharged | No. of Deaths |
|---:|:------------------|-------------------------------:|------------------------------:|-----------------:|----------------:|
| 0 | Lagos | 104187 | 1044 | 102372 | 771 |
| 1 | FCT | 29508 | 19 | 29240 | 249 |
| 2 | Rivers | 18105 | 27 | 17923 | 155 |
| 3 | Kaduna | 11619 | 1 | 11529 | 89 |
| 4 | Oyo | 10352 | 6 | 10144 | 202 |
'''
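If requests itself also gets a 403 for some sites, a common fallback is to send a browser-like User-Agent header (the exact string below is only an example):
headers = {'User-Agent': 'Mozilla/5.0'}
req = requests.get('https://covid19.ncdc.gov.ng/', headers=headers)
covid = pd.read_html(req.text)[0].head()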
| How do I deal with pd.read_html returning HTTPError: HTTP Error 403: Forbidden? | I am trying to copy a table from a website using this code
covid = pd.read_html("https://covid19.ncdc.gov.ng/")[0].head()
and it is returning
HTTPError: HTTP Error 403: Forbidden
| [
"You can use requests:\nimport pandas as pd\nimport requests\nreq=requests.get('https://covid19.ncdc.gov.ng/')\ncovid = pd.read_html(req.text)[0].head()\n'''\n| | States Affected | No. of Cases (Lab Confirmed) | No. of Cases (on admission) | No. Discharged | No. of Deaths |\n|---:|:------------------|-------------------------------:|------------------------------:|-----------------:|----------------:|\n| 0 | Lagos | 104187 | 1044 | 102372 | 771 |\n| 1 | FCT | 29508 | 19 | 29240 | 249 |\n| 2 | Rivers | 18105 | 27 | 17923 | 155 |\n| 3 | Kaduna | 11619 | 1 | 11529 | 89 |\n| 4 | Oyo | 10352 | 6 | 10144 | 202 |\n'''\n\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"error_handling",
"html",
"list",
"python"
] | stackoverflow_0074670041_dataframe_error_handling_html_list_python.txt |
Q:
numpy: multiply uint16 ndarray by scalar
I have a ndarray 'a' of dtype uint16.
I would like to multiply all entries by a scalar, let's say 2.
The max value for uint16 is 65535. Let's assume some entries of a are greater than 65535/2.
Because of integer overflow (the values wrap around), these values will become small after applying the multiplication
For example, if a is:
1, 1
1, 32867
then a*2 will be:
2, 2
2, 198
This makes sense, but the behavior I would like to enforce is to have 65535 as the "max ceiling", i.e.
x = x*2 if x*2<65535 else 65535
and a*2:
2, 2,
2, 65535
Does numpy support this?
note: I would like the resulting array also to be of dtype uint16
A:
I think the only way is to cast the array to a bigger data type and then clip the values before casting it back to uint16.
For example:
import numpy as np
a = np.array([*stuff], dtype=np.uint16)
res = np.clip(a.astype(np.uint32) * 2, 0, 65535).astype(np.uint16)
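With the array from the question, this produces the expected saturated result while keeping the uint16 dtype:
a = np.array([[1, 1], [1, 32867]], dtype=np.uint16)
res = np.clip(a.astype(np.uint32) * 2, 0, 65535).astype(np.uint16)
print(res)
# [[    2     2]
#  [    2 65535]]
print(res.dtype)  # uint16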
| numpy: multiply uint16 ndarray by scalar | I have a ndarray 'a' of dtype uint16.
I would like to multiply all entries by a scalar, let's say 2.
The max value for uint16 is 65535. Let's assume some entries of a are greater than 65535/2.
Because of integer overflow (the values wrap around), these values will become small after applying the multiplication
For example, if a is:
1, 1
1, 32867
then a*2 will be:
2, 2
2, 198
This makes sense, but the behavior I would like to enforce is to have 65535 as the "max ceiling", i.e.
x = x*2 if x*2<65535 else 65535
and a*2:
2, 2,
2, 65535
Does numpy support this?
note: I would like the resulting array also to be of dtype uint16
| [
"I think the only way is to cast the array to a bigger data type and then clip the values before casting it back to uint16.\nFor example:\nimport numpy as np\n\na = np.array([*stuff], dtype=np.uint16)\nres = np.clip(a.astype(np.uint32) * 2, 0, 65535).astype(np.uint16)\n\n"
] | [
2
] | [] | [] | [
"multidimensional_array",
"numeric",
"numpy",
"python",
"type_conversion"
] | stackoverflow_0074670564_multidimensional_array_numeric_numpy_python_type_conversion.txt |
Q:
How do versions after py3.10 implement asyncio.get_event_loop with the same behavior as previous versions
python3.10-asyncio-get_event_loop
Deprecated since version 3.10: Emits a deprecation warning if there is no running event loop. In future Python releases, this function may become an alias of get_running_loop() and will accordingly raise a RuntimeError if there is no running event loop.
The behavior of get_event_loop has changed in version 3.10, now the sanic-jwt library needs to be compatible with later versions of 3.10, and needs to be modified to remove this warning(DeprecationWarning: There is no current event loop)
The place of the warning is the call method under ConfigItem on line 134
sanic_jwt/configuration.py
[screenshot of the code, not reproduced here]
I tried the method from this article, but the test did not pass; it apparently does not match the behavior of versions before 3.10.
PR
A:
If you want to hide the DeprecationWarning, you can suppress it with the warnings module (the warning is emitted via warnings, not logging, so raising the logging level alone will not hide it). Or if you have to use Python 3.10+, then you can do something like:
import asyncio
def get_event_loop() -> asyncio.AbstractEventLoop:
try:
return asyncio.get_running_loop()
    except RuntimeError:  # raised when there is no running event loop
return asyncio.new_event_loop()
# DO NOT RECOMMEND TO OVERRIDE THE built-in one
# override the built-in get_event_loop function
asyncio.get_event_loop = get_event_loop
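If the goal is only to silence the warning rather than change behavior, a sketch using the warnings module (the message filter may need adjusting for your exact Python version):
import warnings

# ignore only the asyncio "There is no current event loop" deprecation
warnings.filterwarnings(
    "ignore",
    message="There is no current event loop",
    category=DeprecationWarning,
)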
| How do versions after py3.10 implement asyncio.get_event_loop with the same behavior as previous versions | python3.10-asyncio-get_event_loop
Deprecated since version 3.10: Emits a deprecation warning if there is no running event loop. In future Python releases, this function may become an alias of get_running_loop() and will accordingly raise a RuntimeError if there is no running event loop.
The behavior of get_event_loop has changed in version 3.10, now the sanic-jwt library needs to be compatible with later versions of 3.10, and needs to be modified to remove this warning(DeprecationWarning: There is no current event loop)
The place of the warning is the call method under ConfigItem on line 134
sanic_jwt/configuration.py
[screenshot of the code, not reproduced here]
I tried the method from this article, but the test did not pass; it apparently does not match the behavior of versions before 3.10.
PR
| [
"If you want to hide the DeprecationWarning, set a higher logging level. Or if you have to use Python3.10+, then you can do something like:\nimport asyncio\n\ndef get_event_loop() -> asyncio.AbstractEventLoop:\n try:\n return asyncio.get_running_loop()\n except (RuntimeError, Exception):\n return asyncio.new_event_loop()\n\n# DO NOT RECOMMEND TO OVERRIDE THE built-in one\n# override the built-in get_event_loop function\nasyncio.get_event_loop = get_event_loop\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"python_asyncio",
"sanic"
] | stackoverflow_0074673969_python_python_3.x_python_asyncio_sanic.txt |
Q:
pattern matching in Python with regex problem
I am trying to learn pattern matching with regex; the course is through Coursera and hasn't been updated since Python 3 came out, so the instructor's code is not working correctly.
Here's what I have so far:
# example Wiki data
wiki= """There are several Buddhist universities in the United States. Some of these have existed for decades and are accredited. Others are relatively new and are either in the process of being accredited or else have no formal accreditation. The list includes:
• Dhammakaya Open University – located in Azusa, California,
• Dharmakirti College – located in Tucson, Arizona
• Dharma Realm Buddhist University – located in Ukiah, California
• Ewam Buddhist Institute – located in Arlee, Montana
• Naropa University - located in Boulder, Colorado
• Institute of Buddhist Studies – located in Berkeley, California
• Maitripa College – located in Portland, Oregon
• Soka University of America – located in Aliso Viejo, California
• University of the West – located in Rosemead, California
• Won Institute of Graduate Studies – located in Glenside, Pennsylvania"""
pattern=re.compile(
r'(?P<title>.*)' # the university title
r'(-\ located\ in\ )' #an indicator of the location
r'(?P<city>\w*)' # city the university is in
r'(,\ )' #seperator for the state
r'(?P<state>\w.*)') #the state the city is in)
for item in re.finditer(pattern, wiki, re.VERBOSE):
print(item.groupdict())
Output:
Traceback (most recent call last):
File "/Users/r..., line 194, in <module>
for item in re.finditer(pattern, wiki, re.VERBOSE):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/re/__init__.py", line 223, in finditer
return _compile(pattern, flags).finditer(string)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/re/__init__.py", line 282, in _compile
raise ValueError(
ValueError: cannot process flags argument with a compiled pattern
I only want a dictionary with the university name, the city and the state. If I run it without re.VERBOSE, only one school shows up and none of the rest are there.
I am somewhat new to python and don't know what to do about these errors
A:
In fact, for current versions of Python, you do not need to add re.VERBOSE at all. If you do
for item in re.finditer(pattern, wiki):
print(item.groupdict())
the program will print
{'title': '• Naropa University ', 'city': 'Boulder', 'state': 'Colorado '}
using Python 3.10.
By the way, the program only outputs one school because the other schools use a long hyphen – instead of a short one, -. Making all schools use the same, and changing your pattern accordingly, should give you the whole list.
A:
Thanks to JustLearning, my problem is solved. Here is the code I ended up using. I can't believe it was a long hyphen instead of a short one. And now I know I don't need to use re.VERBOSE. Thank you again
pattern = re.compile(
    r'(?P<title>.*)'
    r'(–\ located\ in\ )'
    r'(?P<city>.*)'
    r'(,\ )'
    r'(?P<state>.*)')
A:
In your example data you are using 2 types of hyphens.
En Dash
Hyphen-Minus
If you want to match both you can make use of a character class [–-]
Apart from that, using .* repeats 0+ times any character (can match empty strings) and will first match until the end of the line and will allow backtracking to match the rest of the pattern.
What you could do it make the pattern a bit more precise starting each group matching at least a word character.
If you are only interested in the groups title, city and state you don't need the other 2 capture groups.
Note that if you want to match a space that you don't have to escape it.
^\W*(?P<title>\w.*?) [–-] located in (?P<city>\w.*?), (?P<state>\w.*)
^ Start of string
\W* Match optional non word characters
(?P<title>\w.*?) Match a word character, followed by matching as least as possible chars
[–-] Match any of the dashes with a space to the left and right
located in Match literally
(?P<city>\w.*?) Match a word character followed by matching as least as possible chars
, Match literally
(?P<state>\w.*) Match a word character followed by the rest of the line
Regex demo | Python demo
Example
import re
pattern = r"^\W*(?P<title>\w.*?) [–-] located in (?P<city>\w.*?), (?P<state>\w.*)"
wiki = """There are several Buddhist universities in the United States. Some of these have existed for decades and are accredited. Others are relatively new and are either in the process of being accredited or else have no formal accreditation. The list includes:
• Dhammakaya Open University – located in Azusa, California,
• Dharmakirti College – located in Tucson, Arizona
• Dharma Realm Buddhist University – located in Ukiah, California
• Ewam Buddhist Institute – located in Arlee, Montana
• Naropa University - located in Boulder, Colorado
• Institute of Buddhist Studies – located in Berkeley, California
• Maitripa College – located in Portland, Oregon
• Soka University of America – located in Aliso Viejo, California
• University of the West – located in Rosemead, California
• Won Institute of Graduate Studies – located in Glenside, Pennsylvania"""
for item in re.finditer(pattern, wiki, re.M):
print(item.groupdict())
Output
{'title': 'Dhammakaya Open University', 'city': 'Azusa', 'state': 'California,'}
{'title': 'Dharmakirti College', 'city': 'Tucson', 'state': 'Arizona'}
{'title': 'Dharma Realm Buddhist University', 'city': 'Ukiah', 'state': 'California'}
{'title': 'Ewam Buddhist Institute', 'city': 'Arlee', 'state': 'Montana'}
{'title': 'Naropa University', 'city': 'Boulder', 'state': 'Colorado'}
{'title': 'Institute of Buddhist Studies', 'city': 'Berkeley', 'state': 'California'}
{'title': 'Maitripa College', 'city': 'Portland', 'state': 'Oregon'}
{'title': 'Soka University of America', 'city': 'Aliso Viejo', 'state': 'California'}
{'title': 'University of the West', 'city': 'Rosemead', 'state': 'California'}
{'title': 'Won Institute of Graduate Studies', 'city': 'Glenside', 'state': 'Pennsylvania'}
| pattern matching in Python with regex problem | I am trying to learn pattern matching with regex; the course is through Coursera and hasn't been updated since Python 3 came out, so the instructor's code is not working correctly.
Here's what I have so far:
# example Wiki data
wiki= """There are several Buddhist universities in the United States. Some of these have existed for decades and are accredited. Others are relatively new and are either in the process of being accredited or else have no formal accreditation. The list includes:
• Dhammakaya Open University – located in Azusa, California,
• Dharmakirti College – located in Tucson, Arizona
• Dharma Realm Buddhist University – located in Ukiah, California
• Ewam Buddhist Institute – located in Arlee, Montana
• Naropa University - located in Boulder, Colorado
• Institute of Buddhist Studies – located in Berkeley, California
• Maitripa College – located in Portland, Oregon
• Soka University of America – located in Aliso Viejo, California
• University of the West – located in Rosemead, California
• Won Institute of Graduate Studies – located in Glenside, Pennsylvania"""
pattern=re.compile(
r'(?P<title>.*)' # the university title
r'(-\ located\ in\ )' #an indicator of the location
r'(?P<city>\w*)' # city the university is in
r'(,\ )' #seperator for the state
r'(?P<state>\w.*)') #the state the city is in)
for item in re.finditer(pattern, wiki, re.VERBOSE):
print(item.groupdict())
Output:
Traceback (most recent call last):
File "/Users/r..., line 194, in <module>
for item in re.finditer(pattern, wiki, re.VERBOSE):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/re/__init__.py", line 223, in finditer
return _compile(pattern, flags).finditer(string)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/re/__init__.py", line 282, in _compile
raise ValueError(
ValueError: cannot process flags argument with a compiled pattern
I only want a dictionary with the university name, the city and the state. If I run it without re.VERBOSE, only one school shows up and none of the rest are there.
I am somewhat new to python and don't know what to do about these errors
| [
"In fact, for current versions of Python, you do not need to add re.VERBOSE at all. If you do\nfor item in re.finditer(pattern, wiki): \n print(item.groupdict())\n\nthe program will print\n{'title': '• Naropa University ', 'city': 'Boulder', 'state': 'Colorado '}\n\nusing Python 3.10.\nBy the way, the program only outputs one school because the other schools use a long hyphen – instead or a short one, -. Making all schools use the same, and changing your pattern accordingly, should give you the whole list.\n",
"Thanks to JustLearning, my problem is solved. Here is the code I ended up using. I can't believe it was a long hyphen instead of a short one. And now I know I dont need to use the re.VERBOSE. Thank you again\npattern =re.compile(\nr'(?P.)'\nr'(-\\ located\\ in\\ )'\nr'(?P.)'\nr'(,\\ )'\nr'(?P.*)')\n",
"In your example data you are using 2 types of hyphens.\n\nEn Dash\nHyphen-Minus\n\nIf you want to match both you can make use of a character class [–-]\nApart from that, using .* repeats 0+ times any character (can match empty strings) and will first match until the end of the line and will allow backtracking to match the rest of the pattern.\nWhat you could do it make the pattern a bit more precise starting each group matching at least a word character.\nIf you are only interested in the groups title, city and state you don't need the other 2 capture groups.\nNote that if you want to match a space that you don't have to escape it.\n^\\W*(?P<title>\\w.*?) [–-] located in (?P<city>\\w.*?), (?P<state>\\w.*)\n\n\n^ Start of string\n\\W* Match optional non word characters\n(?P<title>\\w.*?) Match a word character, followed by matching as least as possible chars\n [–-] Match any of the dashes with a space to the left and right\nlocated in Match literally\n(?P<city>\\w.*?) Match a word character followed by matching as least as possible chars\n, Match literally\n(?P<state>\\w.*) Match a word character followed by the rest of the line\n\nRegex demo | Python demo\nExample\nimport re\n\npattern = r\"^\\W*(?P<title>\\w.*?) [–-] located in (?P<city>\\w.*?), (?P<state>\\w.*)\"\n\nwiki = \"\"\"There are several Buddhist universities in the United States. Some of these have existed for decades and are accredited. Others are relatively new and are either in the process of being accredited or else have no formal accreditation. The list includes:\n• Dhammakaya Open University – located in Azusa, California,\n• Dharmakirti College – located in Tucson, Arizona\n• Dharma Realm Buddhist University – located in Ukiah, California\n• Ewam Buddhist Institute – located in Arlee, Montana\n• Naropa University - located in Boulder, Colorado\n• Institute of Buddhist Studies – located in Berkeley, California\n• Maitripa College – located in Portland, Oregon\n• Soka University of America – located in Aliso Viejo, California\n• University of the West – located in Rosemead, California\n• Won Institute of Graduate Studies – located in Glenside, Pennsylvania\"\"\"\n\nfor item in re.finditer(pattern, wiki, re.M):\n print(item.groupdict())\n\nOutput\n{'title': 'Dhammakaya Open University', 'city': 'Azusa', 'state': 'California,'}\n{'title': 'Dharmakirti College', 'city': 'Tucson', 'state': 'Arizona'}\n{'title': 'Dharma Realm Buddhist University', 'city': 'Ukiah', 'state': 'California'}\n{'title': 'Ewam Buddhist Institute', 'city': 'Arlee', 'state': 'Montana'}\n{'title': 'Naropa University', 'city': 'Boulder', 'state': 'Colorado'}\n{'title': 'Institute of Buddhist Studies', 'city': 'Berkeley', 'state': 'California'}\n{'title': 'Maitripa College', 'city': 'Portland', 'state': 'Oregon'}\n{'title': 'Soka University of America', 'city': 'Aliso Viejo', 'state': 'California'}\n{'title': 'University of the West', 'city': 'Rosemead', 'state': 'California'}\n{'title': 'Won Institute of Graduate Studies', 'city': 'Glenside', 'state': 'Pennsylvania'}\n\n"
] | [
0,
0,
0
] | [] | [] | [
"pattern_matching",
"python",
"regex"
] | stackoverflow_0074670737_pattern_matching_python_regex.txt |
Q:
regex matched values convert to float/integers
Consider this example:
import re
string = "1-3-a"
a, b, c = re.match("(\d+)-(\d+)-(\w+)", string).groups()
print(a + b)
This will print: '13'. However, I want to use these values as digits (integers or floats), while keeping variable c as a string. Of course I can do a = int(a) etc. but I think there must be a more convenient way to do this (especially when you are matching way more variables).
Unfortunately I cannot find anything about this; originally I thought that regex would deal with this automatically, since I am saying it must be a digit.
EDIT:
I think this is different from the supposed duplicate question as I am trying to match multiple parts of the string into multiple variables.
A:
Regex will not do this natively, that's simply not its job. One way you could achieve it (if you wanted more of a "one-line" solution) is to use the map function to apply the int() function to every element in the groups tuple.
import re
string = "1-3"
a, b = map(int, re.match("(\d+)-(\d+)", string).groups())
print(a + b)
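For the three-group case from the question, where c should stay a string, a variant is to convert only the first two groups:
import re
string = "1-3-a"
m = re.match(r"(\d+)-(\d+)-(\w+)", string)
a, b = map(int, m.group(1, 2))  # convert only the numeric groups
c = m.group(3)                  # keep the last group as a string
print(a + b, c)  # 4 a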
| regex matched values convert to float/integers | Consider this example:
import re
string = "1-3-a"
a, b, c = re.match("(\d+)-(\d+)-(\w+)", string).groups()
print(a + b)
This will print: '13'. However, I want to use these values as digits (integers or floats), while keeping variable c as a string. Of course I can do a = int(a) etc. but I think there must be a more convenient way to do this (especially when you are matching way more variables).
Unfortunately I cannot find anything about this; originally I thought that regex would deal with this automatically, since I am saying it must be a digit.
EDIT:
I think this is different from the supposed duplicate question as I am trying to match multiple parts of the string into multiple variables.
| [
"Regex will not do this natively, that's simply not its job. One way you could achieve it (if you wanted more of a \"one-line\" solution) is to use the map function to apply the int() function to every element in the groups tuple.\nimport re\nstring = \"1-3\"\na, b = map(int, re.match(\"(\\d+)-(\\d+)\", string).groups())\nprint(a + b)\n\n"
] | [
1
] | [] | [] | [
"match",
"python",
"regex"
] | stackoverflow_0074674564_match_python_regex.txt |
Q:
Generate a connected line with different amplitude
I'm trying to make a game like Line, but with a horizontal rather than a vertical wave. The problem is making the wave continue even after changing its amplitude (I will change the frequency later). So far I have reached this part of the wave:
import pygame
import pygame.gfxdraw
import math
import time
DISPLAY_W, DISPLAY_H = 400, 800
clock = pygame.time.Clock()
pygame.init()
SCREEN = pygame.Surface((DISPLAY_W, DISPLAY_H))
GAME_DISPLAY = pygame.display.set_mode((DISPLAY_W, DISPLAY_H))
class Line():
def __init__(self):
self.pointsList = [0]*800
self.listIndex = 0
def game(self):
while True:
clock.tick(60)
SCREEN.fill((0, 0, 0))
self.listIndex += +1
self.generateWave()
self.drawWave()
for event in pygame.event.get():
if (event.type == pygame.QUIT):
quit()
pygame.display.update()
GAME_DISPLAY.blit(SCREEN, (0, 0))
def drawWave(self):
for Y_CORD in range(len(self.pointsList)):
pygame.gfxdraw.pixel(
GAME_DISPLAY, self.pointsList[Y_CORD]-55, DISPLAY_H-Y_CORD, (255, 255, 255))
pygame.gfxdraw.pixel(
GAME_DISPLAY, self.pointsList[Y_CORD]-350, DISPLAY_H-Y_CORD, (255, 255, 255))
def generateWave(self):
waveAmplitude = 50
waveFrequency = 1
XCord = int((DISPLAY_H/2) + waveAmplitude*math.sin(
waveFrequency * ((float(0)/-DISPLAY_W)*(2*math.pi) + (time.time()))))
if self.pointsList[-1] != 0:
self.pointsList.pop(0)
self.pointsList.append(XCord)
else:
self.pointsList[self.listIndex] = XCord
if __name__ == "__main__":
game = Line()
game.game()
I thought about having another function to change the amplitude, but then there would be a gap (see the screenshot, not reproduced here).
A:
One issue with your code is that you are using a variable called XCord to store the Y-coordinate of each point in the wave. This variable should be called YCord instead, since it represents the Y-coordinate of the point on the screen.
Another issue is that you are using a variable called waveFrequency to control the speed of the wave. This variable should be called waveSpeed instead, since it controls the speed of the wave rather than its frequency.
To fix the issue of the wave not continuing after changing the amplitude, you can modify the generateWave() function as follows:
def generateWave(self, waveAmplitude):
waveFrequency = 1
waveSpeed = 0.05
    for i in range(len(self.pointsList)):
        YCord = int((DISPLAY_H/2) + waveAmplitude*math.sin(
            waveFrequency * ((float(i)/-DISPLAY_W)*(2*math.pi) + (time.time()*waveSpeed))))
        # overwrite every point each frame so the whole wave is regenerated
        self.pointsList[i] = YCord
In this updated version of the function, we loop through each point in the pointsList array and calculate its Y-coordinate using the given waveAmplitude value. We also use the waveSpeed variable to control the speed of the wave. This allows us to change the amplitude of the wave without creating a gap in the wave.
You can then call this function with a desired value for waveAmplitude.
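For instance, the per-frame calls inside Line.game() might look like this (self.amplitude is a hypothetical attribute you would change on, say, a key press):
# inside the while loop of Line.game()
self.generateWave(self.amplitude)  # recompute every point each frame
self.drawWave()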
| Generate a connected line with different amplitude | I'm trying to make a game like Line, but with a horizontal rather than vertical wave. The problem is making the wave continue even after changing its amplitude (I will change the frequency later). So far I have reached this part of the wave:
import pygame
import pygame.gfxdraw
import math
import time
DISPLAY_W, DISPLAY_H = 400, 800
clock = pygame.time.Clock()
pygame.init()
SCREEN = pygame.Surface((DISPLAY_W, DISPLAY_H))
GAME_DISPLAY = pygame.display.set_mode((DISPLAY_W, DISPLAY_H))
class Line():
def __init__(self):
self.pointsList = [0]*800
self.listIndex = 0
def game(self):
while True:
clock.tick(60)
SCREEN.fill((0, 0, 0))
self.listIndex += +1
self.generateWave()
self.drawWave()
for event in pygame.event.get():
if (event.type == pygame.QUIT):
quit()
pygame.display.update()
GAME_DISPLAY.blit(SCREEN, (0, 0))
def drawWave(self):
for Y_CORD in range(len(self.pointsList)):
pygame.gfxdraw.pixel(
GAME_DISPLAY, self.pointsList[Y_CORD]-55, DISPLAY_H-Y_CORD, (255, 255, 255))
pygame.gfxdraw.pixel(
GAME_DISPLAY, self.pointsList[Y_CORD]-350, DISPLAY_H-Y_CORD, (255, 255, 255))
def generateWave(self):
waveAmplitude = 50
waveFrequency = 1
XCord = int((DISPLAY_H/2) + waveAmplitude*math.sin(
waveFrequency * ((float(0)/-DISPLAY_W)*(2*math.pi) + (time.time()))))
if self.pointsList[-1] != 0:
self.pointsList.pop(0)
self.pointsList.append(XCord)
else:
self.pointsList[self.listIndex] = XCord
if __name__ == "__main__":
game = Line()
game.game()
I thought about having another function to change the amplitude, but then there would be a gap (see the screenshot, not reproduced here).
| [
"One issue with your code is that you are using a variable called XCord to store the Y-coordinate of each point in the wave. This variable should be called YCord instead, since it represents the Y-coordinate of the point on the screen.\nAnother issue is that you are using a variable called waveFrequency to control the speed of the wave. This variable should be called waveSpeed instead, since it controls the speed of the wave rather than its frequency.\nTo fix the issue of the wave not continuing after changing the amplitude, you can modify the generateWave() function as follows:\ndef generateWave(self, waveAmplitude):\n waveFrequency = 1\n waveSpeed = 0.05\n for i in range(len(self.pointsList)):\n YCord = int((DISPLAY_H/2) + waveAmplitude*math.sin(\n waveFrequency * ((float(i)/-DISPLAY_W)*(2*math.pi) + (time.time()*waveSpeed))))\n\n if self.pointsList[i] != 0:\n self.pointsList[i] = YCord\n else:\n self.pointsList[i] = YCord\n\nIn this updated version of the function, we loop through each point in the pointsList array and calculate its Y-coordinate using the given waveAmplitude value. We also use the waveSpeed variable to control the speed of the wave. This allows us to change the amplitude of the wave without creating a gap in the wave.\nYou can then call this function with a desired value for waveAmplitude.\n"
] | [
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074649361_pygame_python.txt |
Q:
How to implement third Nelson's rule with Pandas?
I am trying to implement Nelson's rules using Pandas. One of them is giving me grief, specifically number 3:
Using some example data:
data = pd.DataFrame({"values":[1,2,3,4,5,6,7,5,6,5,3]})
    values
0        1
1        2
2        3
3        4
4        5
5        6
6        7
7        5
8        6
9        5
10       3
My first approach was to use a rolling window to check if they are increasing/decreasing with diff()>0 and use this to identify "hits" on the rule:
(data.diff()>0).rolling(6).sum()==6
This correctly identifies the end values (1=True, 0=False):
    values  correct/desired
0        0                0
1        0                1
2        0                1
3        0                1
4        0                1
5        0                1
6        1                1
7        0                0
8        0                0
9        0                0
10       0                0
This misses the first points (which are part of the run) because rolling is a look-behind. Given this rule requires 6 points in a row, I essentially need to evaluate, for a given point, the 6 possible windows it can fall in and then mark it as true if it is part of any window in which the points are consecutively increasing/decreasing.
I can think of how I could do this with some custom Python code with iterrows() or apply. I am, however, keen to keep this performant, so I want to limit myself to the Pandas API.
How can this be achieved?
A:
With the following toy dataframe (an extended version of yours):
import pandas as pd
df = pd.DataFrame({"values": [1, 2, 3, 4, 5, 6, 7, 5, 6, 5, 3, 11, 12, 13, 14, 15, 16, 4, 3, 8, 9, 10, 2]})
Here is one way to do it:
# Find consecutive values
df["check"] = (df.diff() > 0).rolling(6).sum()
df["check"] = df.apply(lambda x: 1 if x["check"] >= 6 else pd.NA, axis=1)
# Mark values
for idx in df[df["check"] == 1].index:
df.loc[idx - 5 : idx, "check"] = 1
# Set 0 for other values
df = df.fillna(0)
Then:
print(df)
# Output
values check
0 1 0
1 2 1
2 3 1
3 4 1
4 5 1
5 6 1
6 7 1
7 5 0
8 6 0
9 5 0
10 3 0
11 11 1
12 12 1
13 13 1
14 14 1
15 15 1
16 16 1
17 4 0
18 3 0
19 8 0
20 9 0
21 10 0
22 2 0
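If you want to avoid the Python-level loop, the back-fill over each window can also be done with a reversed rolling max; a vectorized sketch (my own alternative, not part of the code above) on the original 11-point data:
import pandas as pd

df = pd.DataFrame({"values": [1, 2, 3, 4, 5, 6, 7, 5, 6, 5, 3]})

# True at the last point of each run of 6 consecutive increases
hits = (df["values"].diff() > 0).rolling(6).sum().eq(6)

# Propagate each hit backwards over its whole window: a forward-looking
# rolling max, implemented by reversing, rolling, and reversing again
df["check"] = hits[::-1].rolling(6, min_periods=1).max()[::-1].astype(int)
print(df)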
| How to implement third Nelson's rule with Pandas? | I am trying to implement Nelson's rules using Pandas. One of them is giving me grief, specifically number 3:
Using some example data:
data = pd.DataFrame({"values":[1,2,3,4,5,6,7,5,6,5,3]})
    values
0        1
1        2
2        3
3        4
4        5
5        6
6        7
7        5
8        6
9        5
10       3
My first approach was to use a rolling window to check if they are increasing/decreasing with diff()>0 and use this to identify "hits" on the rule:
(data.diff()>0).rolling(6).sum()==6
This correctly identifies the end values (1=True, 0=False):
    values  correct/desired
0        0                0
1        0                1
2        0                1
3        0                1
4        0                1
5        0                1
6        1                1
7        0                0
8        0                0
9        0                0
10       0                0
This misses the first points (which are part of the run) because rolling is a look-behind. Given this rule requires 6 points in a row, I essentially need to evaluate, for a given point, the 6 possible windows it can fall in and then mark it as true if it is part of any window in which the points are consecutively increasing/decreasing.
I can think of how I could do this with some custom Python code with iterrows() or apply. I am, however, keen to keep this performant, so I want to limit myself to the Pandas API.
How can this be achieved?
| [
"With the following toy dataframe (an extended version of yours):\nimport pandas as pd\n\n\ndf = pd.DataFrame({\"values\": [1, 2, 3, 4, 5, 6, 7, 5, 6, 5, 3, 11, 12, 13, 14, 15, 16, 4, 3, 8, 9, 10, 2]})\n\nHere is one way to do it:\n# Find consecutive values\ndf[\"check\"] = (df.diff() > 0).rolling(6).sum()\ndf[\"check\"] = df.apply(lambda x: 1 if x[\"check\"] >= 6 else pd.NA, axis=1)\n\n# Mark values\nfor idx in df[df[\"check\"] == 1].index:\n df.loc[idx - 5 : idx, \"check\"] = 1\n\n# Set 0 for other values\ndf = df.fillna(0)\n\nThen:\nprint(df)\n# Output\n values check\n0 1 0\n1 2 1\n2 3 1\n3 4 1\n4 5 1\n5 6 1\n6 7 1\n7 5 0\n8 6 0\n9 5 0\n10 3 0\n11 11 1\n12 12 1\n13 13 1\n14 14 1\n15 15 1\n16 16 1\n17 4 0\n18 3 0\n19 8 0\n20 9 0\n21 10 0\n22 2 0\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074630430_pandas_python.txt |
Q:
Sending JSON to Flask, request.args vs request.form
My understanding is that request.args in Flask contains the URL encoded parameters from a GET request while request.form contains POST data. What I'm having a hard time grasping is why when sending a POST request, trying to access the data with request.form returns a 400 error but when I try to access it with request.args it seems to work fine.
I have tried sending the request with both Postman and curl and the results are identical.
curl -X POST -d {"name":"Joe"} http://127.0.0.1:8080/testpoint --header "Content-Type:application/json"
Code:
@app.route('/testpoint', methods = ['POST'])
def testpoint():
name = request.args.get('name', '')
return jsonify(name = name)
A:
You are POST-ing JSON, neither request.args nor request.form will work.
request.form works only if you POST data with the right content types; form data is either POSTed with the application/x-www-form-urlencoded or multipart/form-data encodings.
When you use application/json, you are no longer POSTing form data. Use request.get_json() to access JSON POST data instead:
@app.route('/testpoint', methods = ['POST'])
def testpoint():
name = request.get_json().get('name', '')
return jsonify(name = name)
As you state, request.args only ever contains values included in the request query string, the optional part of a URL after the ? question mark. Since it’s part of the URL, it is independent from the POST request body.
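If the endpoint should tolerate either encoding, one defensive pattern (my own sketch, assuming the usual from flask import request, jsonify imports) is:
@app.route('/testpoint', methods=['POST'])
def testpoint():
    # silent=True makes get_json return None instead of raising a 400
    # when the body is not JSON, so form posts still work
    payload = request.get_json(silent=True) or request.form
    return jsonify(name=payload.get('name', ''))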
A:
Your JSON data in curl is wrong: without shell quoting, the inner quotes are stripped, so Flask cannot parse the body.
Send data like this: '{"name":"Joe"}'
curl -X POST -d '{"name":"Joe"}' http://example.com:8080/testpoint --header "Content-Type:application/json"
A:
Just change args to form and it will work:
@app.route('/testpoint', methods = ['POST'])
def testpoint():
    name = request.form.get('name', '')
return jsonify(name = name)
| Sending JSON to Flask, request.args vs request.form | My understanding is that request.args in Flask contains the URL encoded parameters from a GET request while request.form contains POST data. What I'm having a hard time grasping is why when sending a POST request, trying to access the data with request.form returns a 400 error but when I try to access it with request.args it seems to work fine.
I have tried sending the request with both Postman and curl and the results are identical.
curl -X POST -d {"name":"Joe"} http://127.0.0.1:8080/testpoint --header "Content-Type:application/json"
Code:
@app.route('/testpoint', methods = ['POST'])
def testpoint():
name = request.args.get('name', '')
return jsonify(name = name)
| [
"You are POST-ing JSON, neither request.args nor request.form will work.\nrequest.form works only if you POST data with the right content types; form data is either POSTed with the application/x-www-form-urlencoded or multipart/form-data encodings.\nWhen you use application/json, you are no longer POSTing form data. Use request.get_json() to access JSON POST data instead:\[email protected]('/testpoint', methods = ['POST'])\ndef testpoint():\n name = request.get_json().get('name', '')\n return jsonify(name = name)\n\nAs you state, request.args only ever contains values included in the request query string, the optional part of a URL after the ? question mark. Since it’s part of the URL, it is independent from the POST request body.\n",
"Your json data in curl is wrong, so Flask does not parse data to form.\nSend data like this: '{\"name\":\"Joe\"}'\ncurl -X POST -d '{\"name\":\"Joe\"}' http://example.com:8080/testpoint --header \"Content-Type:application/json\"\n\n",
"just change args for form and it will work\[email protected]('/testpoint', methods = ['POST'])\ndef testpoint():\n name = request.form.get('name', '')`enter code here`\n return jsonify(name = name)\n\n"
] | [
63,
3,
0
] | [] | [] | [
"flask",
"json",
"post",
"python",
"rest"
] | stackoverflow_0023326368_flask_json_post_python_rest.txt |
Q:
What should be the correct code in order to get the factorial of n?
n=int(input("Enter a number: "))
p=1
for i in range(n):
p*=i
print(p)
I wanted to find out the factorial of a number but I always get 0 as output.
A:
The factorial of a number is the product of all the numbers from 1 to that number. However, in your code, you are starting the loop from 0 and then multiplying the product by the loop variable. This means that the product will always be 0 because any number multiplied by 0 is 0.
You can change the starting value of the loop variable to 1 instead of 0. This way, the product will be initialized to 1 and then multiplied by the numbers from 1 to n, which is the correct way to calculate the factorial of a number.
n = int(input("Enter a number: "))
p = 1
for i in range(1, n+1):
p *= i
print(p)
You could also just use the math library which is built-in.
import math
n = int(input("Enter a number: "))
p = math.factorial(n)
print(p)
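A related option on Python 3.8+ is math.prod, which makes the start-at-1 reasoning explicit; a small sketch:
import math

n = int(input("Enter a number: "))
# the product over range(1, n + 1); an empty range yields 1, so 0! == 1
print(math.prod(range(1, n + 1)))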
A:
The code you provided does not return the correct result because of how the loop variable i is initialized. The range() function starts i at 0, and multiplying the running product by 0 makes the whole result 0 (note that the factorial of 0 itself is defined as 1). Instead, the loop variable should start at 1 in order to correctly calculate the factorial.
Here is an example of how you can modify the code to correctly calculate the factorial of a number:
# Get the input number
n = int(input("Enter a number: "))
# Initialize the result to 1
p = 1
# Loop over the numbers from 1 to n
for i in range(1, n+1):
# Multiply the result by the current number
p *= i
# Print the result
print(p)
In this example, the loop variable i is initialized to 1 in the range() function, which ensures that the factorial is calculated correctly. The loop variable is incremented by 1 each time the loop is executed, and the result is multiplied by the current value of the loop variable. This allows the code to correctly calculate the factorial of any number.
| What should be the correct code in order to get the factorial of n? | n=int(input("Enter a number: "))
p=1
for i in range(n):
p*=i
print(p)
I wanted to find out the factorial of a number but I always get 0 as output.
| [
"The factorial of a number is the product of all the numbers from 1 to that number. However, in your code, you are starting the loop from 0 and then multiplying the product by the loop variable. This means that the product will always be 0 because any number multiplied by 0 is 0.\nYou can change the starting value of the loop variable to 1 instead of 0. This way, the product will be initialized to 1 and then multiplied by the numbers from 1 to n, which is the correct way to calculate the factorial of a number.\nn = int(input(\"Enter a number: \"))\np = 1\nfor i in range(1, n+1):\n p *= i\nprint(p)\n\nYou could also just use the math library which is built-in.\nimport math\n\nn = int(input(\"Enter a number: \"))\np = math.factorial(n)\nprint(p)\n\n",
"The code you provided does not return the correct result because the loop variable i is being used to calculate the factorial, but it is not initialized to the correct value. The i variable is initialized to 0 in the range() function, but the factorial of 0 is not defined. Instead, the loop variable should be initialized to 1 in order to correctly calculate the factorial.\nHere is an example of how you can modify the code to correctly calculate the factorial of a number:\n# Get the input number\nn = int(input(\"Enter a number: \"))\n\n# Initialize the result to 1\np = 1\n\n# Loop over the numbers from 1 to n\nfor i in range(1, n+1):\n # Multiply the result by the current number\n p *= i\n\n# Print the result\nprint(p)\n\nIn this example, the loop variable i is initialized to 1 in the range() function, which ensures that the factorial is calculated correctly. The loop variable is incremented by 1 each time the loop is executed, and the result is multiplied by the current value of the loop variable. This allows the code to correctly calculate the factorial of any number.\n"
] | [
0,
0
] | [] | [] | [
"factorial",
"numbers",
"python"
] | stackoverflow_0074674629_factorial_numbers_python.txt |
Q:
How to convert dataframe to nested dictionary with specific array and list?
How can I use a dataframe to create a nested dictionary, with interleaved lists and columns, as in the example below?
Create the dataframe:
columns = ["name","reason","cgc","limit","email","address","message","type","value"]
data = [("Paulo", "La Fava","123456","0","[email protected]","avenue A","msg txt 1","string","low"), ("Pedro", "Petrus","123457","20.00","[email protected]","avenue A","msg txt 2","string", "average"), ("Saulo", "Salix","123458","150.00","[email protected]","avenue B","msg txt 3","string","high")]
df = spark.createDataFrame(data).toDF(*columns)
df.show()
expected outcome
{
"accepted": [
{
"issuer": {
"name": "Paulo",
"reason": "La Fava",
"cgc": "123456"
},
"Recipient": {
"limit": "0",
"email": "[email protected]",
"address": "avenue A"
},
"additional_fields": [
{
"message": "msg txt 1",
"type": "string",
"value": "low"
}
]
}
]
}
A:
Arrays in Spark are homogeneous, i.e. the elements must have the same data type. In your sample expected output, the array type of "additional_fields" does not match the other two map fields, "issuer" & "recipient".
You have two ways to resolve this:
If you can relax "additional_fields" to be just a map (not an array) like "issuer" & "recipient", then you can use the following transformation:
from pyspark.sql import functions as F

df = df.withColumn("issuer", F.create_map(F.lit("name"), F.col("name"), \
F.lit("reason"), F.col("reason"), \
F.lit("cgc"), F.col("cgc"), \
)
) \
.withColumn("recipient", F.create_map(F.lit("limit"), F.col("limit"), \
F.lit("email"), F.col("email"), \
F.lit("address"), F.col("address"), \
)
) \
.withColumn("additional_fields", F.create_map(F.lit("message"), F.col("message"), \
F.lit("type"), F.col("type"), \
F.lit("value"), F.col("value"), \
)
) \
.withColumn("accepted", F.array(F.create_map(F.lit("issuer"), F.col("issuer"), \
F.lit("recipient"), F.col("recipient"), \
F.lit("additional_fields"), F.col("additional_fields"), \
))
) \
.drop(*[c for c in df.columns if c != "accepted"] + ["issuer", "recipient", "additional_fields"])
or, if you want to make "issuer" & "recipient" field types similar to "additional_fields" then use:
df = df.withColumn("issuer", F.array([F.create_map(F.lit(c), F.col(c)) for c in ["name", "reason", "cgc"]])) \
.withColumn("recipient", F.array([F.create_map(F.lit(c), F.col(c)) for c in ["limit", "email", "address"]])) \
.withColumn("additional_fields", F.array([F.create_map(F.lit(c), F.col(c)) for c in ["message", "type", "value"]])) \
.withColumn("accepted", F.array([F.create_map(F.lit(c), F.col(c)) for c in ["issuer", "recipient", "additional_fields"]])) \
.drop(*[c for c in df.columns if c != "accepted"] + ["issuer", "recipient", "additional_fields"])
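Either way, once the accepted column is built, the nested Python dictionary the question asks for can be obtained by serializing a row back to JSON; a short sketch:
import json

# each row of the single "accepted" column becomes a native Python structure
record = json.loads(df.toJSON().first())
print(record["accepted"])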
| How to convert dataframe to nested dictionary with specific array and list? | How can I use a dataframe to create a nested dictionary, with interleaved lists and columns, as in the example below?
Create the dataframe:
columns = ["name","reason","cgc","limit","email","address","message","type","value"]
data = [("Paulo", "La Fava","123456","0","[email protected]","avenue A","msg txt 1","string","low"), ("Pedro", "Petrus","123457","20.00","[email protected]","avenue A","msg txt 2","string", "average"), ("Saulo", "Salix","123458","150.00","[email protected]","avenue B","msg txt 3","string","high")]
df = spark.createDataFrame(data).toDF(*columns)
df.show()
expected outcome
{
"accepted": [
{
"issuer": {
"name": "Paulo",
"reason": "La Fava",
"cgc": "123456"
},
"Recipient": {
"limit": "0",
"email": "[email protected]",
"address": "avenue A"
},
"additional_fields": [
{
"message": "msg txt 1",
"type": "string",
"value": "low"
}
]
}
]
}
| [
"Arrays in Spark are homogeneous i.e. the elements should have same data type. In your sample expected output, the array type of \"additional_fields\" does not match with other two map fields \"issuer\" & \"recipient\".\nYou have two ways to resolve this:\nIf you can relax \"additional_fields\" to be just the map (not array) like \"issuer\" & \"recipient\", then you can use following transformation:\ndf = df.withColumn(\"issuer\", F.create_map(F.lit(\"name\"), F.col(\"name\"), \\\n F.lit(\"reason\"), F.col(\"reason\"), \\\n F.lit(\"cgc\"), F.col(\"cgc\"), \\\n )\n ) \\\n .withColumn(\"recipient\", F.create_map(F.lit(\"limit\"), F.col(\"limit\"), \\\n F.lit(\"email\"), F.col(\"email\"), \\\n F.lit(\"address\"), F.col(\"address\"), \\\n )\n ) \\\n .withColumn(\"additional_fields\", F.create_map(F.lit(\"message\"), F.col(\"message\"), \\\n F.lit(\"type\"), F.col(\"type\"), \\\n F.lit(\"value\"), F.col(\"value\"), \\\n )\n ) \\\n .withColumn(\"accepted\", F.array(F.create_map(F.lit(\"issuer\"), F.col(\"issuer\"), \\\n F.lit(\"recipient\"), F.col(\"recipient\"), \\\n F.lit(\"additional_fields\"), F.col(\"additional_fields\"), \\\n ))\n ) \\\n .drop(*[c for c in df.columns if c != \"accepted\"] + [\"issuer\", \"recipient\", \"additional_fields\"])\n\nor, if you want to make \"issuer\" & \"recipient\" field types similar to \"additional_fields\" then use:\ndf = df.withColumn(\"issuer\", F.array([F.create_map(F.lit(c), F.col(c)) for c in [\"name\", \"reason\", \"cgc\"]])) \\\n .withColumn(\"recipient\", F.array([F.create_map(F.lit(c), F.col(c)) for c in [\"limit\", \"email\", \"address\"]])) \\\n .withColumn(\"additional_fields\", F.array([F.create_map(F.lit(c), F.col(c)) for c in [\"message\", \"type\", \"value\"]])) \\\n .withColumn(\"accepted\", F.array([F.create_map(F.lit(c), F.col(c)) for c in [\"issuer\", \"recipient\", \"additional_fields\"]])) \\\n .drop(*[c for c in df.columns if c != \"accepted\"] + [\"issuer\", \"recipient\", \"additional_fields\"])\n\n"
] | [
0
] | [] | [] | [
"pandas",
"pyspark",
"python"
] | stackoverflow_0074669493_pandas_pyspark_python.txt |
Q:
How do I fill a list with with tuples using a for-loop in python?
I just finished implementing a working Python code for the Dijkstra-Pathfinding algorithm. I am applying this algorithm to a graph with edges, which I have written as a list of tuples:
graph = Graph([
("a", "b", 2),("a", "c", 5),
("a", "d", 2),("b", "c", 3),
("b", "e", 1),("c", "e", 1),
("c", "h", 1),("c", "f", 1),
("c", "d", 3),("d", "g", 2),
("e", "i", 7),("f", "h", 3),
("f", "g", 2),("h", "i", 1)])
I don't want to leave it like that and would rather fill the graph using a for-loop, but this is exactly where I fail.
I have tried writing
graph.append(("i", "j", "4"))
And several other variants using the append function but it just keeps giving me errors.
I am aware that this isn't a for-loop, I am simply trying to add one edge for now.
This is how I defined my add_edge function:
Edge = namedtuple('Edge', 'start, end, cost')
def add_edge(start, end, cost):
return Edge(start, end, cost)
A:
In this line the parenthesis are serving as a container for multiple string arguments.
graph.append("i", "j", "4")
You need to add a layer of nested parenthesis to indicate that the argument is a single tuple.
graph.append(("i", "j", "4"))
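Building on that fix, a for-loop that fills a plain list with the question's Edge namedtuples could look like this (edges_to_add is an illustrative name):
from collections import namedtuple

Edge = namedtuple('Edge', 'start, end, cost')

edges_to_add = [("i", "j", 4), ("j", "k", 5), ("k", "l", 6)]
graph = []
for start, end, cost in edges_to_add:
    graph.append(Edge(start, end, cost))  # one namedtuple per edge
print(graph)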
A:
To add an edge to a graph, you can use the add_edge method of the Graph class. This method takes three arguments: the source node, the destination node, and the weight of the edge.
Here is an example of how you might use the add_edge method to add an edge to your graph:
# Create a graph
graph = Graph([
("a", "b", 2),("a", "c", 5),
("a", "d", 2),("b", "c", 3),
("b", "e", 1),("c", "e", 1),
("c", "h", 1),("c", "f", 1),
("c", "d", 3),("d", "g", 2),
("e", "i", 7),("f", "h", 3),
("f", "g", 2),("h", "i", 1)])
# Add an edge to the graph
graph.add_edge("i", "j", 4)
If you want to add multiple edges to your graph using a for-loop, you can use the add_edge method inside the for-loop to add each edge. Here is an example of how you might do this:
# Create a list of edges to add to the graph
edges = [("i", "j", 4), ("j", "k", 5), ("k", "l", 6)]
# Create a graph
graph = Graph([
("a", "b", 2),("a", "c", 5),
("a", "d", 2),("b", "c", 3),
("b", "e", 1),("c", "e", 1),
("c", "h", 1),("c", "f", 1),
("c", "d", 3),("d", "g", 2),
("e", "i", 7),("f", "h", 3),
("f", "g", 2),("h", "i", 1)])
# Iterate over the edges in the list
for source, destination, weight in edges:
# Add the edge to the graph
graph.add_edge(source, destination, weight)
This should add the edges in the edges list to your graph.
| How do I fill a list with with tuples using a for-loop in python? | I just finished implementing a working Python code for the Dijkstra-Pathfinding algorithm. I am applying this algorithm to a graph with edges, which I have written as a list of tuples:
graph = Graph([
("a", "b", 2),("a", "c", 5),
("a", "d", 2),("b", "c", 3),
("b", "e", 1),("c", "e", 1),
("c", "h", 1),("c", "f", 1),
("c", "d", 3),("d", "g", 2),
("e", "i", 7),("f", "h", 3),
("f", "g", 2),("h", "i", 1)])
I don't want to leave it like that and would rather fill the graph using a for-loop, but this is exactly where I fail.
I have tried writing
graph.append(("i", "j", "4"))
And several other variants using the append function but it just keeps giving me errors.
I am aware that this isn't a for-loop, I am simply trying to add one edge for now.
This is how I defined my add_edge function:
Edge = namedtuple('Edge', 'start, end, cost')
def add_edge(start, end, cost):
return Edge(start, end, cost)
| [
"In this line the parenthesis are serving as a container for multiple string arguments.\ngraph.append(\"i\", \"j\", \"4\")\n\nYou need to add a layer of nested parenthesis to indicate that the argument is a single tuple.\ngraph.append((\"i\", \"j\", \"4\"))\n\n",
"To add an edge to a graph, you can use the add_edge method of the Graph class. This method takes three arguments: the source node, the destination node, and the weight of the edge.\nHere is an example of how you might use the add_edge method to add an edge to your graph:\n# Create a graph\ngraph = Graph([\n (\"a\", \"b\", 2),(\"a\", \"c\", 5),\n (\"a\", \"d\", 2),(\"b\", \"c\", 3),\n (\"b\", \"e\", 1),(\"c\", \"e\", 1),\n (\"c\", \"h\", 1),(\"c\", \"f\", 1),\n (\"c\", \"d\", 3),(\"d\", \"g\", 2),\n (\"e\", \"i\", 7),(\"f\", \"h\", 3),\n (\"f\", \"g\", 2),(\"h\", \"i\", 1)])\n\n# Add an edge to the graph\ngraph.add_edge(\"i\", \"j\", 4)\n\nIf you want to add multiple edges to your graph using a for-loop, you can use the add_edge method inside the for-loop to add each edge. Here is an example of how you might do this:\n# Create a list of edges to add to the graph\nedges = [(\"i\", \"j\", 4), (\"j\", \"k\", 5), (\"k\", \"l\", 6)]\n\n# Create a graph\ngraph = Graph([\n (\"a\", \"b\", 2),(\"a\", \"c\", 5),\n (\"a\", \"d\", 2),(\"b\", \"c\", 3),\n (\"b\", \"e\", 1),(\"c\", \"e\", 1),\n (\"c\", \"h\", 1),(\"c\", \"f\", 1),\n (\"c\", \"d\", 3),(\"d\", \"g\", 2),\n (\"e\", \"i\", 7),(\"f\", \"h\", 3),\n (\"f\", \"g\", 2),(\"h\", \"i\", 1)])\n\n# Iterate over the edges in the list\nfor source, destination, weight in edges:\n # Add the edge to the graph\n graph.add_edge(source, destination, weight)\n\n\nThis should add the edges in the edges list to your graph.\n"
] | [
0,
0
] | [] | [] | [
"algorithm",
"dijkstra",
"graph_theory",
"python",
"search"
] | stackoverflow_0074674611_algorithm_dijkstra_graph_theory_python_search.txt |
Q:
Changing a class value of a class attribute with default 0 through instance value
I am working with a certain script that calculates a discount, whose default is 0; however, special items have varied discounts, and my challenge is that I am unable to update the discount. Here's a sample code:
class Person():
def __init__(self, item, quantity, money,discount=0):
self.discount=discount
self.item=item
self.quantity=quantity
self.money=money
if self.money < quantity*1000:
print('Not enough money')
else:
self.quantity=quantity
if discount == 0:
self.money=self.money-self.quantity*1000
else:
self.money=self.money-self.quantity*1000*(1-discount)
class Privilage(Person):
def __init__(self, item, quantity, money, tag):
super().__init__(item, quantity, money,)
self.tag=tag
if self.tag == 'vip':
self.discount=0.1
elif self.tag == 'vvip':
self.discount=0.2
else:
self.discount=0
I tried changing the numbers and checking outputs by printing self.money, but they always pass through discount == 0 instead of the else branch, which should carry over the discount set by class Privilage. I also tried adding other methods, and that works; it simply won't pass in the class Person.
A:
I think your problem here is that you are trying to define the attributes of the superclass Person by the subclass Privilage. The subclass will inherit any attributes and methods from the superclass, but not vice versa.
A solution would be to move the if-else loop from Person to the Privilage class and then it works.
class Person():
def __init__(self, item, quantity, money,discount=0):
self.discount=discount
self.item=item
self.quantity=quantity
self.money=money
class Privilage(Person):
def __init__(self, item, quantity, money, tag):
super().__init__(item, quantity, money,)
self.tag=tag
# if loop to determine discount status
if self.tag == 'vip':
self.discount=0.1
elif self.tag == 'vvip':
self.discount=0.2
else:
self.discount=0
# if loop to check money with discount status
if self.money < quantity*1000:
print('Not enough money')
else:
self.quantity=quantity
if self.discount == 0:
self.money=self.money-self.quantity*1000
else:
self.money=self.money-self.quantity*1000*(1-self.discount)
print('-------------------------------------------------------')
bob = Privilage(item='jacket', quantity=4, money=50000, tag='vip')
print("Bob has:", bob.discount, bob.money)
sue = Privilage(item='jacket', quantity=5, money=4000, tag=0)
print("Sue has:", sue.discount, sue.money)
john = Privilage(item='jacket', quantity=10, money=100000, tag='vvip')
print("John has:", john.discount, john.money)
Resulting output:
-------------------------------------------------------
Bob has: 0.1 46400.0
Not enough money
Sue has: 0 4000
John has: 0.2 92000.0
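An alternative that keeps the billing logic in Person is to resolve the discount before delegating, passing it through the existing discount parameter; a sketch (the DISCOUNTS mapping is my own addition):
class Privilage(Person):
    DISCOUNTS = {"vip": 0.1, "vvip": 0.2}

    def __init__(self, item, quantity, money, tag):
        self.tag = tag
        # resolve the discount first, then let Person handle the billing
        super().__init__(item, quantity, money,
                         discount=self.DISCOUNTS.get(tag, 0))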
| Changing a class value of a class attribute with default 0 through instance value | I am working with a certain script that calculates a discount, whose default is 0; however, special items have varied discounts, and my challenge is that I am unable to update the discount. Here's a sample code:
class Person():
def __init__(self, item, quantity, money,discount=0):
self.discount=discount
self.item=item
self.quantity=quantity
self.money=money
if self.money < quantity*1000:
print('Not enough money')
else:
self.quantity=quantity
if discount == 0:
self.money=self.money-self.quantity*1000
else:
self.money=self.money-self.quantity*1000*(1-discount)
class Privilage(Person):
def __init__(self, item, quantity, money, tag):
super().__init__(item, quantity, money,)
self.tag=tag
if self.tag == 'vip':
self.discount=0.1
elif self.tag == 'vvip':
self.discount=0.2
else:
self.discount=0
I tried changing the numbers and checking outputs by printing self.money, but they always pass through discount == 0 instead of the else branch, which should carry over the discount set by class Privilage. I also tried adding other methods, and that works; it simply won't pass in the class Person.
| [
"I think your problem here is that you are trying to define the attributes of the superclass Person by the subclass Privilage. The subclass will inherit any attributes and methods from the superclass, but not vice versa.\nA solution would be to move the if-else loop from Person to the Privilage class and then it works.\nclass Person():\n def __init__(self, item, quantity, money,discount=0):\n self.discount=discount\n self.item=item\n self.quantity=quantity\n self.money=money\n \nclass Privilage(Person):\n def __init__(self, item, quantity, money, tag):\n super().__init__(item, quantity, money,)\n self.tag=tag\n \n # if loop to determine discount status\n if self.tag == 'vip':\n self.discount=0.1\n elif self.tag == 'vvip':\n self.discount=0.2\n else:\n self.discount=0\n \n # if loop to check money with discount status\n if self.money < quantity*1000:\n print('Not enough money')\n else:\n self.quantity=quantity\n if self.discount == 0:\n self.money=self.money-self.quantity*1000\n else:\n self.money=self.money-self.quantity*1000*(1-self.discount)\n \n\nprint('-------------------------------------------------------')\nbob = Privilage(item='jacket', quantity=4, money=50000, tag='vip')\nprint(\"Bob has:\", bob.discount, bob.money)\n\nsue = Privilage(item='jacket', quantity=5, money=4000, tag=0)\nprint(\"Sue has:\", sue.discount, sue.money)\n\njohn = Privilage(item='jacket', quantity=10, money=100000, tag='vvip')\nprint(\"John has:\", john.discount, john.money)\n\nResulting output:\n-------------------------------------------------------\nBob has: 0.1 46400.0\nNot enough money\nSue has: 0 4000\nJohn has: 0.2 92000.0\n\n"
] | [
0
] | [] | [] | [
"class",
"inheritance",
"methods",
"oop",
"python"
] | stackoverflow_0074674186_class_inheritance_methods_oop_python.txt |
Q:
Start / Resume Generator without using next
Is there a way to continue a function based on where it was last run?
We want each call to do something else, e.g. (first call adds 1, second adds 2, third call adds 3), and then do something else.
def a_generator():
yield lambda x: x + 1
yield lambda x: x + 2
yield lambda x: x + 3
yield lambda x: f"Okay we are almost complete {x}"
generator = a_generator()
What currently works:
assert next(generator)(5) == 6
assert next(generator)(5) == 7
assert next(generator)(5) == 8
assert next(generator)(5) == "Okay we are almost complete 5"
What I want to be able to do:
assert generator(5) == 6
assert generator(5) == 7
assert generator(5) == 8
assert generator(5) == "Okay we are almost complete 5"
A:
Your code does that already, but consider that you have a generator that returns functions, and treat it accordingly:
def a_generator():
yield lambda x: x + 1
yield lambda x: x + 2
yield lambda x: x + 3
yield lambda x: f"Okay we are almost complete {x}"
for generator in a_generator():
print(generator(5))
6
7
8
Okay we are almost complete 5
I'm not sure if this is exactly what you want, but it seems pretty close so I'll leave it here unless you can narrow down the requirements.
Fundamentally, a generator already is a function that remembers where it was. But you have a generator generating other functions (which are not generators).
A:
So the solution was that I needed to create a generator consumer helper function, and run that instead.
generator = a_generator()
def generator_consumer(x, generator=generator):
try:
return next(generator)(x)
except StopIteration:
raise ValueError("Can't Run the generator anymore")
And having the kwarg generator=generator means I can just pass args as normal, without needing to specify the generator.
Then I can call the function without calling next.
assert generator_consumer(5) == 6
assert generator_consumer(5) == 7
assert generator_consumer(5) == 8
assert generator_consumer(5) == "Okay we are almost complete 5"
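Another option, if you prefer the exact call syntax from the question without a module-level helper, is a tiny wrapper class with __call__ (a sketch; CallableGenerator is my own name):
class CallableGenerator:
    """Wrap a generator of functions so each call advances it one step."""

    def __init__(self, gen):
        self._gen = gen

    def __call__(self, x):
        try:
            return next(self._gen)(x)
        except StopIteration:
            raise ValueError("Can't run the generator anymore")

generator = CallableGenerator(a_generator())
assert generator(5) == 6
assert generator(5) == 7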
| Start / Resume Generator without using next | Is there a way to continue a function based on where it was last run?
We want each call to do something else, e.g. (first call adds 1, second adds 2, third call adds 3), and then do something else.
def a_generator():
yield lambda x: x + 1
yield lambda x: x + 2
yield lambda x: x + 3
yield lambda x: f"Okay we are almost complete {x}"
generator = a_generator()
What currently works:
assert next(generator)(5) == 6
assert next(generator)(5) == 7
assert next(generator)(5) == 8
assert next(generator)(5) == "Okay we are almost complete 5"
What I want to be able to do:
assert generator(5) == 6
assert generator(5) == 7
assert generator(5) == 8
assert generator(5) == "Okay we are almost complete 5"
| [
"Your code does that already, but consider that you have a generator that returns functions, and treat it accordingly:\ndef a_generator():\n yield lambda x: x + 1\n yield lambda x: x + 2\n yield lambda x: x + 3\n yield lambda x: f\"Okay we are almost complete {x}\"\n\nfor generator in a_generator():\n print(generator(5))\n\n\n6\n7\n8\nOkay we are almost complete 5\n\nI'm not sure if this is exactly what you want, but it seems pretty close so I'll leave it here unless you can narrow down the requirements.\nFundamentally, a generator already is a function that remembers where it was. But you have a generator generating other functions (which are not generators).\n",
"So the solution was that I needed to create a generator consumer helper function, and run that instead.\ngenerator = a_generator()\n\ndef generator_consumer(x, generator=generator):\n try:\n return next(generator)(x)\n except StopIteration:\n raise ValueError(\"Can't Run the generator anymore\")\n\nAnd having the kwarg generator=generator means I can just pass args as normal, without needing to specify the generator.\nThen I can call the function without calling next.\nassert generator_consumer(5) == 6\nassert generator_consumer(5) == 7\nassert generator_consumer(5) == 8\nassert generator_consumer(5) == \"Okay we are almost complete 5\"\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074663305_python.txt |
Q:
RuntimeWarning: overflow encountered in exp predictions = 1 / (1 + np.exp(-predictions))
This is the code I'm trying to implement for the dataset file; as mentioned in the title, the result just gives 0 and the error:
RuntimeWarning: overflow encountered in exp
predictions = 1 / (1 + np.exp(-predictions))
I have tried many solutions from other code related to this prediction, but the result is still the same.
import numpy as np
import pandas as pd
dataset = pd.read_csv('data.csv')
dataset = (dataset - dataset.mean()) / dataset.std()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.25, random_state=42)
def logisticRegression_model(X, y, learning_rate, num_epochs):
weights = np.zeros(X.shape[1])
for epoch in range(num_epochs):
logisticRegression_update_weights(X, y, weights, learning_rate)
return weights
def logisticRegression_update_weights(X, y, weights, learning_rate):
gradient = logisticRegression_calculate_gradient(X, y, weights)
weights += learning_rate * gradient
return weights
def logisticRegression_calculate_gradient(X, y, weights):
#calculating the predictions
predictions = logisticRegression_predict(X, weights)
#calculating the errors
error = y - predictions
gradient = np.dot(X.T, error)
return gradient
def logisticRegression_predict(X, weights):
predictions = np.dot(X, weights)
predictions = 1 / (1 + np.exp(-predictions))
return predictions
def logisticRegression_accuracy(y_true, y_pred):
accuracy = np.sum(y_true == y_pred) / len(y_true)
return accuracy
def logisticRegression_train(X_train, y_train, learning_rate, num_epochs):
weights = logisticRegression_model(X_train, y_train, learning_rate, num_epochs)
return weights
weights = logisticRegression_train(X_train, y_train, 0.1, 1000)
y_pred_train = logisticRegression_predict(X_train, weights)
y_pred_test = logisticRegression_predict(X_test, weights)
y_pred_train = (y_pred_train > 0.5).astype(int)
y_pred_test = (y_pred_test > 0.5).astype(int)
acc_train = logisticRegression_accuracy(y_train, y_pred_train)
acc_test = logisticRegression_accuracy(y_test, y_pred_test)
print('Train accuracy:', acc_train)
print('Test accuracy:', acc_test)
A:
The RuntimeWarning: overflow encountered in exp warning indicates that the exp function in NumPy has encountered an overflow error. This means that the input value to the exp function is too large, and the function cannot compute the exponential of this value.
The exp function in NumPy computes the exponential of a given input value. The exponential function is defined as exp(x) = e^x, where e is the base of the natural logarithm and x is the input value. When the input value is too large, the exp function can encounter an overflow error because the result of the computation is too large to be represented as a floating-point number.
To avoid the RuntimeWarning: overflow encountered in exp warning, you can use the numpy.clip function to limit the input values to the exp function within a certain range. The numpy.clip function allows you to specify a minimum and maximum value for the input, and any input values outside this range will be clipped to the minimum or maximum value.
Here is an example of how to use the numpy.clip function to avoid the RuntimeWarning: overflow encountered in exp warning:
import numpy as np
# Define a large input value
x = 1e100
# Compute the exponential of the input value
y = np.exp(x)
# Print the result
print(y)
In this example, the input value x is set to a large value (1e100), and the exp function is used to compute the exponential of this value. When you run this program, it will output the result of the computation, which is inf (infinity), as shown below:
inf
However, this program will also generate the RuntimeWarning: overflow encountered in exp warning because the input value is too large for the exp function to compute.
To avoid this warning, you can use the numpy.clip function to limit the input value to the exp function within a certain range. Here is an example of how to do this:
import numpy as np
# Define a large input value
x = 1e100
# Use the numpy.clip function to limit the input value
x = np.clip(x, -709, 709)
# Compute the exponential of the input value
y = np.exp(x)
# Print the result
print(y)
In this example, the numpy.clip function limits the input value x to the range (-709, 709); for double-precision floats, np.exp overflows for inputs above roughly 709, so the clipped value stays computable. When you run this program, it outputs a large but finite number (about 8.2e307) instead of inf, and it does not generate the RuntimeWarning: overflow encountered in exp warning because the input value is now within a valid range for the exp function.
I hope this helps you understand the RuntimeWarning: overflow encountered in exp warning and how to avoid it using the numpy.clip function in NumPy.
A:
This warning occurs because the result of the exponential function exceeds the maximum value representable by floating-point (FP) numbers. FP numbers have a limited number of bits to store their exponent in scientific notation, so large results eventually overflow.
This warning is relatively common, and it has no serious consequences (numpy is smart enough to handle the situation, and realize if the number actually corresponds to inf, nan, 0, etc.).
You can even suppress the warning message as follows:
import numpy as np
import warnings
warnings.filterwarnings('ignore')
print(1/np.exp(999999999999))
https://www.statology.org/runtimewarning-overflow-encountered-in-exp/#:~:text=This%20warning%20occurs%20when%20you,provides%20the%20warning%20by%20default.
Unfortunately, the real issue in the OP's code (not getting the right result) is a separate problem.
PS. If you wrote a code where warnings should not occur at all (because they are related to numerical issues, bugs, etc), you can also transform all numpy warnings into system errors:
numpy.seterr(all='raise')
Now the previous code would crash:
print(1/np.exp(999999999999))
FloatingPointError: overflow encountered in exp
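Coming back to the logistic regression in the question: the overflow happens because large-magnitude dot products reach np.exp, so a numerically stable sigmoid removes the warning at its source. A sketch (stable_sigmoid is an illustrative name; scipy.special.expit does the same job if SciPy is available):
import numpy as np

def stable_sigmoid(z):
    # piecewise form of 1 / (1 + exp(-z)) that never feeds exp a large positive argument:
    # for z >= 0 use 1 / (1 + exp(-z)); for z < 0 use exp(z) / (1 + exp(z))
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out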
| RuntimeWarning: overflow encountered in exp predictions = 1 / (1 + np.exp(-predictions)) | This is the code I'm trying to implement for the dataset file; as mentioned in the title, the result just gives 0 and the error:
RuntimeWarning: overflow encountered in exp
predictions = 1 / (1 + np.exp(-predictions))
I have tried many solutions from other code related to this prediction, but the result is still the same.
import numpy as np
import pandas as pd
dataset = pd.read_csv('data.csv')
dataset = (dataset - dataset.mean()) / dataset.std()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.25, random_state=42)
def logisticRegression_model(X, y, learning_rate, num_epochs):
weights = np.zeros(X.shape[1])
for epoch in range(num_epochs):
logisticRegression_update_weights(X, y, weights, learning_rate)
return weights
def logisticRegression_update_weights(X, y, weights, learning_rate):
gradient = logisticRegression_calculate_gradient(X, y, weights)
weights += learning_rate * gradient
return weights
def logisticRegression_calculate_gradient(X, y, weights):
#calculating the predictions
predictions = logisticRegression_predict(X, weights)
#calculating the errors
error = y - predictions
gradient = np.dot(X.T, error)
return gradient
def logisticRegression_predict(X, weights):
predictions = np.dot(X, weights)
predictions = 1 / (1 + np.exp(-predictions))
return predictions
def logisticRegression_accuracy(y_true, y_pred):
accuracy = np.sum(y_true == y_pred) / len(y_true)
return accuracy
def logisticRegression_train(X_train, y_train, learning_rate, num_epochs):
weights = logisticRegression_model(X_train, y_train, learning_rate, num_epochs)
return weights
weights = logisticRegression_train(X_train, y_train, 0.1, 1000)
y_pred_train = logisticRegression_predict(X_train, weights)
y_pred_test = logisticRegression_predict(X_test, weights)
y_pred_train = (y_pred_train > 0.5).astype(int)
y_pred_test = (y_pred_test > 0.5).astype(int)
acc_train = logisticRegression_accuracy(y_train, y_pred_train)
acc_test = logisticRegression_accuracy(y_test, y_pred_test)
print('Train accuracy:', acc_train)
print('Test accuracy:', acc_test)
| [
"The RuntimeWarning: overflow encountered in exp warning indicates that the exp function in NumPy has encountered an overflow error. This means that the input value to the exp function is too large, and the function cannot compute the exponential of this value.\nThe exp function in NumPy computes the exponential of a given input value. The exponential function is defined as exp(x) = e^x, where e is the base of the natural logarithm and x is the input value. When the input value is too large, the exp function can encounter an overflow error because the result of the computation is too large to be represented as a floating-point number.\nTo avoid the RuntimeWarning: overflow encountered in exp warning, you can use the numpy.clip function to limit the input values to the exp function within a certain range. The numpy.clip function allows you to specify a minimum and maximum value for the input, and any input values outside this range will be clipped to the minimum or maximum value.\nHere is an example of how to use the numpy.clip function to avoid the RuntimeWarning: overflow encountered in exp warning:\nimport numpy as np\n\n# Define a large input value\nx = 1e100\n\n# Compute the exponential of the input value\ny = np.exp(x)\n\n# Print the result\nprint(y)\n\n\nIn this example, the input value x is set to a large value (1e100), and the exp function is used to compute the exponential of this value. When you run this program, it will output the result of the computation, which is inf (infinity), as shown below:\ninf\n\nHowever, this program will also generate the RuntimeWarning: overflow encountered in exp warning because the input value is too large for the exp function to compute.\nTo avoid this warning, you can use the numpy.clip function to limit the input value to the exp function within a certain range. Here is an example of how to do this:\nimport numpy as np\n\n# Define a large input value\nx = 1e100\n\n# Use the numpy.clip function to limit the input value\nx = np.clip(x, -np.inf, np.inf)\n\n# Compute the exponential of the input value\ny = np.exp(x)\n\n# Print the result\nprint(y)\n\n\nIn this example, the numpy.clip function is used to limit the input value x within the range (-inf, inf). This ensures that the input value is not too large for the exp function to compute. When you run this program, it will output the same result as before (inf), but it will not generate the RuntimeWarning: overflow encountered in exp warning because the input value is now within a valid range for the exp function.\nI hope this helps you understand the RuntimeWarning: overflow encountered in exp warning and how to avoid it using the numpy.clip function in NumPy. Let me know if you have any other questions or need any further assistance.\n",
"This warning occurs because the exponential function exceeds the maximum value accepted for Floating Point (FP) numbers. FP numbers have a limited number of bits to store their exponent in scientific notation, so they can eventually overflow.\nThis warning is relatively common, and it has no serious consequences (numpy is smart enough to handle the situation, and realize if the number actually corresponds to inf, nan, 0, etc.).\nYou can even supress the warning message as follows:\nimport numpy as np\nimport warnings\nwarnings.filterwarnings('ignore')\nprint(1/np.exp(999999999999))\n\nhttps://www.statology.org/runtimewarning-overflow-encountered-in-exp/#:~:text=This%20warning%20occurs%20when%20you,provides%20the%20warning%20by%20default.\nUnfortunately, the issue in the OP code is related to another problem (that is not giving the right result).\n\nPS. If you wrote a code where warnings should not occur at all (because they are related to numerical issues, bugs, etc), you can also transform all numpy warnings into system errors:\nnumpy.seterr(all='raise') \n\nNow the previous code would crash:\nprint(1/np.exp(999999999999))\nFloatingPointError: overflow encountered in exp\n\n"
] | [
1,
0
] | [] | [] | [
"logistic_regression",
"python"
] | stackoverflow_0074674245_logistic_regression_python.txt |
Q:
python: keep only unique combinations from two columns in either order of dataframe
I have a problem very similar to the question here: Unique combination of two columns with mixed values
however my original dataframe has an additional column of values. This value is always the same for each combination (ie A,B,5 and B,A,5). My plan is to essentially ignore it when creating the key column and then drop duplicate keys.
My ideal solution would be a modified version of the df['key'] = np.sort(df.to_numpy(), axis=1).sum(1) solution that accounts for the third column since as is I get the error TypeError: '<' not supported between instances of 'float' and 'str'
I also tried network['key'] = np.sort(network['col1', 'col2'].to_numpy(), axis=1).sum(1) but I get KeyError: ('col1', 'col2')
I have also tried modifying the solution here: Python: Pandas: two columns with same values, alphabetically sorted and stored
to be df['key'] = np.minimum(df['col1'], df['col2']) + np.maximum(df['col1'], df['col2']) but I get a very long message starting with A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead
I have also tried the following solutions with no luck:
(pandas) Drop duplicates based on subset where order doesn't matter
Pandas complicated duplicate removal with three comparisons to other rows
(pandas) Drop duplicates based on subset where order doesn't matter
example input:
col1  col2  col3
A     B     5
B     A     5
desired output:
col1  col2  col3
A     B     5
A:
With the following toy dataframe:
import pandas as pd
df = pd.DataFrame(
{
"p1": ["a", "b", "a", "a", "b", "d", "c"],
"p2": ["b", "a", "c", "d", "c", "a", "b"],
"value": [1, 1, 2, 3, 5, 3, 5],
},
columns=["p1", "p2", "value"],
)
print(df)
# Output
p1 p2 value
0 a b 1
1 b a 1
2 a c 2
3 a d 3
4 b c 5
5 d a 3
6 c b 5
Here is one way to do it:
df = df.loc[
    # join the sorted characters into a hashable string key before dropping duplicates
    (df["p1"] + df["p2"]).apply(lambda s: "".join(sorted(s))).drop_duplicates(keep="first").index, :
].reset_index(drop=True)
Then:
p1 p2 value
0 a b 1
1 a c 2
2 a d 3
3 b c 5
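An alternative closer to the question's original np.sort attempt (the KeyError came from network['col1', 'col2'] instead of network[['col1', 'col2']]) is to sort just the two string columns; a sketch on the same toy data:
import numpy as np

# sort each (p1, p2) pair alphabetically so "b", "a" and "a", "b" share a key
key = np.sort(df[["p1", "p2"]].to_numpy(), axis=1)
df["key"] = key[:, 0] + key[:, 1]
out = df.drop_duplicates(subset="key", keep="first").drop(columns="key")
print(out)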
| python: keep only unique combinations from two columns in either order of dataframe | I have a problem very similar to the question here: Unique combination of two columns with mixed values
however my original dataframe has an additional column of values. This value is always the same for each combination (ie A,B,5 and B,A,5). My plan is to essentially ignore it when creating the key column and then drop duplicate keys.
My ideal solution would be a modified version of the df['key'] = np.sort(df.to_numpy(), axis=1).sum(1) solution that accounts for the third column since as is I get the error TypeError: '<' not supported between instances of 'float' and 'str'
I also tried network['key'] = np.sort(network['col1', 'col2'].to_numpy(), axis=1).sum(1) but I get KeyError: ('col1', 'col2')
I have also tried modifying the solution here: Python: Pandas: two columns with same values, alphabetically sorted and stored
to be df['key'] = np.minimum(df['col1'], df['col2']) + np.maximum(df['col1'], df['col2']) but I get a very long message starting with A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead
I have also tried the following solutions with no luck:
(pandas) Drop duplicates based on subset where order doesn't matter
Pandas complicated duplicate removal with three comparisons to other rows
(pandas) Drop duplicates based on subset where order doesn't matter
example input:
col1  col2  col3
A     B     5
B     A     5
desired output:
col1  col2  col3
A     B     5
| [
"With the following toy dataframe:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n \"p1\": [\"a\", \"b\", \"a\", \"a\", \"b\", \"d\", \"c\"],\n \"p2\": [\"b\", \"a\", \"c\", \"d\", \"c\", \"a\", \"b\"],\n \"value\": [1, 1, 2, 3, 5, 3, 5],\n },\n columns=[\"p1\", \"p2\", \"value\"],\n)\n\nprint(df)\n# Output\n p1 p2 value\n0 a b 1\n1 b a 1\n2 a c 2\n3 a d 3\n4 b c 5\n5 d a 3\n6 c b 5\n\nHere is one way to do it:\ndf = df.loc[\n (df[\"p1\"] + df[\"p2\"]).apply(sorted).drop_duplicates(keep=\"first\").index, :\n].reset_index(drop=True)\n\nThen:\n p1 p2 value\n0 a b 1\n1 a c 2\n2 a d 3\n3 b c 5\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python",
"sorting"
] | stackoverflow_0074618025_pandas_python_sorting.txt |
Q:
Need only parent tag when i parse a html tag using beautifulsoup
If i specify li in find_all() method i should only get the parent elements and not all li elements in the html. ofcourse it makes sense that find_all() takes all li into consideration but i can use child or parent in the loop to get the child list elements. I'm trying to parse only the parent tags and print them in a single block. nested li should not be taken into consideration, Please help!
<html>
<p>
something
</p>
<li>
text i need
</li>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>
when i use
soup = BeautifulSoup(fp, 'html.parser')
for list in soup.find_all("li"):
output_text = list.get_text()
print(output_text)
print("--sep--")
My result should be
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
But my result is
text i need
--sep--
text i need
--sep--
text i need but appended to parent li tag
--sep--
text i need but appended to parent li tag
--sep--
text i need
--sep--
A:
Try to use .find_parent to filter-out unwanted <li>:
from bs4 import BeautifulSoup
html_doc = """\
<html>
<p>
something
</p>
<li>
text i need
</li>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>"""
soup = BeautifulSoup(html_doc, "html.parser")
for li in soup.select("li"):
if li.find_parent("li"):
continue
print(" ".join(li.text.split()))
print("--sep--")
Prints:
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
| Need only parent tag when i parse a html tag using beautifulsoup | If i specify li in find_all() method i should only get the parent elements and not all li elements in the html. ofcourse it makes sense that find_all() takes all li into consideration but i can use child or parent in the loop to get the child list elements. I'm trying to parse only the parent tags and print them in a single block. nested li should not be taken into consideration, Please help!
<html>
<p>
something
</p>
<li>
text i need
</li>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>
when i use
soup = BeautifulSoup(fp, 'html.parser')
for list in soup.find_all("li"):
output_text = list.get_text()
print(output_text)
print("--sep--")
My result should be
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
But my result is
text i need
--sep--
text i need
--sep--
text i need but appended to parent li tag
--sep--
text i need but appended to parent li tag
--sep--
text i need
--sep--
| [
"Try to use .find_parent to filter-out unwanted <li>:\nfrom bs4 import BeautifulSoup\n\nhtml_doc = \"\"\"\\\n<html>\n<p>\nsomething\n</p>\n<li>\ntext i need\n</li>\n<li>\ntext i need \n <ol>\n <li>\n text i need but appended to parent li tag\n </li>\n <li>\n text i need but appended to parent li tag\n </li>\n </ol>\n</li>\n<li>\ntext i need\n</li>\"\"\"\n\nsoup = BeautifulSoup(html_doc, \"html.parser\")\n\nfor li in soup.select(\"li\"):\n if li.find_parent(\"li\"):\n continue\n print(\" \".join(li.text.split()))\n print(\"--sep--\")\n\nPrints:\ntext i need\n--sep--\ntext i need text i need but appended to parent li tag text i need but appended to parent li tag\n--sep--\ntext i need\n--sep--\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"python_3.x"
] | stackoverflow_0074674636_beautifulsoup_html_python_python_3.x.txt |
Q:
Recognizing matrix from image
I have written an algorithm that solves the pluszle game matrix.
The input is a numpy array.
Now I want to recognize the digits of the matrix from a screenshot.
There are different levels; this is a hard one,
and this is an easy one.
The output of the recognition should be a numpy array:
array([[6, 2, 4, 2],
[7, 8, 9, 7],
[1, 2, 4, 4],
[7, 2, 4, 0]])
I have tried to feed the last image to tesseract:
from PIL import Image
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
print(pytesseract.image_to_string(Image.open('C:/Users/79017/screen_plus.jpg')))
The output is unacceptable
LEVEL 4
(}00:03 M0
J] —.°—@—@©
I think that I should use contours from opencv, because the font is always the same. Maybe I should save contours for every digit, then save every contour that exists on the screenshot, then somehow make the matrix from the coordinates of every digit contour. But I have no idea how to do it.
A:
1- Binarize
Tesseract needs you to binarize the image first. No need for contours or any convolution here. Just a threshold should do. Especially considering that you are trying to che... I mean win intelligently at a specific game. So I guess you are open to some ad-hoc adjustments.
For example, (hard<240).any(axis=2) puts in white (True) everything that is not white on the original image, and in black the white parts.
Note that you don't get the sums (or whatever they are, I don't know what this game is) here, which are on the contrary almost black areas.
But you can have them with another filter
(hard>120).any(axis=2)
You could merge those filters, obviously
(hard<240).any(axis=2) & (hard>120).any(axis=2)
But that may not be a good idea: after all, it gives you an opportunity to distinguish two different kinds of data, which you may want to do.
2- Restrict
Secondly, you know you are looking for digits, so, restrict to digits. By adding config='digits' to your pytesseract args.
pytesseract.image_to_string((hard>240).all(axis=2))
# 'LEVEL10\nNOVEMBER 2022\n\n™\noe\nOs\nfoo)\nso\n‘|\noO\n\n9949 6 2 2 8\n\nN W\nN ©\nOo w\nVon\n+? ah ®)\nas\noOo\n©\n\n \n\x0c'
pytesseract.image_to_string((hard>240).all(axis=2), config='digits')
# '10\n2022\n\n99496228\n\n17\n-\n\n \n\x0c'
3- Don't use image_to_string
Use image_to_data preferably.
It gives you bounding boxes of text.
Or even image_to_boxes, which gives you digits one by one, with coordinates.
Because image_to_string is for when you have a good old linear text in the image. image_to_data or image_to_boxes assume that text is distributed all around, and give you pieces of text with their positions.
image_to_string on such an image may swap what you would consider the logical order.
4- Select areas yourself
Since it is an ad-hoc usage for a specific application, you know where the data are.
For example, your main matrix seems to be in area
hard[740:1512, 132:910]
See
print(pytesseract.image_to_boxes((hard[740:1512, 132:910]<240).any(axis=2), config='digits'))
Not only does it avoid flooding you with irrelevant data, but tesseract also performs better when called only with an image containing nothing other than what you want to read.
Seems to have almost all your digits here.
5- Don't expect miracles
Tesseract is one of the best OCR. But OCR are not a sure thing...
See what I get with this code (summarizing what I've said so far)
hard=hard[740:1512, 132:910]
hard=(hard<240).any(axis=2)
boxes=[s.split(' ') for s in pytesseract.image_to_boxes(hard, config='digits').split('\n')[:-1]]
out=255*np.stack([hard, hard, hard], axis=2).astype(np.uint8)
H=len(hard)
for b in boxes:
cv2.putText(out, b[0], (30+int(b[1]), H-int(b[2])), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0), 2)
As you can see, results are fairly good. But there are 5 missing numbers. And one 3 was read as "3.".
For this kind of ad-hoc reading of an app, I wouldn't even use tesseract. I am pretty sure that, with trial and error, you can easily learn to extract each digit's box yourself (they are linearly spaced in both dimensions).
And then, inside each box, well, there are only 9 possible values. It should be quite easy, on a generated image, to find some simple criteria, such as the number of white pixels, the number of white pixels in the top area, ..., that permit a very simple classification.
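A minimal sketch of that last idea (the 4x4 grid shape and the reference pixel counts are assumptions you would calibrate on your own screenshots):
import numpy as np

def split_cells(binary, rows=4, cols=4):
    # Cut the binarized matrix area into equally sized cells
    h, w = binary.shape
    return [binary[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            for r in range(rows) for c in range(cols)]

def classify(cell, reference):
    # reference: {digit: typical white-pixel count}, measured beforehand
    n = int(cell.sum())
    return min(reference, key=lambda d: abs(reference[d] - n))

reference = {0: 980, 1: 420, 2: 860, 3: 870, 4: 700,
             5: 880, 6: 930, 7: 560, 8: 1010, 9: 940}  # made-up calibration values
binary = np.zeros((772, 778), dtype=bool)  # stands in for the binarized `hard` above
matrix = np.array([classify(c, reference) for c in split_cells(binary)]).reshape(4, 4)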
A:
You might want to pre-process the image first. By applying a filter, you can, for example, get the contours of an image.
The basic idea of a filter is to 'slide' some matrix of values over the image, and multiply every pixel value by the value inside the matrix. This process is called convolution.
Convolution helps out here, because all irrelevant information is discarded, and thus it is made easier for tesseract to 'read' the image.
This might help you out: https://medium.com/swlh/image-processing-with-python-convolutional-filters-and-kernels-b9884d91a8fd
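A tiny illustration of that idea with a horizontal-gradient kernel:
import numpy as np
from scipy.signal import convolve2d

image = np.array([[0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [0, 0, 255, 255]], dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])  # Prewitt-style horizontal-gradient kernel
edges = convolve2d(image, kernel, mode="same")  # strong response at the 0/255 boundary
print(edges)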
| Recognizing matrix from image | I have written an algorithm that solves the pluszle game matrix.
The input is a numpy array.
Now I want to recognize the digits of the matrix from a screenshot.
There are different levels; this is a hard one,
and this is an easy one.
The output of the recognition should be a numpy array:
array([[6, 2, 4, 2],
[7, 8, 9, 7],
[1, 2, 4, 4],
[7, 2, 4, 0]])
I have tried to feed the last image to tesseract:
from PIL import Image
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
print(pytesseract.image_to_string(Image.open('C:/Users/79017/screen_plus.jpg')))
The output is unacceptable
LEVEL 4
(}00:03 M0
J] —.°—@—@©
I think that I should use contours from opencv, because the font is always the same. Maybe I should save contours for every digit, then save every contour that exists on the screenshot, then somehow make the matrix from the coordinates of every digit contour. But I have no idea how to do it.
| [
"1- Binarize\nTesseract needs you to binarize the image first. No need for contour or any convolution here. Just a threshold should do. Especially considering that you are trying to che... I mean win intelligently to a specific game. So I guess you are open to some ad-hoc adjustments.\nFor example, (hard<240).any(axis=2) put in white (True) everything that is not white on the original image, and black the white parts.\n\nNote that you don't the the sums (or whatever they are, I don't know what this game is) here. Which are on the contrary almost black areas\nBut you can have them with another filter\n(hard>120).any(axis=2)\n\n\nYou could merge those filters, obviously\n(hard<240).any(axis=2) & (hard>120).any(axis=2)\n\n\nBut that may not be a good idea: after all, it gives you an opportunity to distinguish to different kind of data, why you may want to do.\n2- Restrict\nSecondly, you know you are looking for digits, so, restrict to digits. By adding config='digits' to your pytesseract args.\npytesseract.image_to_string((hard>240).all(axis=2))\n# 'LEVEL10\\nNOVEMBER 2022\\n\\n™\\noe\\nOs\\nfoo)\\nso\\n‘|\\noO\\n\\n9949 6 2 2 8\\n\\nN W\\nN ©\\nOo w\\nVon\\n+? ah ®)\\nas\\noOo\\n©\\n\\n \\n\\x0c'\n\npytesseract.image_to_string((hard>240).all(axis=2), config='digits')\n# '10\\n2022\\n\\n99496228\\n\\n17\\n-\\n\\n \\n\\x0c'\n\n3- Don't use image_to_string\nUse image_to_data preferably.\nIt gives you bounding boxes of text.\nOr even image_to_boxes which give you digits one by one, with coordinates\nBecause image_to_string is for when you have a good old linear text in the image. image_to_data or image_to_boxes assumes that text is distributed all around, and give you piece of text with position.\nimage_to_string on such image may intervert what you would consider the logical order\n4- Select areas yourself\nSince it is an ad-hoc usage for a specific application, you know where the data are.\nFor example, your main matrix seems to be in area\nhard[740:1512, 132:910]\n\n\nSee\nprint(pytesseract.image_to_boxes((hard[740:1512, 132:910]<240).any(axis=2), config='digits'))\n\nNot only it avoids flooding you with irrelevant data. But also, tesseract performs better when called only with an image without other things than what you want to read.\nSeems to have almost all your digits here.\n5- Don't expect for miracles\nTesseract is one of the best OCR. But OCR are not a sure thing...\nSee what I get with this code (summarizing what I've said so far)\nhard=hard[740:1512, 132:910]\nhard=(hard<240).any(axis=2)\nboxes=[s.split(' ') for s in pytesseract.image_to_boxes(hard, config='digits').split('\\n')[:-1]]\nout=255*np.stack([hard, hard, hard], axis=2).astype(np.uint8)\nH=len(hard)\nfor b in boxes:\n cv2.putText(out, b[0], (30+int(b[1]), H-int(b[2])), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0), 2)\n\n\nAs you can see, result are fairly good. But there are 5 missing numbers. And one 3 was read as \"3.\".\nFor this kind of ad-hoc reading of an app, I wouldn't even use tesseract. I am pretty sure that, with trial and errors, you can easily learn to extract each digits box your self (there are linearly spaced in both dimension).\nAnd then, inside each box, well there are only 9 possible values. Should be quite easy, on a generated image, to find some easy criterions, such as the number of white pixels, number of white pixels in top area, ..., that permits a very simple classification\n",
"You might want to pre-process the image first. By applying a filter, you can, for example, get the contours of an image.\nThe basic idea of a filter, is to 'slide' some matrix of values over the image, and multiply every pixel value by the value inside the matrix. This process is called convolution.\nConvolution helps out here, because all irrelevant information is discarded, and thus it is made easier for tesseract to 'read' the image.\nThis might help you out: https://medium.com/swlh/image-processing-with-python-convolutional-filters-and-kernels-b9884d91a8fd\n"
] | [
1,
0
] | [] | [] | [
"computer_vision",
"opencv",
"python",
"python_tesseract",
"tesseract"
] | stackoverflow_0074674268_computer_vision_opencv_python_python_tesseract_tesseract.txt |
Q:
Django - migrate command not using latest migrations file
I have 5 migration files created, but when I run ./manage.py migrate
it always tries to apply migration file "3", even though the latest one is file 5.
How can I fix this issue?
I have tried:
./manage.py makemigrations app_name
./manage.py migrate app_name
./manage.py migrate --run-syncdb
Also, I checked the dbshell, and there is a table already created for the model which is part of migrations file 5.
A:
Simple thing: you didn't use the migration file value. The migration file value here is 0005, and you must specify that value when running sqlmigrate.
Use these three commands for migrations:
python manage.py makemigrations appname
python manage.py sqlmigrate appname 0005 #specified that migration file 5 value here
python manage.py migrate
Now migrations will apply on that migration file 5 using its value 0005
A:
I ended up deleting the migration file 3, which was getting picked up by Django, and added the operations of migration file 3 to the initial file.
Then when I ran migrate <app_name>, it picked up the last file (file 5). I did have to resolve some conflicts between file 3 and the preceding files, though.
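For reference, you can check which migrations Django has recorded as applied before deleting anything:
python manage.py showmigrations app_name
Applied migrations are listed with an [X], which makes it easier to see which file Django will try to run next.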
| Django - migrate command not using latest migrations file | I have 5 migration files created, but when I run ./manage.py migrate
it always tries to apply migration file "3", even though the latest one is file 5.
How can I fix this issue?
I have tried:
./manage.py makemigrations app_name
./manage.py migrate app_name
./manage.py migrate --run-syncdb
Also, I checked the dbshell, and there is a table already created for the model which is part of migrations file 5.
| [
"Simple thing, because you didn't use migration file value while doing makemigrations. And migration file value is 0005. You must specify that value while doing makemigrations.\nUse these three commands for migrations:\npython manage.py makemigrations appname\n\npython manage.py sqlmigrate appname 0005 #specified that migration file 5 value here\n\npython manage.py migrate\n\nNow migrations will apply on that migration file 5 using its value 0005\n",
"I ended up deleting the migration file 3 which was getting picked up by django and add the operations of migration file 3 to inital file.\nThen when I ran migrate <app_name>, it picked up the last file (file 5). I did have to resolve some conflicts between file 3 and preceding files though\n"
] | [
0,
0
] | [] | [] | [
"django",
"django_migrations",
"python"
] | stackoverflow_0074561280_django_django_migrations_python.txt |
Q:
Discord event on_member_join not working when a member joins the guild
I have an event in my discord bot that sends an embed to welcome a member when they join the guild. No errors are produced but the event does not seem to work for me.
Here is the code for the event:
@bot.event
async def on_member_join(member):
"""
The code in this event is executed every time a member joins the server
"""
embed = discord.embed(title=f'Welcome to {member.guild.name}',
description=f'{member.mention}, welcome to the server! \nMake sure to checkout the rules first. Enjoy your stay <3',
color=0x0061ff)
if member.guild.icon is not None:
embed.set_thumbnail(
url=member.guild.icon.url
)
await bot.get_channel(1047615507995562014).send(embed=embed)
I'm also using the following intents as well and have enabled them properly so I know that is not the issue with my code.
intents = discord.Intents.all()
intents.members = True
A:
The reason you're getting an error is that the e in discord.embed is lowercase:
embed = discord.embed(title=f'Welcome to {member.guild.name}',
description=f'{member.mention}, welcome to the server! \nMake sure to checkout the rules first. Enjoy your stay <3',
color=0x0061ff)
Correct version would, obviously, be:
embed = discord.Embed(title=f'Welcome to {member.guild.name}',
description=f'{member.mention}, welcome to the server! \nMake sure to checkout the rules first. Enjoy your stay <3',
color=0x0061ff)
A:
Your bot is probably missing a privileged intent. Go to your bot on the Discord developer portal, then turn the server members intent on.
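For completeness, a minimal sketch of the code side that pairs with that portal toggle (the portal switch is still required):
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True  # needs the "Server Members Intent" enabled in the portal too
bot = commands.Bot(command_prefix=";", intents=intents)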
| Discord event on_member_join not working when a member joins the guild | I have an event in my discord bot that sends an embed to welcome a member when they join the guild. No errors are produced but the event does not seem to work for me.
Here is the code for the event:
@bot.event
async def on_member_join(member):
"""
The code in this event is executed every time a member joins the server
"""
embed = discord.embed(title=f'Welcome to {member.guild.name}',
description=f'{member.mention}, welcome to the server! \nMake sure to checkout the rules first. Enjoy your stay <3',
color=0x0061ff)
if member.guild.icon is not None:
embed.set_thumbnail(
url=member.guild.icon.url
)
await bot.get_channel(1047615507995562014).send(embed=embed)
I'm also using the following intents as well and have enabled them properly so I know that is not the issue with my code.
intents = discord.Intents.all()
intents.members = True
| [
"The reason you're getting an Error, is because the e in discord.embed is lowercase\nembed = discord.embed(title=f'Welcome to {member.guild.name}',\n description=f'{member.mention}, welcome to the server! \\nMake sure to checkout the rules first. Enjoy your stay <3',\n color=0x0061ff)\n\nCorrect version would, obviously, be:\nembed = discord.Embed(title=f'Welcome to {member.guild.name}',\n description=f'{member.mention}, welcome to the server! \\nMake sure to checkout the rules first. Enjoy your stay <3',\n color=0x0061ff)\n\n",
"your bot is probably missing a privileged intent,go to your bot on the discord developer portal, then turn the server members intent on\n"
] | [
0,
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074663887_discord_discord.py_python.txt |
Q:
Switching Values in Dataframe with Lambda expression
I have a dataframe with 3 columns
For each TicketID, I want to iterate through all the other rows in the dataframe, searching for the TicketID as a String somewhere within the TicketStatus.
If we find a match for the ticketID within another row's TicketStatus, we will switch the Odds fields for that matching pair.
Example input:
TicketID, Odds, TicketStatus
123456, 4.0, 'Hello'
654799, 11.0, 'Yes 123456'
Example output:
TicketID, Odds, TicketStatus
123456, 11.0, 'Hello'
654799, 4.0, 'Yes 123456'
df['Odds'] = df.apply(lambda x: df.loc[df['TicketStatus'].str.contains(x['TicketID']), 'Odds'].values[0]
if df['TicketStatus'].str.contains(x['TicketID']).any()
else x['Odds'], axis=1)
I'm getting a TypeError: first argument must be string or compiled pattern, but can't figure out what I'm doing wrong.
A:
You can iterate through the rows of the dataframe and use the str.contains() method to check if the TicketStatus of a row contains a given TicketID. If a match is found, you can update the Odds value for both rows.
Here is an example implementation:
import pandas as pd
# define the function that will be applied to each row in the dataframe
def switch_odds(row):
# get the current TicketID and Odds for the current row
ticket_id = row['TicketID']
odds = row['Odds']
    # search for matches of the current TicketID in the TicketStatus of other rows
    # (cast to str: TicketID is numeric, but str.contains needs a string pattern)
    matches = df[df['TicketStatus'].str.contains(str(ticket_id))]
# iterate through the matching rows
for match in matches.itertuples():
# get the index of the matching row
index = match.Index
# switch the Odds values for the current row and the matching row
df.loc[index, 'Odds'] = odds
df.loc[row.name, 'Odds'] = match.Odds
# load the data into a dataframe
df = pd.DataFrame([
[123456, 4.0, 'Hello'],
[654799, 11.0, 'Yes 123456']
], columns=['TicketID', 'Odds', 'TicketStatus'])
# apply the function to each row in the dataframe
df.apply(switch_odds, axis=1)
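If the swap works as intended, printing df afterwards shows the desired output from the question:
print(df)

#    TicketID  Odds TicketStatus
# 0    123456  11.0        Hello
# 1    654799   4.0   Yes 123456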
| Switching Values in Dataframe with Lambda expression |
I have a dataframe with 3 columns
For each TicketID, I want to iterate through all the other rows in the dataframe, searching for the TicketID as a String somewhere within the TicketStatus.
If we find a match for the ticketID within another row's TicketStatus, we will switch the Odds fields for that matching pair.
Example input:
TicketID, Odds, TicketStatus
123456, 4.0, 'Hello'
654799, 11.0, 'Yes 123456'
Example output:
TicketID, Odds, TicketStatus
123456, 11.0, 'Hello'
654799, 4.0, 'Yes 123456'
df['Odds'] = df.apply(lambda x: df.loc[df['TicketStatus'].str.contains(x['TicketID']), 'Odds'].values[0]
if df['TicketStatus'].str.contains(x['TicketID']).any()
else x['Odds'], axis=1)
I'm getting a TypeError: first argument must be string or compiled pattern, but can't figure out what I'm doing wrong.
| [
"You can iterate through the rows of the dataframe and use the str.contains() method to check if the TicketStatus of a row contains a given TicketID. If a match is found, you can update the Odds value for both rows.\nHere is an example implementation:\nimport pandas as pd\n\n# define the function that will be applied to each row in the dataframe\ndef switch_odds(row):\n # get the current TicketID and Odds for the current row\n ticket_id = row['TicketID']\n odds = row['Odds']\n\n # search for matches of the current TicketID in the TicketStatus of other rows\n matches = df[df['TicketStatus'].str.contains(ticket_id)]\n\n # iterate through the matching rows\n for match in matches.itertuples():\n # get the index of the matching row\n index = match.Index\n\n # switch the Odds values for the current row and the matching row\n df.loc[index, 'Odds'] = odds\n df.loc[row.name, 'Odds'] = match.Odds\n\n# load the data into a dataframe\ndf = pd.DataFrame([\n [123456, 4.0, 'Hello'],\n [654799, 11.0, 'Yes 123456']\n], columns=['TicketID', 'Odds', 'TicketStatus'])\n\n# apply the function to each row in the dataframe\ndf.apply(switch_odds, axis=1)\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"lambda",
"python"
] | stackoverflow_0074674843_dataframe_lambda_python.txt |
Q:
How to solve "AttributeError: 'float' object has no attribute 'lower'"
I'm getting issues with my code and am unable to understand what to do next; can anyone help me out?
# Importing the libraries
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
import pickle
import re
# Importing the dataset
filename = "MoviePlots.csv"
data = pd.read_csv(filename, encoding= 'unicode_escape')
# Keeping only the neccessary columns
data = data[['Plot']]
# Clean the data
data['Plot'] = data['Plot'].apply(lambda x: x.lower())
data['Plot'] = data['Plot'].apply((lambda x: re.sub('[^a-zA-z0-9\s]', '', x)))
# Create the tokenizer
tokenizer = Tokenizer(num_words=5000, split=" ")
tokenizer.fit_on_texts(data['Plot'].values)
# Save the tokenizer
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Create the sequences
X = tokenizer.texts_to_sequences(data['Plot'].values)
X = pad_sequences(X)
# Create the model
model = Sequential()
model.add(Embedding(5000, 256, input_length=X.shape[1]))
model.add(Bidirectional(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))
model.add(LSTM(256, dropout=0.1, recurrent_dropout=0.1))
model.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(5000, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
# Train the model
model.fit(X, X, epochs=100, batch_size=128, verbose=1)
# Saving the model
model.save('visioniser.h5')
This is my code; the error is in the image attached.
Can anyone please help me diagnose and solve this problem?
A:
It appears that the error is happening with data['Plot'] = data['Plot'].apply(lambda x: x.lower()) (you are calling the apply function on a column of data -> one of the values in the column is not a string so it doesn't have the lower method)!
You could fix this by checking if the instance is actually of type string:
data['Plot'] = data['Plot'].apply(lambda x: x.lower() if isinstance(x, str) else x)
or instead of using a lambda function:
data['Plot'] = data['Plot'].str.lower(), since pandas' str.lower skips values that are not strings!
A:
It seems like your column Plot holds some NaN values (considered as float by pandas), hence the error. Try then to cast the column as str with pandas.Series.astype before calling pandas.Series.apply :
data['Plot'] = data['Plot'].astype(str).apply(lambda x: x.lower())
Or simply use pandas.Series.str.lower :
data['Plot'] = data['Plot'].astype(str).str.lower()
The same goes with re.sub, you could use pandas.Series.replace :
data['Plot'] = data['Plot'].astype(str).replace(r'[^a-zA-z0-9\s]', '', regex=True)
| How to solve "AttributeError: 'float' object has no attribute 'lower'" | I'm getting issues with my code and am unable to understand what to do next; can anyone help me out?
# Importing the libraries
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
import pickle
import re
# Importing the dataset
filename = "MoviePlots.csv"
data = pd.read_csv(filename, encoding= 'unicode_escape')
# Keeping only the neccessary columns
data = data[['Plot']]
# Clean the data
data['Plot'] = data['Plot'].apply(lambda x: x.lower())
data['Plot'] = data['Plot'].apply((lambda x: re.sub('[^a-zA-z0-9\s]', '', x)))
# Create the tokenizer
tokenizer = Tokenizer(num_words=5000, split=" ")
tokenizer.fit_on_texts(data['Plot'].values)
# Save the tokenizer
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Create the sequences
X = tokenizer.texts_to_sequences(data['Plot'].values)
X = pad_sequences(X)
# Create the model
model = Sequential()
model.add(Embedding(5000, 256, input_length=X.shape[1]))
model.add(Bidirectional(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))
model.add(LSTM(256, dropout=0.1, recurrent_dropout=0.1))
model.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(5000, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
# Train the model
model.fit(X, X, epochs=100, batch_size=128, verbose=1)
# Saving the model
model.save('visioniser.h5')
This is my code; the error is in the image attached.
Can anyone please help me diagnose and solve this problem?
| [
"It appears that the error is happening with data['Plot'] = data['Plot'].apply(lambda x: x.lower()) (you are calling the apply function on a column of data -> one of the values in the column is not a string so it doesn't have the lower method)!\nYou could fix this by checking if the instance is actually of type string:\ndata['Plot'] = data['Plot'].apply(lambda x: x.lower() if isinstance(x, str) else x) \nor instead of using a lambda function:\ndata['Plot'] = data['Plot'].str.lower() whereas panda´s str.lower skips values that are not strings!\n",
"It seems like your column Plot holds some NaN values (considered as float by pandas), hence the error. Try then to cast the column as str with pandas.Series.astype before calling pandas.Series.apply :\ndata['Plot'] = data['Plot'].astype(str).apply(lambda x: x.lower())\n\nOr simply use pandas.Series.str.lower :\ndata['Plot'] = data['Plot'].astype(str).str.lower()\n\nThe same goes with re.sub, you could use pandas.Series.replace :\ndata['Plot'] = data['Plot'].astype(str).replace(r'[^a-zA-z0-9\\s]', '', regex=True)\n\n"
] | [
0,
0
] | [] | [] | [
"artificial_intelligence",
"attributeerror",
"pandas",
"python"
] | stackoverflow_0074674913_artificial_intelligence_attributeerror_pandas_python.txt |
Q:
Naive bayes with python but seperated two file as 'trainset.csv' and 'testset.csv'
I need to apply the naive Bayes algorithm with these files, but when I search for the algorithm, every example uses one CSV file and manually separates the train and test set. I already have 2 CSV files for the train and test set; how can I apply a naive Bayes algorithm?
I tried to use sklearn's train_test_split (without the test_size argument) but I could not get anywhere.
A:
It sounds like you want to use the Naive Bayes algorithm to train a model on one CSV file and then test that model using another CSV file. If that's the case, you can use the pandas library to load the two CSV files into separate dataframes, and then use the sklearn library to train a Naive Bayes model using the first dataframe and evaluate the model using the second dataframe.
Here's an example of how you might do this:
# Load the training and test data into separate dataframes using pandas
import pandas as pd
train_df = pd.read_csv("trainset.csv")
test_df = pd.read_csv("testset.csv")
# Extract the input features and target labels from the training data
X_train = train_df.drop("target_label", axis=1)
y_train = train_df["target_label"]
# Extract the input features and target labels from the test data
X_test = test_df.drop("target_label", axis=1)
y_test = test_df["target_label"]
# Import the GaussianNB classifier from sklearn
from sklearn.naive_bayes import GaussianNB
# Create a GaussianNB classifier instance
nb = GaussianNB()
# Train the classifier using the training data
nb.fit(X_train, y_train)
# Use the trained classifier to make predictions on the test data
y_pred = nb.predict(X_test)
Once you have made predictions using the trained model, you can evaluate the model's performance using the sklearn.metrics module, which provides a variety of metrics for evaluating machine learning models. For example, you could use the accuracy_score function to calculate the accuracy of the model on the test data, like this:
from sklearn.metrics import accuracy_score
# Calculate the accuracy of the model on the test data
accuracy = accuracy_score(y_test, y_pred)
print("Model accuracy: {:.2f}%".format(accuracy * 100))
| Naive bayes with python but seperated two file as 'trainset.csv' and 'testset.csv' | I need to apply the naive Bayes algorithm with these files, but when I search for the algorithm, every example uses one CSV file and manually separates the train and test set. I already have 2 CSV files for the train and test set; how can I apply a naive Bayes algorithm?
I tried to use sklearn's train_test_split (without the test_size argument) but I could not get anywhere.
| [
"It sounds like you want to use the Naive Bayes algorithm to train a model on one CSV file and then test that model using another CSV file. If that's the case, you can use the pandas library to load the two CSV files into separate dataframes, and then use the sklearn library to train a Naive Bayes model using the first dataframe and evaluate the model using the second dataframe.\nHere's an example of how you might do this:\n# Load the training and test data into separate dataframes using pandas\nimport pandas as pd\ntrain_df = pd.read_csv(\"trainset.csv\")\ntest_df = pd.read_csv(\"testset.csv\")\n\n# Extract the input features and target labels from the training data\nX_train = train_df.drop(\"target_label\", axis=1)\ny_train = train_df[\"target_label\"]\n\n# Extract the input features from the test data\nX_test = test_df.drop(\"target_label\", axis=1)\n\n# Import the GaussianNB classifier from sklearn\nfrom sklearn.naive_bayes import GaussianNB\n\n# Create a GaussianNB classifier instance\nnb = GaussianNB()\n\n# Train the classifier using the training data\nnb.fit(X_train, y_train)\n\n# Use the trained classifier to make predictions on the test data\ny_pred = nb.predict(X_test)\n\nOnce you have made predictions using the trained model, you can evaluate the model's performance using the sklearn.metrics module, which provides a variety of metrics for evaluating machine learning models. For example, you could use the accuracy_score function to calculate the accuracy of the model on the test data, like this:\nfrom sklearn.metrics import accuracy_score\n\n# Calculate the accuracy of the model on the test data\naccuracy = accuracy_score(y_test, y_pred)\nprint(\"Model accuracy: {:.2f}%\".format(accuracy * 100))\n\n"
] | [
0
] | [] | [] | [
"naivebayes",
"python"
] | stackoverflow_0074674912_naivebayes_python.txt |
Q:
Filter Pyspark Dataframe column based on whether it contains or does not contain substring
I have a pyspark dataframe message_df with millions of rows that looks like this
+-----+---------------------------+
|id   |message                    |
+-----+---------------------------+
|ab123|Hello my name is Chris     |
|cd345|The room should be 2301    |
|ef567|Welcome! What is your name?|
|gh873|That way please            |
|kj893|The current year is 2022   |
+-----+---------------------------+
and two lists
wanted_words = ['name','room']
unwanted_words = ['welcome','year']
I only want to get rows where message contains any of the words in wanted_words and does not contain any of the words in unwanted_words, hence the result should be:
+-----+------------------------+
|id   |message                 |
+-----+------------------------+
|ab123|Hello my name is Chris  |
|cd345|The room should be 2301 |
+-----+------------------------+
As of right now I am doing it word by word
message_df.select(lower(F.col('message'))).filter(
(
F.col('lower(message)').contains('name') |
F.col('lower(message)').contains('room')
) & (
~F.col('lower(message)').contains('welcome') &
~F.col('lower(message)').contains('year')
)
)
Which is very tedious to code. However, when I instead use rlike:
wanted_words ="(name|room)"
unwanted_words ="(welcome|year)"
message_df.select(lower(F.col('message'))).filter(
    ~F.col('lower(message)').rlike(unwanted_words) &
    F.col('lower(message)').rlike(wanted_words)
)
The process slows down immensely. Is this because rlike is significantly slower, and if so, what is a better way of filtering when wanted_words and unwanted_words may contain hundreds of words?
A:
Split the text into tokens/words and use the arrays_overlap function to check whether a wanted or unwanted token is present:
df = df.filter(
(
F.arrays_overlap(
F.split(F.regexp_replace(F.lower("message"), r"[^a-zA-Z0-9\s]+", ""), "\s+"),
F.array([F.lit(c) for c in wanted_words])
)
)
&
(
~F.arrays_overlap(
F.split(F.regexp_replace(F.lower("message"), r"[^a-zA-Z0-9\s]+", ""), "\s+"),
F.array([F.lit(c) for c in unwanted_words])
)
)
)
Full example:
columns = ["id","message"]
data = [["ab123","Hello my name is Chris"],["cd345","The room should be 2301"],["ef567","Welcome! What is your name?"],["gh873","That way please"],["kj893","The current year is 2022"]]
df = spark.createDataFrame(data).toDF(*columns)
wanted_words = ['name','room']
unwanted_words = ['welcome','year']
df = df.filter(
(
F.arrays_overlap(
F.split(F.regexp_replace(F.lower("message"), r"[^a-zA-Z0-9\s]+", ""), "\s+"),
F.array([F.lit(c) for c in wanted_words])
)
)
&
(
~F.arrays_overlap(
F.split(F.regexp_replace(F.lower("message"), r"[^a-zA-Z0-9\s]+", ""), "\s+"),
F.array([F.lit(c) for c in unwanted_words])
)
)
)
[Out]:
+-----+------------------------+
|id |message |
+-----+------------------------+
|ab123|Hello my name is Chris |
|cd345|The room should be 2301 |
+-----+------------------------+
You can also pre-compute the tokens at once for efficiency:
df = df.withColumn("tokens", F.split(F.regexp_replace(F.lower("message"), r"[^a-zA-Z0-9\s]+", ""), "\s+"))
and use in "arrays_overlap":
F.arrays_overlap(F.col("tokens"), F.array([F.lit(c) for c in wanted_words]))
| Filter Pyspark Dataframe column based on whether it contains or does not contain substring | I have a pyspark dataframe message_df with millions of rows that looks like this
+-----+---------------------------+
|id   |message                    |
+-----+---------------------------+
|ab123|Hello my name is Chris     |
|cd345|The room should be 2301    |
|ef567|Welcome! What is your name?|
|gh873|That way please            |
|kj893|The current year is 2022   |
+-----+---------------------------+
and two lists
wanted_words = ['name','room']
unwanted_words = ['welcome','year']
I only want to get rows where message contains any of the words in wanted_words and does not contain any of the words in unwanted_words, hence the result should be:
+-----+------------------------+
|id   |message                 |
+-----+------------------------+
|ab123|Hello my name is Chris  |
|cd345|The room should be 2301 |
+-----+------------------------+
As of right now I am doing it word by word
message_df.select(lower(F.col('message'))).filter(
(
F.col('lower(message)').contains('name') |
F.col('lower(message)').contains('room')
) & (
~F.col('lower(message)').contains('welcome') &
~F.col('lower(message)').contains('year')
)
)
Which is very tedious to code. However, when I instead use rlike:
wanted_words ="(name|room)"
unwanted_words ="(welcome|year)"
message_df.select(lower(F.col('message'))).filter(
    ~F.col('lower(message)').rlike(unwanted_words) &
    F.col('lower(message)').rlike(wanted_words)
)
The process slows down immensely. Is this because rlike is significantly slower, and if so, what is a better way of filtering when wanted_words and unwanted_words may contain hundreds of words?
| [
"Split text into tokens/words and use arrays_overlap function to check if wanted or unwanted token is present:\ndf = df.filter(\n (\n F.arrays_overlap(\n F.split(F.regexp_replace(F.lower(\"message\"), r\"[^a-zA-Z0-9\\s]+\", \"\"), \"\\s+\"),\n F.array([F.lit(c) for c in wanted_words])\n )\n )\n & \n (\n ~F.arrays_overlap(\n F.split(F.regexp_replace(F.lower(\"message\"), r\"[^a-zA-Z0-9\\s]+\", \"\"), \"\\s+\"),\n F.array([F.lit(c) for c in unwanted_words])\n )\n )\n)\n\nFull example:\ncolumns = [\"id\",\"message\"]\ndata = [[\"ab123\",\"Hello my name is Chris\"],[\"cd345\",\"The room should be 2301\"],[\"ef567\",\"Welcome! What is your name?\"],[\"gh873\",\"That way please\"],[\"kj893\",\"The current year is 2022\"]]\ndf = spark.createDataFrame(data).toDF(*columns)\n\nwanted_words = ['name','room']\nunwanted_words = ['welcome','year']\n\ndf = df.filter(\n (\n F.arrays_overlap(\n F.split(F.regexp_replace(F.lower(\"message\"), r\"[^a-zA-Z0-9\\s]+\", \"\"), \"\\s+\"),\n F.array([F.lit(c) for c in wanted_words])\n )\n )\n & \n (\n ~F.arrays_overlap(\n F.split(F.regexp_replace(F.lower(\"message\"), r\"[^a-zA-Z0-9\\s]+\", \"\"), \"\\s+\"),\n F.array([F.lit(c) for c in unwanted_words])\n )\n )\n)\n\n[Out]:\n+-----+------------------------+\n|id |message |\n+-----+------------------------+\n|ab123|Hello my name is Chris |\n|cd345|The room should be 2301 |\n+-----+------------------------+\n\nYou can also pre-compute the tokens at once for efficiency:\ndf = df.withColumn(\"tokens\", F.split(F.regexp_replace(F.lower(\"message\"), r\"[^a-zA-Z0-9\\s]+\", \"\"), \"\\s+\"))\n\nand use in \"arrays_overlap\":\nF.arrays_overlap(F.col(\"tokens\"), F.array([F.lit(c) for c in wanted_words]))\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pyspark",
"python"
] | stackoverflow_0074668162_dataframe_pyspark_python.txt |
Q:
Python Write every Nth Filename from Folder to a Text File
Hello I am trying to write every odd and then even filename from a Folder to a text file.
import os
TXT = "C:/Users/Admin/Documents/combine.txt"
# Collects Files
with open(TXT, "w") as a:
for path, subdirs, files in os.walk(r'C:\Users\Admin\Desktop\combine'):
for filename in files:
f = os.path.join(path, filename)
a.write("file '" + str(f) + "'" + '\n')
example filenames: 01, 02, 03, 04, 05, 06, 07, 08, 09 .png
wanted results:
file 'C:\Users\Admin\Desktop\combine\01.png'
file 'C:\Users\Admin\Desktop\combine\03.png'
file 'C:\Users\Admin\Desktop\combine\05.png'
file 'C:\Users\Admin\Desktop\combine\07.png'
file 'C:\Users\Admin\Desktop\combine\09.png'
file 'C:\Users\Admin\Desktop\combine\02.png'
file 'C:\Users\Admin\Desktop\combine\04.png'
file 'C:\Users\Admin\Desktop\combine\06.png'
file 'C:\Users\Admin\Desktop\combine\08.png'
A:
As suggested by Mitchell van Zuylen, and Tomerikoo, you could use slicing and listdir to produce your desired output:
Code:
import os
N = 2 # every 2nd filename
combine_txt = r"C:\Users\Admin\Documents\combine.txt"  # raw strings: "\U" would otherwise start an escape
folder_of_interest = r'C:\Users\Admin\Desktop\combine'
files = sorted(os.listdir(folder_of_interest))
files = [f for f in files if f.endswith('.png')] #only select .png files
with open(combine_txt, "w") as a:
for i in range(N):
for f in files[i::N]:
a.write(f"file '{folder_of_interest}\\{f}'\n")
Output:
combine.txt
file 'C:\Users\Admin\Desktop\combine\01.png'
file 'C:\Users\Admin\Desktop\combine\03.png'
file 'C:\Users\Admin\Desktop\combine\05.png'
file 'C:\Users\Admin\Desktop\combine\07.png'
file 'C:\Users\Admin\Desktop\combine\09.png'
file 'C:\Users\Admin\Desktop\combine\02.png'
file 'C:\Users\Admin\Desktop\combine\04.png'
file 'C:\Users\Admin\Desktop\combine\06.png'
file 'C:\Users\Admin\Desktop\combine\08.png'
Note:
You can change N = 2 to N = 3 to list every 3rd filename in the same way.
| Python Write every Nth Filename from Folder to a Text File | Hello I am trying to write every odd and then even filename from a Folder to a text file.
import os
TXT = "C:/Users/Admin/Documents/combine.txt"
# Collects Files
with open(TXT, "w") as a:
for path, subdirs, files in os.walk(r'C:\Users\Admin\Desktop\combine'):
for filename in files:
f = os.path.join(path, filename)
a.write("file '" + str(f) + "'" + '\n')
example filenames: 01, 02, 03, 04, 05, 06, 07, 08, 09 .png
wanted results:
file 'C:\Users\Admin\Desktop\combine\01.png'
file 'C:\Users\Admin\Desktop\combine\03.png'
file 'C:\Users\Admin\Desktop\combine\05.png'
file 'C:\Users\Admin\Desktop\combine\07.png'
file 'C:\Users\Admin\Desktop\combine\09.png'
file 'C:\Users\Admin\Desktop\combine\02.png'
file 'C:\Users\Admin\Desktop\combine\04.png'
file 'C:\Users\Admin\Desktop\combine\06.png'
file 'C:\Users\Admin\Desktop\combine\08.png'
| [
"As suggested by Mitchell van Zuylen, and Tomerikoo, you could use slicing and listdir to produce your desired output:\nCode:\nimport os\n\nN = 2 # every 2nd filename\n\ncombine_txt = \"C:\\Users\\Admin\\Documents\\combine.txt\"\nfolder_of_interest = 'C:\\Users\\Admin\\Desktop\\combine'\n\nfiles = sorted(os.listdir(folder_of_interest))\nfiles = [f for f in files if f.endswith('.png')] #only select .png files\n\nwith open(combine_txt, \"w\") as a:\n for i in range(N):\n for f in files[i::N]:\n a.write(f\"file '{folder_of_interest}\\\\{f}'\\n\")\n\nOutput:\ncombine.txt\nfile 'C:\\Users\\Admin\\Desktop\\combine\\01.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\03.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\05.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\07.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\09.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\02.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\04.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\06.png'\nfile 'C:\\Users\\Admin\\Desktop\\combine\\08.png'\n\nNote:\nYou can change N = 2 to N = 3 to list every 3rd filename in the same way.\n"
] | [
0
] | [] | [] | [
"filenames",
"iteration",
"python",
"subdirectory"
] | stackoverflow_0074674464_filenames_iteration_python_subdirectory.txt |
Q:
increment the name of the variable, ex: prdt1, prdt2, prdt3 ...etc
I didn't try anything because I don't even know where to start...
the program would associate every item of the list to variables like (name)1, (name)2, (name)3, and so on, up to the number of items the list has.
prdt = ["WD40", "001", "oleo de carro, 1L", "liquidos", "seccao 1", 5, 30]
prdt1 ="WD40"
prdt2 ="001"
prdt3 ="oleo de carro, 1L"
prdt4 ="liquidos"
a program that creates a variable incremented by 1 in a for loop.
A:
Basically, with Python 3.8+ you can use eval and the walrus operator to achieve this behaviour, generating variables named prdt1, prdt2, and so on:
for idx, item in enumerate(prdt, start=1):
    eval(f"(prdt{idx} := {item!r})")

If you look at this weird syntax in eval, it's the walrus operator := combined with parentheses, all inside an f-string. A very unreadable and ugly solution imho, but eval only allows for expressions, NOT compound statements (so you cannot use the regular assignment with = ).
Note that this only works at module level (eval cannot rebind a function's locals), and the {item!r} quoting is what keeps the int values and the values with spaces in your list from breaking the generated expression. Overall, sorry to say, this question does not make too much sense to be honest.
But in general it sounds like a terrible idea to be honest (whatever is your usecase). If you need to associate specific names with values you should use dict probably.
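For illustration, a dict version could look like:
prdt = ["WD40", "001", "oleo de carro, 1L", "liquidos", "seccao 1", 5, 30]
products = {f"prdt{i}": item for i, item in enumerate(prdt, start=1)}
print(products["prdt3"])  # 'oleo de carro, 1L'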
| increment the name of the variable, ex: prdt1, prdt2, prdt3 ...etc | I didn't try anything because I don't even know where to start...
the program would associate every item of the list to variables like (name)1, (name)2, (name)3, and so on, up to the number of items the list has.
prdt = ["WD40", "001", "oleo de carro, 1L", "liquidos", "seccao 1", 5, 30]
prdt1 ="WD40"
prdt2 ="001"
prdt3 ="oleo de carro, 1L"
prdt4 ="liquidos"
a program that creates a variable incremented by 1 in a for loop.
| [
"Basically with python version above 3.8 you can use eval and walrus operator in order to achieve this behaviour. You will get variables with names corresponding to your list items\nfor idx, item in enumerate(prdt):\n eval(f\"({item}{idx}:={item})\")\n\nIf you look at this weird syntax in eval it's walrus operator := combined with a parenthesis and all that in a f-string. Very unreadable and ugly solution imho, but eval only allows for expressions, NOT compound statements (so you cannot use the regular assignment with = ).\nAnd you have int values in your list, which will cause the above code to fail, since var name in python cannot be an int... 5=5 is not a legal code, neither are values with spaces...Overall sorry to say, but this question does not make too much sense to be honest.\nBut in general it sounds like a terrible idea to be honest (whatever is your usecase). If you need to associate specific names with values you should use dict probably.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074674932_python.txt |
Q:
invert 2 items in a list Python
I'm building a Formula 1 race simulator in Python and I'm trying to make an overtake function. Basically, I have all the drivers stored in a list, and once one of the drivers surpasses another I need to swap their positions in the list:
['hamilton','verstappen','perez','sainz']
['hamilton','perez','verstappen','sainz']
is there any way to do so?
Until now I have tried to store the original positions in temporary variables, but I keep finding myself with duplicates in the list.
original temporary variables
overtaken temp = Valteri Bottas
overtaker temp = Nicholas Latifi
after the inverting
overtaken temp = Valteri Bottas
overtaker temp = Valteri Bottas
A:
A simple overtake function:
def overtake_driver(drivers, overtaker, overtaken):
# Find the indices of the overtaker and the overtaken in the list of drivers
overtaker_index = drivers.index(overtaker)
overtaken_index = drivers.index(overtaken)
# Swap the positions of the overtaker and the overtaken in the list of drivers
drivers[overtaker_index], drivers[overtaken_index] = drivers[overtaken_index], drivers[overtaker_index]
# Return the updated list of drivers
return drivers
Basically, fetch the indices of the respective drivers and just swap them in a double assignment!
A:
A lazy way to do it.
a = ['hamilton','verstappen','perez','sainz']

a.remove('verstappen')

a.insert(2, 'verstappen')
print(a)
#['hamilton','perez','verstappen','sainz']
A:
I solved the problem; it was much easier than I thought:
grid[0], grid[1] = grid[1], grid[0]
| invert 2 items in a list Python | I'm building a Formula 1 race simulator in Python and I'm trying to make an overtake function. Basically, I have all the drivers stored in a list, and once one of the drivers surpasses another I need to swap their positions in the list:
['hamilton','verstappen','perez','sainz']
['hamilton','perez','verstappen','sainz']
is there any way to do so?
Until now I have tried to store the original positions in temporary variables, but I keep finding myself with duplicates in the list.
original temporary variables
overtaken temp = Valteri Bottas
overtaker temp = Nicholas Latifi
after the inverting
overtaken temp = Valteri Bottas
overtaker temp = Valteri Bottas
| [
"A simple overtake function:\ndef overtake_driver(drivers, overtaker, overtaken):\n # Find the indices of the overtaker and the overtaken in the list of drivers\n overtaker_index = drivers.index(overtaker)\n overtaken_index = drivers.index(overtaken)\n\n # Swap the positions of the overtaker and the overtaken in the list of drivers\n drivers[overtaker_index], drivers[overtaken_index] = drivers[overtaken_index], drivers[overtaker_index]\n\n # Return the updated list of drivers\n return drivers\n\nBasically, fetch the indices of the respective drivers and just swap them in a double assignment!\n",
"A lazy way to do it.\na = ['hamilton','vertsappen','perez','sainz']\n\na.remove('vertsappen')\n\na.insert(2, 'vertsappen')\n\nprint(a)\n\n#['hamilton','perez','verstappen','sainz']\n\n",
"i solved the problem, it was much easier then i thought:\ngrid[0], grid[1] = grid[1] = grid[0]\n\n"
] | [
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074674950_python.txt |
Q:
Box-box collision detection with PyOpenGL and Pygame at 3d
I'm writing a player class that, among other things, has mesh attributes (I use the py3d library and the mesh class from it) and collider (a class that I need to implement myself). The collider is a simple cube and should have a method to determine whether it collided with another collider-cube or not. I have a class that allows you to rotate and move 3d objects, I inherit the collider from it. The main problem is precisely to write a collision check function
I tried to use the methods built into Pygame to detect collisions, but it didn't work, because when the camera moves away, the collider stays the same size, and it can't be rotated. I'm bad at math, and all the guides I found were in C. (game example image)
A:
One way to detect box-box collisions in 3D using PyOpenGL and Pygame is to use the Bullet physics engine. Bullet is a 3D physics engine that can be used to detect collisions, apply forces, and simulate the motion of rigid bodies. To use Bullet, you would need to implement the collider class as a Bullet body, and then use the Bullet functions to detect collisions between the collider objects. You can also use the Bullet functions to rotate and move the colliders, which will allow you to keep the same size collider regardless of the camera position.
Here's a link to a tutorial on how to integrate bullet
http://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-a-physics-library/
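If you'd rather avoid a full physics engine, axis-aligned bounding boxes (AABB) are enough for unrotated cube colliders; a minimal sketch, assuming the world-space corners are updated whenever the object moves:
class AABB:
    def __init__(self, min_corner, max_corner):
        self.min = min_corner  # (x, y, z)
        self.max = max_corner  # (x, y, z)

    def intersects(self, other):
        # Two boxes overlap iff their intervals overlap on every axis
        return all(self.min[i] <= other.max[i] and self.max[i] >= other.min[i]
                   for i in range(3))

a = AABB((0, 0, 0), (1, 1, 1))
b = AABB((0.5, 0.5, 0.5), (2, 2, 2))
print(a.intersects(b))  # True
Note that an AABB ignores rotation; for rotated cubes you would either recompute an enclosing AABB after each rotation or move to an OBB/separating-axis test.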
| Box-box collision detection with PyOpenGL and Pygame at 3d | I'm writing a player class that, among other things, has mesh attributes (I use the py3d library and the mesh class from it) and collider (a class that I need to implement myself). The collider is a simple cube and should have a method to determine whether it collided with another collider-cube or not. I have a class that allows you to rotate and move 3d objects, I inherit the collider from it. The main problem is precisely to write a collision check function
I tried to use the methods built into Pygame to detect collisions, but it didn't work, because when the camera moves away, the collider stays the same size, and it can't be rotated. I'm bad at math, and all the guides I found were in C. (game example image)
| [
"One way to detect box-box collisions in 3D using PyOpenGL and Pygame is to use the Bullet physics engine. Bullet is a 3D physics engine that can be used to detect collisions, apply forces, and simulate the motion of rigid bodies. To use Bullet, you would need to implement the collider class as a Bullet body, and then use the Bullet functions to detect collisions between the collider objects. You can also use the Bullet functions to rotate and move the colliders, which will allow you to keep the same size collider regardless of the camera position.\nHere's a link to a tutorial on how to integrate bullet\nhttp://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-a-physics-library/\n"
] | [
0
] | [] | [] | [
"collision_detection",
"pyopengl",
"python"
] | stackoverflow_0074674884_collision_detection_pyopengl_python.txt |
Q:
I'm creating discord bot that plays audio but i got this eror "discord.ext.commands.errors.CommandNotFound: Command "join" is not found"
I'm creating a discord bot that plays audio, but I got this error: "discord.ext.commands.errors.CommandNotFound: Command "join" is not found"
Here is my code:
music.py
import discord
from discord.ext import commands
import youtube_dl
class music(commands.Cog):
def __init__(self, client):
self.client = client
@commands.command()
async def join(self, ctx):
if ctx.author.voice is None:
await ctx.send("Et ole puhelussa vitun apina!")
voice_channel = ctx.author.voice.channel
if ctx.voice_client is None:
await voice_channel.connect()
else:
await ctx.voice_client.move_to(voice_channel)
@commands.command()
async def disconnect(self,ctx):
await ctx.voice_client.disconnect()
@commands.command()
async def play(self,ctx,url):
ctx.voice_client.stop()
FNPEG_OPTIONS = {'before_optopms': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay max 5', 'options': '-vn'}
YDL_OPTIONS = {'format': "bestaudio"}
vc = ctx.voice_client
with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl:
info = ydl.extract_info(url, download=False)
url2 = info['formats'][0]['url']
source = await discord.FFmpegOpusAudio.from_probe(url2,**FNPEG_OPTIONS)
vc.play(source)
@commands.command()
async def pause(self, ctx):
await ctx.voice_client.pause()
await ctx.send("MUSIIKKI PYSÄYTETTY")
@commands.command()
async def resume(self, ctx):
await ctx.voice_client.resume()
await ctx.send("MUSIIKKI JATKUU")
def setup(client):
client.add_cog(music(client))
run.py
import discord
from discord.ext import commands
import music
cogs = [music]
client = commands.Bot(command_prefix="?",
intents = discord.Intents.all())
for i in range(len(cogs)):
cogs[i].setup(client)
client.run("YOUR_BOT_TOKEN")
I tried the commands but they are not working; it only says "discord.ext.commands.errors.CommandNotFound: Command "command I was trying" is not found"
A:
As of discord.py 2, the add_cog method has become an async function, so you need to await it. And if you're using a cog from another file, it is suggested to use load_extension to load it. For example:
cogs/music.py
class Music(commands.Cog):
...
# as of discord.py 2, this function needs to be an async function
async def setup(bot):
await bot.add_cog(Music(bot))
main.py
# subclass the bot to override the "setup_hook" method
class MyBot(commands.Bot):
async def setup_hook(self):
cogs_to_load = ("cogs.music",) # tuple of paths to the cogs you wanted to load
# use "load_extension" to load all the cogs
for cog in cogs_to_load:
await self.load_extension(cog)
bot = MyBot()
...
Here is an example of extensions, and here is an example of cogs.
| I'm creating discord bot that plays audio but i got this eror "discord.ext.commands.errors.CommandNotFound: Command "join" is not found" | I'm creating a discord bot that plays audio, but I got this error: "discord.ext.commands.errors.CommandNotFound: Command "join" is not found"
Here is my code:
music.py
import discord
from discord.ext import commands
import youtube_dl
class music(commands.Cog):
def __init__(self, client):
self.client = client
@commands.command()
async def join(self, ctx):
if ctx.author.voice is None:
await ctx.send("Et ole puhelussa vitun apina!")
voice_channel = ctx.author.voice.channel
if ctx.voice_client is None:
await voice_channel.connect()
else:
await ctx.voice_client.move_to(voice_channel)
@commands.command()
async def disconnect(self,ctx):
await ctx.voice_client.disconnect()
@commands.command()
async def play(self,ctx,url):
ctx.voice_client.stop()
FNPEG_OPTIONS = {'before_optopms': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay max 5', 'options': '-vn'}
YDL_OPTIONS = {'format': "bestaudio"}
vc = ctx.voice_client
with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl:
info = ydl.extract_info(url, download=False)
url2 = info['formats'][0]['url']
source = await discord.FFmpegOpusAudio.from_probe(url2,**FNPEG_OPTIONS)
vc.play(source)
@commands.command()
async def pause(self, ctx):
await ctx.voice_client.pause()
await ctx.send("MUSIIKKI PYSÄYTETTY")
@commands.command()
async def resume(self, ctx):
await ctx.voice_client.resume()
await ctx.send("MUSIIKKI JATKUU")
def setup(client):
client.add_cog(music(client))
run.py
import discord
from discord.ext import commands
import music
cogs = [music]
client = commands.Bot(command_prefix="?",
intents = discord.Intents.all())
for i in range(len(cogs)):
cogs[i].setup(client)
client.run("YOUR_BOT_TOKEN")
I tried the commands but they are not working; it only says "discord.ext.commands.errors.CommandNotFound: Command "command I was trying" is not found"
| [
"As of discord.py 2, the add_cog method has become an async function, so you need to await it. And if you're using a cog from a other file, it is suggested to use load_extension to load it. For example:\ncogs/music.py\nclass Music(commands.Cog):\n ...\n\n# as of discord.py 2, this function needs to be an async function\nasync def setup(bot): \n await bot.add_cog(Music(bot))\n\nmain.py\n# subclass the bot to override the \"setup_hook\" method\nclass MyBot(commands.Bot):\n async def setup_hook(self):\n cogs_to_load = (\"cogs.music\",) # tuple of paths to the cogs you wanted to load\n\n # use \"load_extension\" to load all the cogs\n for cog in cogs_to_load:\n await self.load_extension(cog)\n\nbot = MyBot()\n\n...\n\nHere is an example of extensions, and here is an example of cogs.\n"
] | [
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074674862_discord_discord.py_python.txt |
Q:
Sorting Nested Lists with Various Elements
I have a nested list like:
[["bla","blabla","x=17"],["bla","x=13","z=13","blabla"],["x=27","blabla","bla","y=24"]]
I need to have this sorted by x (from least to most) as (other strings should stay where they are):
[["bla","x=13","z=13","blabla"],["bla","blabla","x=17"],["x=27","blabla","bla","y=24"]]
and also from most to least:
[["x=27","blabla","bla","y=24"],["bla","blabla","x=17"],["bla","x=13","z=13","blabla"]]
I think I have to use key=lambda but I just couldn't figure out how to do it. Searched through the web and this website but I just can't do it.
A:
Given your list:
sort_this_list = [
["bla","blabla","x=17"],
["bla","x=13","z=13","blabla"],
["x=27","blabla","bla","y=24"]
]
First, extract the x element from the respective list!
def get_x(list):
# Iterate over the items in the given list
for item in list:
# Check if the item starts with "x="
if item.startswith("x="):
# Extract the value of x and return it as an integer
return int(item.split("=")[1])
Now you can sort it via sorted_ascending = sorted(sort_this_list, key=get_x) (look up the sorted(..) function; this returns the list in ascending order, as you requested).
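Putting it together, a small sketch for both directions (this assumes every sub-list contains exactly one "x=" entry):
sorted_ascending = sorted(sort_this_list, key=get_x)
sorted_descending = sorted(sort_this_list, key=get_x, reverse=True)

print(sorted_ascending[0])   # ['bla', 'x=13', 'z=13', 'blabla']
print(sorted_descending[0])  # ['x=27', 'blabla', 'bla', 'y=24']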
A:
Just for the sake of it, here's one with a lambda function:
mylist = [["bla","blabla","x=17"],["bla","x=13","z=13","blabla"],["x=27","blabla","bla","y=24"]]
mylist.sort(key = lambda l:int([item for item in l if 'x=' in item][0].split('=')[1]), reverse = True)
# [['x=27', 'blabla', 'bla', 'y=24'],
# ['bla', 'blabla', 'x=17'],
# ['bla', 'x=13', 'z=13', 'blabla']]
| Sorting Nested Lists with Various Elements | I have a nested list like:
[["bla","blabla","x=17"],["bla","x=13","z=13","blabla"],["x=27","blabla","bla","y=24"]]
I need to have this sorted by x (from least to most) as (other strings should stay where they are):
[["bla","x=13","z=13","blabla"],["bla","blabla","x=17"],["x=27","blabla","bla","y=24"]]
and also from most to least:
[["x=27","blabla","bla","y=24"],["bla","blabla","x=17"],["bla","x=13","z=13","blabla"]]
I think I have to use key=lambda but I just couldn't figure out how to do it. Searched through the web and this website but I just can't do it.
| [
"Given your list:\nsort_this_list = [\n [\"bla\",\"blabla\",\"x=17\"],\n [\"bla\",\"x=13\",\"z=13\",\"blabla\"],\n [\"x=27\",\"blabla\",\"bla\",\"y=24\"]\n]\n\nFirst, extract the x element from the respective list!\ndef get_x(list):\n # Iterate over the items in the given list\n for item in list:\n # Check if the item starts with \"x=\"\n if item.startswith(\"x=\"):\n # Extract the value of x and return it as an integer\n return int(item.split(\"=\")[1])\n\nNow you can sort it via sorted_ascending = sorted(sort_this_list, key=get_x) (Look up the sorted(..) function: This will return it ascending as you requested it.)!\n",
"Just for the sake of it, here's one with a lambda function:\nmylist = [[\"bla\",\"blabla\",\"x=17\"],[\"bla\",\"x=13\",\"z=13\",\"blabla\"],[\"x=27\",\"blabla\",\"bla\",\"y=24\"]]\n\nmylist.sort(key = lambda l:int([item for item in l if 'x=' in item][0].split('=')[1]), reverse = True)\n\n# [['x=27', 'blabla', 'bla', 'y=24'],\n# ['bla', 'blabla', 'x=17'],\n# ['bla', 'x=13', 'z=13', 'blabla']]\n\n"
] | [
1,
0
] | [] | [] | [
"list",
"nested_lists",
"python",
"python_3.x",
"sorting"
] | stackoverflow_0074674996_list_nested_lists_python_python_3.x_sorting.txt |
Q:
discord.py "sub help command"
I was wondering if it's possible to make a sort of "sub help command": if I were to do ;help mute, it would show how to use the mute command, and so on for each command. It is kind of like Dyno, where you can do ?help (command name) and it shows you the usage of the command. I have my own help command already finished, but I was thinking about adding to it so that if someone did ;help commandname it would show them the usage, such as the arguments. I tried something at the bottom, but I don't think that will work. If you know how, please let me know.
@client.hybrid_command(name = "help", with_app_command=True, description="Get a list of commands")
@commands.guild_only()
async def help(ctx, arg = None):
pages = 3
cur_page = 1
roleplayembed = discord.Embed(color=embedcolor, title="Roleplay Commands")
roleplayembed.add_field(name=f"{client.command_prefix}Cuddle", value="Cuddle a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Hug", value="Hug a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Kiss", value="Kiss a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Slap", value="Slap a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Pat", value="Pat a user and add a message(Optional)",inline=False)
roleplayembed.set_footer(text=f"Page {cur_page+1} of {pages}")
roleplayembed.timestamp = datetime.datetime.utcnow()
basicembed = discord.Embed(color=embedcolor, title="Basic Commands")
basicembed.add_field(name=f"{client.command_prefix}Waifu", value="Posts a random AI Generated Image of a waifu",inline=False)
basicembed.add_field(name=f"{client.command_prefix}8ball", value="Works as an 8 ball",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ara", value="Gives you a random ara ara from Kurumi Tokisaki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Wikipedia", value="Search something up on the wiki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Userinfo", value="Look up info about a user",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ask", value="Ask the bot a question",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Askwhy", value="Ask the boy a question beginning with 'why'",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Avatar", value="Get a user's avatar or your own avatar",inline=False)
basicembed.set_footer(text=f"Page {cur_page} of {pages}")
basicembed.timestamp = datetime.datetime.utcnow()
moderationembed = discord.Embed(color=embedcolor, title="Moderation Commands")
moderationembed.add_field(name=f"{client.command_prefix}Kick", value="Kick a member",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Ban", value="Ban a member",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Slowmode", value="Set the slowmode of a channel",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Purge", value="Purge an amount of messages in a channel",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Mute", value="Mute a member for a time and reason",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Unmute", value="Unmute a member for a time and reason",inline=False)
moderationembed.set_footer(text=f"Page {cur_page+2} of {pages}")
moderationembed.timestamp = datetime.datetime.utcnow()
contents = [basicembed, roleplayembed, moderationembed]
if arg == None:
message = await ctx.send(embed=contents[cur_page-1])
await message.add_reaction("◀️")
await message.add_reaction("▶️")
def check(reaction, user):
return user == ctx.author and str(reaction.emoji) in ["◀️", "▶️"]
while True:
try:
reaction, user = await client.wait_for("reaction_add", timeout=60, check=check)
if str(reaction.emoji) == "▶️":
cur_page += 1
elif str(reaction.emoji) == "◀️":
cur_page -= 1
if cur_page > pages: #check if forward on last page
cur_page = 1
elif cur_page < 1: #check if back on first page
cur_page = pages
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
except asyncio.TimeoutError:
await message.delete()
break
if arg.lower() == client.command_name:
await ctx.reply(f"{client.command_prefix}{client.command_name}{client.command_argument}")
A:
There are several ways you can do this.
When you're using slash commands (which you are currently not), there is a really elegant way to do this in the form of SlashCommandGroups. This would get the commands as [command name] help instead, but I don't think that is a downside.
This would work like this, an example I thought of was blocking:
class Block(discord.ext.commands.Cog):
block = SlashCommandGroup("block")
def __init__(self, bot):
self.bot = bot
@block.command(name="add")
async def add(args):
# Something here
@block.command(name="remove")
async def remove(args):
# Something here
@block.command(name="help")
async def help(args):
# Get help message for this command
This would expose the commands block add [args], block remove [args] and block help, each of which calls its own sub-command in the cog, and I think this is the cleanest way to get a consistent help system.
You can add this cog to your bot with bot.add_cog(Block(bot)) somewhere in your code. Specifically, I'd look into extensions.
Then, for what you want to do: you're not using slash commands, so you don't need to provide autocomplete. If you want to, you can do something really hacky using this helper function, which will work as long as you have the function in your current scope:
def help(function_name):
return globals()[function_name].__doc__
Now, you can define the help of every function individually using docstrings, and the help command will simply get those doc strings and presumably do something with it.
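A minimal sketch of that idea (the mute command and its docstring here are hypothetical):
def mute(member, duration):
    """Usage: ;mute <member> <duration> -- temporarily mutes a member."""

def help(function_name):
    # looks the function up by name and returns its docstring
    return globals()[function_name].__doc__

print(help("mute"))
# Usage: ;mute <member> <duration> -- temporarily mutes a member.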
The way Dyno would do it is more complex, using slash commands again, but really similar to the first version. You simply add a slash command group again, but this time it is for helping specifically. I personally don't like this as much, as I think the code is a lot less clean, but if you really want the help [function] syntax instead of [function] help, this is how to do that:
class Help(discord.ext.commands.Cog):
help = SlashCommandGroup("help")
def __init__(self, bot):
self.bot = bot
@help.command(name="block")
async def block(args):
# Send the user the help response for blocking
@help.command(name="ask")
async def ask(args):
# Send the user the help response for asking
I hope that helps! :-)
| discord.py "sub help command" | I was wondering if it's possible to make a somewhat "sub help command" basically if I were to do ;help mute it would show how to use the mute command and so on for each command. Kinda like dyno how you can do ?help (command name) and it shows you the usage of the command. I have my own help command already finished but I was thinking about adding to it so if someone did ;help commandname it would show them the usage such as arguments I tried at the bottom but I don't think that will work. If you know how please let me know
@client.hybrid_command(name = "help", with_app_command=True, description="Get a list of commands")
@commands.guild_only()
async def help(ctx, arg = None):
pages = 3
cur_page = 1
roleplayembed = discord.Embed(color=embedcolor, title="Roleplay Commands")
roleplayembed.add_field(name=f"{client.command_prefix}Cuddle", value="Cuddle a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Hug", value="Hug a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Kiss", value="Kiss a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Slap", value="Slap a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Pat", value="Pat a user and add a message(Optional)",inline=False)
roleplayembed.set_footer(text=f"Page {cur_page+1} of {pages}")
roleplayembed.timestamp = datetime.datetime.utcnow()
basicembed = discord.Embed(color=embedcolor, title="Basic Commands")
basicembed.add_field(name=f"{client.command_prefix}Waifu", value="Posts a random AI Generated Image of a waifu",inline=False)
basicembed.add_field(name=f"{client.command_prefix}8ball", value="Works as an 8 ball",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ara", value="Gives you a random ara ara from Kurumi Tokisaki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Wikipedia", value="Search something up on the wiki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Userinfo", value="Look up info about a user",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ask", value="Ask the bot a question",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Askwhy", value="Ask the boy a question beginning with 'why'",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Avatar", value="Get a user's avatar or your own avatar",inline=False)
basicembed.set_footer(text=f"Page {cur_page} of {pages}")
basicembed.timestamp = datetime.datetime.utcnow()
moderationembed = discord.Embed(color=embedcolor, title="Moderation Commands")
moderationembed.add_field(name=f"{client.command_prefix}Kick", value="Kick a member",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Ban", value="Ban a member",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Slowmode", value="Set the slowmode of a channel",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Purge", value="Purge an amount of messages in a channel",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Mute", value="Mute a member for a time and reason",inline=False)
moderationembed.add_field(name=f"{client.command_prefix}Unmute", value="Unmute a member for a time and reason",inline=False)
moderationembed.set_footer(text=f"Page {cur_page+2} of {pages}")
moderationembed.timestamp = datetime.datetime.utcnow()
contents = [basicembed, roleplayembed, moderationembed]
if arg == None:
message = await ctx.send(embed=contents[cur_page-1])
await message.add_reaction("◀️")
await message.add_reaction("▶️")
def check(reaction, user):
return user == ctx.author and str(reaction.emoji) in ["◀️", "▶️"]
while True:
try:
reaction, user = await client.wait_for("reaction_add", timeout=60, check=check)
if str(reaction.emoji) == "▶️":
cur_page += 1
elif str(reaction.emoji) == "◀️":
cur_page -= 1
if cur_page > pages: #check if forward on last page
cur_page = 1
elif cur_page < 1: #check if back on first page
cur_page = pages
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
except asyncio.TimeoutError:
await message.delete()
break
if arg.lower() == client.command_name:
await ctx.reply(f"{client.command_prefix}{client.command_name}{client.command_argument}")
| [
"There are several ways you can do this.\nWhen you're using slash commands (which you are currently not,) there is a really elegant way to do this in the form of SlashCommandGroups. This would get the commands as [command name] help instead, but I don't think that is a downside.\nThis would work like this, an example I thought of was blocking:\nclass Block(discord.ext.commands.Cog):\n block = SlashCommandGroup(\"block\")\n\n def __init__(self, bot):\n self.bot = bot\n\n @block.command(name=\"add\")\n async def add(args):\n # Something here\n\n @block.command(name=\"remove\")\n async def remove(args):\n # Something here\n\n @block.command(name=\"help\")\n async def help(args):\n # Get help message for this command\n\nThis would expose the commands block add [args], block remove [args] and block help each of which calls their own sub-command in the cog, and I think this is the cleanest way to get a consistent help system.\nYou can add this cog to your bot with bot.add_cog(Block(bot)) somewhere in your code. Specifically, I'd look into extensions\n\nThen, for what you want to do, You're not using slash commands, so you don't need to provide autocomplete. If you want to, you can do something really hacky, using this helper function, which will work as long as you have the function in your current scope:\ndef help(function_name):\n return globals()[function_name].__doc__\n\nNow, you can define the help of every function individually using docstrings, and the help command will simply get those doc strings and presumably do something with it.\n\nThe way Dyno would do it is more complex, using slash commands again, but really similar to the first version. You simply add a slash command group again, but this time it is for helping specifically. I personally don't like this as much, as I think the code is a lot less clean, but if you really want the help [function] syntax instead of [function] help, this is how to do that:\nclass Help(discord.ext.commands.Cog):\n help = SlashCommandGroup(\"help\")\n\n def __init__(self, bot):\n self.bot = bot\n\n @help.command(name=\"block\")\n async def block(args):\n # Send the user the help response for blocking\n\n @help.command(name=\"ask\")\n async def ask(args):\n # Send the user the help response for asking\n\nI hope that helps! :-)\n"
] | [
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074661669_discord_discord.py_python.txt |
Q:
Can't add title to mapbox map
I tried to create several maps and saved as png files. In cycle I got all mapes per year. I want to add which year on the map, and I tried title=i and fig.update_layout(title_text=i, title_x=0.5), but it does not work.
import plotly.express as px
import pandas as pd
year = [1980,1981,1983]
lat = [60.572959, 60.321403, 56.990280]
lon = [40.572759, 41.321203, 36.990299]
dataframe = pd.DataFrame(list(zip(year,lat,lon)),
columns =['year', 'lat', 'lon'])
for idx, i in enumerate(sorted(dataframe['year'].unique())):
#for x in range(1980,2022):
sp = sp1[sp1['year']==i]
fig = px.scatter_mapbox(dataframe, lat='lat', lon="lon",
color_discrete_sequence=["fuchsia"], zoom=2, height=400, opacity=0.3, title = i)
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.update_layout(title_text=i, title_x=0.5)
fig.write_image("all/plot{idx}.png".format(idx=idx))
I attached a picture of one map as an example. I want to add the year to every map, in any position.
A:
Use the annotations attribute of the previously created layout object in the update_layout method to add text, specified by the x and y coordinates.
fig.update_layout(annotations=[
dict(text=i, x=0.5, y=0.5, font_size=15, showarrow=False)
])
Play around with the x and y coordinates to find the proper position you want to place your text at.
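As a rough sketch of how this could slot into the yearly loop from the question (note that a mapbox figure has no cartesian axes, so the annotation should use paper coordinates):
for idx, year in enumerate(sorted(dataframe["year"].unique())):
    fig = px.scatter_mapbox(dataframe[dataframe["year"] == year],
                            lat="lat", lon="lon", zoom=2, height=400)
    fig.update_layout(
        mapbox_style="open-street-map",
        margin={"r": 0, "t": 0, "l": 0, "b": 0},
        annotations=[dict(text=str(year), x=0.5, y=0.95,
                          xref="paper", yref="paper",
                          font_size=15, showarrow=False)],
    )
    fig.write_image(f"all/plot{idx}.png")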
A:
All you need to do is reserve space for the title by customizing the margin:
import plotly.express as px
import pandas as pd
df = pd.read_csv(
"https://raw.githubusercontent.com/plotly/datasets/master/2011_february_us_airport_traffic.csv"
)
fig = px.scatter_mapbox(df, lat="lat", lon="long", size="cnt", zoom=3)
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(
title_x=0.5,
title_y=0.95,
title_text="2011_february_us_airport_traffic",
margin={"l": 0, "r": 0, "b": 0, "t": 80}
)
fig.show()
Output:
| Can't add title to mapbox map | I tried to create several maps and saved as png files. In cycle I got all mapes per year. I want to add which year on the map, and I tried title=i and fig.update_layout(title_text=i, title_x=0.5), but it does not work.
import plotly.express as px
import pandas as pd
year = [1980,1981,1983]
lat = [60.572959, 60.321403, 56.990280]
lon = [40.572759, 41.321203, 36.990299]
dataframe = pd.DataFrame(list(zip(year,lat,lon)),
columns =['year', 'lat', 'lon'])
for idx, i in enumerate(sorted(dataframe['year'].unique())):
#for x in range(1980,2022):
sp = sp1[sp1['year']==i]
fig = px.scatter_mapbox(dataframe, lat='lat', lon="lon",
color_discrete_sequence=["fuchsia"], zoom=2, height=400, opacity=0.3, title = i)
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.update_layout(title_text=i, title_x=0.5)
fig.write_image("all/plot{idx}.png".format(idx=idx))
I put the picture of one map as example. I want to add year for every map in any place.
| [
"Use the annotations attribute of the previously created layout object in the update_layout method to add text - specified by the x and y coordinates.\nfig.update_layout(annotations=[\n dict(text=i, x=0.5, y=0.5, font_size=15, showarrow=False)\n])\n\nPlay around with the x and y coordinates to find the proper position you want to place your text at.\n",
"All you should do is to specify a space for the title by customizing the margin:\nimport plotly.express as px\nimport pandas as pd\n\ndf = pd.read_csv(\n \"https://raw.githubusercontent.com/plotly/datasets/master/2011_february_us_airport_traffic.csv\"\n)\nfig = px.scatter_mapbox(df, lat=\"lat\", lon=\"long\", size=\"cnt\", zoom=3)\nfig.update_layout(mapbox_style=\"open-street-map\")\n\nfig.update_layout(\n title_x=0.5,\n title_y=0.95,\n title_text=\"2011_february_us_airport_traffic\",\n margin={\"l\": 0, \"r\": 0, \"b\": 0, \"t\": 80}\n)\n\nfig.show()\n\nOutput:\n\n"
] | [
1,
1
] | [] | [] | [
"mapbox",
"plotly",
"python"
] | stackoverflow_0074674956_mapbox_plotly_python.txt |
Q:
Django url change language code
I am trying to change the language of the website when users click a button in Django.
I have a base project and the urls are:
urlpatterns += i18n_patterns(
# Ecommerce is the app where I want to change the language
url(r'^', include("ecommerce.urls")),
)
The url inside Ecommerce.urls is:
urlpatterns = [
url(r'^testing/$', views.test, name='url_testing'),
... other urls
]
When I visit the url above, I first go to: http://localhost/en/testing/.
I want to set a link <a href="{% url 'url_testing' %}">Change Language</a> so that when users click it, it will change language to http://localhost/zh-hans/testing/. How do I do this in my template?
EDIT
I can now change the language using the following code but the problem is that it only works once:
<form id="languageForm" action="/i18n/setlang/" method="post">
{% csrf_token %}
<input name="next" type="hidden" value="{% url 'url_testing' %}" />
<input id="newLanguageInput" type="hidden" name="language"/>
</form>
And my links are:
<li><a onclick="changeLanguage('zh-hans')">简体</a></li>
<li><a onclick="changeLanguage('zh-hant')">繁體</a></li>
The function changeLanguage is defined like:
function changeLanguage(newLanguage) {
$('input[name="newLanguageInput"]').val(newLanguage);
$('#languageForm').submit();
}
The code works when I first click any of the 2 links, and I will be redirected to the url http://localhost/zh-hans/testing/ or http://localhost/zh-hant/testing/. The problem is after I change the language once, it no longer changes. Is there something wrong with my submit?
A:
Actually it's not going to be a simple <a> link but a <form>.
Have a read on the set_language redirect view. This form will be responsible for changing languages. It's as easy as pie.
Make sure you have set some LANGUAGES first.
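A minimal settings sketch for that (the language codes are chosen to match the question; adjust them to your project):
# settings.py
from django.utils.translation import gettext_lazy as _

USE_I18N = True
LANGUAGE_CODE = "en"
LANGUAGES = [
    ("en", _("English")),
    ("zh-hans", _("Simplified Chinese")),
    ("zh-hant", _("Traditional Chinese")),
]

MIDDLEWARE = [
    # ...
    "django.middleware.locale.LocaleMiddleware",
    # ...
]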
A:
You can change the language of the website when users click a link (no url translation, no post) like this:
navigation.html (with bootstrap4 and font awesome)
<li class="nav-item dropdown">
{% get_current_language as LANGUAGE_CODE %}
<a class="nav-link dropdown-toggle" href="#" data-toggle="dropdown">{{ LANGUAGE_CODE }}</a>
<div class="dropdown-menu dropdown-menu-right">
{% get_available_languages as languages %}
{% for lang_code, lang_name in languages %}
<a href="{% url 'main:activate_language' lang_code %}" class="dropdown-item">
{% if lang_code == LANGUAGE_CODE %}
<i class="fas fa-check-circle"></i>
{% else %}
<i class="far fa-circle"></i>
{% endif %}
{{ lang_name }} ({{ lang_code }})
</a>
{% endfor %}
</div>
</li>
views.py
from django.shortcuts import redirect
from django.utils import translation
from django.views.generic.base import View
class ActivateLanguageView(View):
language_code = ''
redirect_to = ''
def get(self, request, *args, **kwargs):
self.redirect_to = request.META.get('HTTP_REFERER')
self.language_code = kwargs.get('language_code')
translation.activate(self.language_code)
request.session[translation.LANGUAGE_SESSION_KEY] = self.language_code
return redirect(self.redirect_to)
urls.py
from django.urls import path
from .views import ActivateLanguageView
app_name = 'main'
urlpatterns = [
path('language/activate/<language_code>/', ActivateLanguageView.as_view(), name='activate_language'),
]
It works for me.
A:
New snippets compatible with the new APIs (Bootstrap 5 and Django >= 4.0), updated from the excellent answer by @Boris Đurkan.
Django has dropped the translation.LANGUAGE_SESSION_KEY support.
So basically we need to make this setup using the session cookie.
class ActivateLanguageView(View):
def get(self, request, lang, **kwargs):
url = request.META.get('HTTP_REFERER', '/')
translation.activate(lang)
response = HttpResponseRedirect(url)
response.set_cookie(settings.LANGUAGE_COOKIE_NAME, lang)
return response
Bootstrap has changed its jQuery binding for dropdowns:
<li class="nav-item dropdown">
{% get_current_language as LANGUAGE_CODE %}
<a class="nav-link dropdown-toggle" href="#" role="button" id="dropdownMenuLink" data-bs-toggle="dropdown" aria-expanded="false">
<strong>{{ LANGUAGE_CODE }}</strong>
</a>
<div class="dropdown-menu dropdown-menu-right">
{% get_available_languages as languages %}
{% for lang_code, lang_name in languages %}
<a href="{% url 'activate_language' lang_code %}" class="dropdown-item">
{% if lang_code == LANGUAGE_CODE %}
<i class="bi bi-check-circle"></i>
{% else %}
<i class="bi bi-circle"></i>
{% endif %}
{{ lang_name }} ({{ lang_code }})
</a>
{% endfor %}
</div>
</li>
Other parts of the code remain basically the same.
| Django url change language code | I am trying to change the language of the website when users click a button in Django.
I have a base project and the urls are:
urlpatterns += i18n_patterns(
# Ecommerce is the app where I want to change the language
url(r'^', include("ecommerce.urls")),
)
The url inside Ecommerce.urls is:
urlpatterns = [
url(r'^testing/$', views.test, name='url_testing'),
... other urls
]
When I visit the url above, I first go to: http://localhost/en/testing/.
I want to set a link <a href="{% url 'url_testing' %}">Change Language</a> so that when users click it, it will change language to http://localhost/zh-hans/testing/. How do I do this in my template?
EDIT
I can now change the language using the following code but the problem is that it only works once:
<form id="languageForm" action="/i18n/setlang/" method="post">
{% csrf_token %}
<input name="next" type="hidden" value="{% url 'url_testing' %}" />
<input id="newLanguageInput" type="hidden" name="language"/>
</form>
And my links are:
<li><a onclick="changeLanguage('zh-hans')">简体</a></li>
<li><a onclick="changeLanguage('zh-hant')">繁體</a></li>
The function changeLanguage is defined like:
function changeLanguage(newLanguage) {
$('input[name="newLanguageInput"]').val(newLanguage);
$('#languageForm').submit();
}
The code works when I first click any of the 2 links, and I will be redirected to the url http://localhost/zh-hans/testing/ or http://localhost/zh-hant/testing/. The problem is after I change the language once, it no longer changes. Is there something wrong with my submit?
| [
"Actually it's not going to be a simple <a> link but a <form>.\nHave a read on how to set_language redirect view. This form will be responsible for changing languages. It's easy as a pie.\nMake sure you have set some LANGUAGES first.\n",
"You can change the language of the website when users click a link (no url translation, no post) like this:\nnavigation.html (with bootstrap4 and font awesome)\n<li class=\"nav-item dropdown\">\n {% get_current_language as LANGUAGE_CODE %}\n <a class=\"nav-link dropdown-toggle\" href=\"#\" data-toggle=\"dropdown\">{{ LANGUAGE_CODE }}</a>\n <div class=\"dropdown-menu dropdown-menu-right\">\n\n {% get_available_languages as languages %}\n {% for lang_code, lang_name in languages %}\n\n <a href=\"{% url 'main:activate_language' lang_code %}\" class=\"dropdown-item\">\n {% if lang_code == LANGUAGE_CODE %}\n <i class=\"fas fa-check-circle\"></i> \n {% else %}\n <i class=\"far fa-circle\"></i> \n {% endif %}\n {{ lang_name }} ({{ lang_code }})\n </a>\n\n {% endfor %}\n </div>\n</li>\n\nviews.py\nfrom django.shortcuts import redirect\nfrom django.utils import translation\nfrom django.views.generic.base import View\n\nclass ActivateLanguageView(View):\n language_code = ''\n redirect_to = ''\n\n def get(self, request, *args, **kwargs):\n self.redirect_to = request.META.get('HTTP_REFERER')\n self.language_code = kwargs.get('language_code')\n translation.activate(self.language_code)\n request.session[translation.LANGUAGE_SESSION_KEY] = self.language_code\n return redirect(self.redirect_to)\n\nurls.py\nfrom django.urls import path\nfrom .views import ActivateLanguageView\n\napp_name = 'main'\nurlpatterns = [\n path('language/activate/<language_code>/', ActivateLanguageView.as_view(), name='activate_language'),\n]\n\nIt's work for me.\n",
"New snippets compatible with new API (Boostrap 5 and Django >= 4.0) updated from the excellent @Boris Đurkan answer.\nDjango has dropped the translation.LANGUAGE_SESSION_KEY support.\nSo basically we need to make this setup using the session cookie.\nclass ActivateLanguageView(View):\n\n def get(self, request, lang, **kwargs):\n url = request.META.get('HTTP_REFERER', '/')\n translation.activate(lang)\n response = HttpResponseRedirect(url)\n response.set_cookie(settings.LANGUAGE_COOKIE_NAME, lang)\n return response\n\nBoostrap has changed its jQuery binding for dropdown:\n<li class=\"nav-item dropdown\">\n {% get_current_language as LANGUAGE_CODE %}\n <a class=\"nav-link dropdown-toggle\" href=\"#\" role=\"button\" id=\"dropdownMenuLink\" data-bs-toggle=\"dropdown\" aria-expanded=\"false\">\n <strong>{{ LANGUAGE_CODE }}</strong>\n </a>\n <div class=\"dropdown-menu dropdown-menu-right\">\n\n {% get_available_languages as languages %}\n {% for lang_code, lang_name in languages %}\n\n <a href=\"{% url 'activate_language' lang_code %}\" class=\"dropdown-item\">\n {% if lang_code == LANGUAGE_CODE %}\n <i class=\"bi bi-check-circle\"></i> \n {% else %}\n <i class=\"bi bi-circle\"></i> \n {% endif %}\n {{ lang_name }} ({{ lang_code }})\n </a>\n\n {% endfor %}\n </div>\n</li>\n\nOther parts of code remains basically the same.\n"
] | [
4,
4,
0
] | [] | [] | [
"django",
"django_i18n",
"python"
] | stackoverflow_0042745198_django_django_i18n_python.txt |
Q:
decoding a Byte Array sent from arduino to TCP Server made with Python
I am converting sensor data to bytes and writing a byte array from an Arduino to a TCP server made with Python, but somehow the sensor data in the array triggers variations of the UTF-8 errors displayed below when decoded.
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcf in position 1: invalid continuation byte
Where "0xcf" and "0xff" change from error to error.
I suspect this is because the sensor data can sometimes be negative values. I know a byte cannot hold a negative number and only covers 0-255. I think I must send a dedicated "-" sign before the negative values. However, I cannot predict when the negative values occur. Therefore, there must be a better way of doing this. I am able to send the array of bytes without decoding it, but I suspect there are some problems here as well, because the first two positions should hold different values than the remaining 6 positions, as shown below:
b'\xff\x00\x00\x00\x00\x00\x00\x00' b'\x02\x00\x00\x00\x00\x00\x00\x00'
My question is: how can I send negative values as bytes and decode them correctly?
For context I will attach my code.
Arduino Client:
`
#include <Ethernet.h>
#include <SPI.h>
#include "AK09918.h"
#include "ICM20600.h"
#include <Wire.h>
//----------------------------------
//tiltsensor
AK09918_err_type_t err;
int32_t x, y, z;
AK09918 ak09918;
ICM20600 icm20600(true);
int16_t acc_x, acc_y, acc_z;
int32_t offset_x, offset_y, offset_z;
double roll, pitch;
//----------------------------------
//Ethernet
byte mac[] = { 0xBE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; //not important if only one ethernet shield
byte ip[] = { 192, 168, X, X}; //IP of this arduino unit
byte server[] = { 192, 168, X, X}; //IP of server you want to contact
int tcp_port = 65432; // a nice port to send/acess the information on
EthernetClient client;
//----------------------------------
//byte array
byte array[8] = {0, 0, 0, 0, 0, 0, 0, 0};
//----------------------------------
void setup()
{
//tiltsensor
Wire.begin();
err = ak09918.initialize();
icm20600.initialize();
ak09918.switchMode(AK09918_POWER_DOWN);
ak09918.switchMode(AK09918_CONTINUOUS_100HZ);
Serial.begin(9600);
err = ak09918.isDataReady();
while (err != AK09918_ERR_OK) {
Serial.println("Waiting Sensor");
delay(100);
err = ak09918.isDataReady();}
Serial.println("Start figure-8 calibration after 2 seconds.");
delay(2000);
//calibrate(10000, &offset_x, &offset_y, &offset_z);
Serial.println("");
//----------------------------------
//Ethernet
Ethernet.begin(mac, ip);
//Serial.begin(9600);
delay(1000);
Serial.println("Connecting...");
if (client.connect(server, tcp_port)) { // Connection to server
Serial.println("Connected to server.js");
client.println();}
else {
Serial.println("connection failed");}
//----------------------------------
}
void loop()
{
//tiltsensor
acc_x = icm20600.getAccelerationX();
acc_y = icm20600.getAccelerationY();
acc_z = icm20600.getAccelerationZ();
roll = atan2((float)acc_y, (float)acc_z) * 57.3;
pitch = atan2(-(float)acc_x, sqrt((float)acc_y * acc_y + (float)acc_z * acc_z)) * 57.3;
//----------------------------------
//bytearray
array[0] = byte(roll);
array[1] = byte(pitch);
//----------------------------------
//test
Serial.write(array, 8);
Serial.println();
delay(500);
//----------------------------------
//Ethernet
if (client.available()) {
//client.print(array);
//client.write(array[0]);
client.write(array, 8);
//client.write(array, 8);//((uint8_t*) array, sizeof(array));
delay(3000);
}
if (!client.connected()) {
Serial.println();
Serial.println("disconnecting.");
client.stop();
for(;;)
;
}
//----------------------------------
}
`
TCP server (python):
`
# echo-server.py
import time
import socket
HOST = "192.168.X.X" # Standard loopback interface address (localhost)
PORT = 65432 # Port to listen on (non-privileged ports are > 1023)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((HOST, PORT))
s.listen()
conn, addr = s.accept()
with conn:
print(f"Connected by {addr}")
while True:
data = conn.recv(1024)
#msg = s.recv(1024)
#print(msg.decode("utf-8"))
print(data.decode("utf-8"))
#time.sleep(3)
#conn.sendall(data)
if not data:
break
conn.send(data)
`
I am able to establish a connection to the server and the client can write to it.
However, I get UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa4 in position 0: invalid start byte type errors.
A:
I was able to make some progress.
For the Arduino:
#include <Ethernet.h>
#include <SPI.h>
#include "AK09918.h"
#include "ICM20600.h"
#include <Wire.h>
//----------------------------------
//tiltsensor
AK09918_err_type_t err;
int32_t x, y, z;
AK09918 ak09918;
ICM20600 icm20600(true);
int16_t acc_x, acc_y, acc_z;
int32_t offset_x, offset_y, offset_z;
double roll, pitch;
//----------------------------------
//Ethernet
byte mac[] = { 0xBE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; //not important if only one ethernet shield
byte ip[] = { 192, 168, X, XX}; //IP of this arduino unit
byte server[] = { 192, 168, X, XX}; //IP of server you want to contact
int tcp_port = 65432; // a nice port to send/acess the information on
EthernetClient client;
//----------------------------------
//byte array
union some_data{ //convert a float to 4 bytes
float tobytes;
byte bytearray[4];
};
byte array[14] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0}; //initial array
//----------------------------------
void setup()
{
//tiltsensor
Wire.begin();
err = ak09918.initialize();
icm20600.initialize();
ak09918.switchMode(AK09918_POWER_DOWN);
ak09918.switchMode(AK09918_CONTINUOUS_100HZ);
Serial.begin(9600);
err = ak09918.isDataReady();
while (err != AK09918_ERR_OK) {
Serial.println("Waiting Sensor");
delay(100);
err = ak09918.isDataReady();}
Serial.println("Start figure-8 calibration after 2 seconds.");
delay(2000);
//calibrate(10000, &offset_x, &offset_y, &offset_z);
Serial.println("");
//----------------------------------
//Ethernet
Ethernet.begin(mac, ip);
//Serial.begin(9600);
delay(1000);
Serial.println("Connecting...");
if (client.connect(server, tcp_port)) { // Connection to server
Serial.println("Connected to server.js");
client.println();}
else {
Serial.println("connection failed");}
//----------------------------------
//byte array
//----------------------------------
}
void loop()
{
//tiltsensor
acc_x = icm20600.getAccelerationX();
acc_y = icm20600.getAccelerationY();
acc_z = icm20600.getAccelerationZ();
roll = atan2((float)acc_y, (float)acc_z) * 57.3;
pitch = atan2(-(float)acc_x, sqrt((float)acc_y * acc_y + (float)acc_z * acc_z)) * 57.3;
//----------------------------------
//bytearray
if (roll < 0) {array[0] = 0;} //put identifier for positive or negative value in specific position in byte array
else {array[0] = 1;}
if (pitch < 0) {array[5] = 0;} // same for second sensor value
else {array[5] = 1;}
union some_data sensor1; //use the union function separately
union some_data sensor2;
sensor1.tobytes =abs(roll); //get byte array for sensor value
sensor2.tobytes =abs(pitch); //get byte array for sensor value
for (int i=0; i<sizeof sensor1.bytearray/sizeof sensor1.bytearray[0]; i++) { //put sensor value byte array into main byte array
array[1+i] = sensor1.bytearray[i];
array[6+i] = sensor2.bytearray[i];
}
//----------------------------------
//test
Serial.write(array, sizeof array);
Serial.println();
delay(500);
//----------------------------------
//Ethernet
if (client.available()) {
//client.print(array);
//client.write(array[0]);
client.write(array, sizeof array);
//client.write(array, 8);//((uint8_t*) array, sizeof(array));
delay(3000);
}
if (!client.connected()) {
Serial.println();
Serial.println("disconnecting.");
client.stop();
for(;;)
;
}
//----------------------------------
}
For the Python TCP server:
# echo-server.py
import time
import socket
HOST = "192.168.XX.XX" # Standard loopback interface address (localhost)
PORT = 65432 # Port to listen on (non-privileged ports are > 1023)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((HOST, PORT))
s.listen()
conn, addr = s.accept()
with conn:
print(f"Connected by {addr}")
while True:
data = conn.recv(1024)
#msg = s.recv(1024)
#print(msg.decode("utf-8"))
print(data)#.decode("utf-8"))
#time.sleep(3)
#conn.sendall(data)
if not data:
break
conn.send(data)
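To get signed floats back out of those 14-byte frames on the server side, a hedged decoding sketch (assuming the layout above: sign flags at bytes 0 and 5, absolute values as little-endian 4-byte floats at bytes 1-4 and 6-9; AVR boards are little-endian):
import struct

def decode_frame(frame):
    # bytes 0 and 5 are sign flags (0 = negative, 1 = positive);
    # bytes 1-4 and 6-9 hold abs(roll) / abs(pitch) as 4-byte floats
    roll = struct.unpack_from("<f", frame, 1)[0]
    pitch = struct.unpack_from("<f", frame, 6)[0]
    if frame[0] == 0:
        roll = -roll
    if frame[5] == 0:
        pitch = -pitch
    return roll, pitch

# e.g. inside the receive loop above:
# roll, pitch = decode_frame(data[:14])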
A:
As @hcheung mentioned in the comment, on the Arduino side you can simply use
Serial.write(&roll, 4);
Serial.write(&pitch, 4);

The sign is already encoded in these bytes as the IEEE-754 sign bit. See the wiki for an example.
On your Python side, I would suggest you look into the struct module.
For your specific case just use
roll, pitch = struct.unpack("<ff", data)

where "<ff" describes your format: two little-endian 4-byte floats (on AVR boards, double is the same 4-byte type, so writing 4 bytes of roll sends an IEEE-754 float).
| decoding a Byte Array sent from arduino to TCP Server made with Python | I am converting sensor data to byte and writing a byte array from an arduino to a TCP server made with Python, but somehow the sensor data which are in the array triggers variations of the UTF-8 errors displayed below when decoded.
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcf in position 1: invalid continuation byte
Where "0xcf" and "0xff" change from error to error.
I suspect this is because the sensor data can sometimes be negative values. I know a byte cannot hold a negative number and UTF-8 can do 0-256. I think I must send a dedicated "-" sign before the negative values. However, I cannot predict when the negative values occur. Therefore, there must be a better way of doing this. I am able to send the array of bytes without decoding it, but I suspect there are some problems here as well because the two first positions should hold different values than the remaining 6 positions, as shown below:
b'\xff\x00\x00\x00\x00\x00\x00\x00' b'\x02\x00\x00\x00\x00\x00\x00\x00'
My question is: how can I send negative values as byte and decode it correctly.
For context I will attach my code.
Arduino Client:
`
#include <Ethernet.h>
#include <SPI.h>
#include "AK09918.h"
#include "ICM20600.h"
#include <Wire.h>
//----------------------------------
//tiltsensor
AK09918_err_type_t err;
int32_t x, y, z;
AK09918 ak09918;
ICM20600 icm20600(true);
int16_t acc_x, acc_y, acc_z;
int32_t offset_x, offset_y, offset_z;
double roll, pitch;
//----------------------------------
//Ethernet
byte mac[] = { 0xBE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; //not important if only one ethernet shield
byte ip[] = { 192, 168, X, X}; //IP of this arduino unit
byte server[] = { 192, 168, X, X}; //IP of server you want to contact
int tcp_port = 65432; // a nice port to send/acess the information on
EthernetClient client;
//----------------------------------
//byte array
byte array[8] = {0, 0, 0, 0, 0, 0, 0, 0};
//----------------------------------
void setup()
{
//tiltsensor
Wire.begin();
err = ak09918.initialize();
icm20600.initialize();
ak09918.switchMode(AK09918_POWER_DOWN);
ak09918.switchMode(AK09918_CONTINUOUS_100HZ);
Serial.begin(9600);
err = ak09918.isDataReady();
while (err != AK09918_ERR_OK) {
Serial.println("Waiting Sensor");
delay(100);
err = ak09918.isDataReady();}
Serial.println("Start figure-8 calibration after 2 seconds.");
delay(2000);
//calibrate(10000, &offset_x, &offset_y, &offset_z);
Serial.println("");
//----------------------------------
//Ethernet
Ethernet.begin(mac, ip);
//Serial.begin(9600);
delay(1000);
Serial.println("Connecting...");
if (client.connect(server, tcp_port)) { // Connection to server
Serial.println("Connected to server.js");
client.println();}
else {
Serial.println("connection failed");}
//----------------------------------
}
void loop()
{
//tiltsensor
acc_x = icm20600.getAccelerationX();
acc_y = icm20600.getAccelerationY();
acc_z = icm20600.getAccelerationZ();
roll = atan2((float)acc_y, (float)acc_z) * 57.3;
pitch = atan2(-(float)acc_x, sqrt((float)acc_y * acc_y + (float)acc_z * acc_z)) * 57.3;
//----------------------------------
//bytearray
array[0] = byte(roll);
array[1] = byte(pitch);
//----------------------------------
//test
Serial.write(array, 8);
Serial.println();
delay(500);
//----------------------------------
//Ethernet
if (client.available()) {
//client.print(array);
//client.write(array[0]);
client.write(array, 8);
//client.write(array, 8);//((uint8_t*) array, sizeof(array));
delay(3000);
}
if (!client.connected()) {
Serial.println();
Serial.println("disconnecting.");
client.stop();
for(;;)
;
}
//----------------------------------
}
`
TCP server (python):
`
# echo-server.py
import time
import socket
HOST = "192.168.X.X" # Standard loopback interface address (localhost)
PORT = 65432 # Port to listen on (non-privileged ports are > 1023)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((HOST, PORT))
s.listen()
conn, addr = s.accept()
with conn:
print(f"Connected by {addr}")
while True:
data = conn.recv(1024)
#msg = s.recv(1024)
#print(msg.decode("utf-8"))
print(data.decode("utf-8"))
#time.sleep(3)
#conn.sendall(data)
if not data:
break
conn.send(data)
`
I am able to establish a connection to the server and the client can write to it.
However, I get UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa4 in position 0: invalid start byte type errors.
| [
"I was able to make some progress,\nfor Arduino:\n#include <Ethernet.h>\n#include <SPI.h>\n#include \"AK09918.h\"\n#include \"ICM20600.h\"\n#include <Wire.h>\n//----------------------------------\n\n//tiltsensor\nAK09918_err_type_t err;\nint32_t x, y, z;\nAK09918 ak09918;\nICM20600 icm20600(true);\nint16_t acc_x, acc_y, acc_z;\nint32_t offset_x, offset_y, offset_z;\ndouble roll, pitch;\n//----------------------------------\n\n//Ethernet\nbyte mac[] = { 0xBE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; //not important if only one ethernet shield\nbyte ip[] = { 192, 168, X, XX}; //IP of this arduino unit\nbyte server[] = { 192, 168, X, XX}; //IP of server you want to contact\nint tcp_port = 65432; // a nice port to send/acess the information on\nEthernetClient client; \n//----------------------------------\n\n\n//byte array\nunion some_data{ //convert a float to 4 bytes\n float tobytes;\n byte bytearray[4];\n};\n\nbyte array[14] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0}; //intial array\n//----------------------------------\n\nvoid setup()\n{\n //tiltsensor\n Wire.begin();\n err = ak09918.initialize();\n icm20600.initialize();\n ak09918.switchMode(AK09918_POWER_DOWN);\n ak09918.switchMode(AK09918_CONTINUOUS_100HZ);\n Serial.begin(9600);\n err = ak09918.isDataReady();\n while (err != AK09918_ERR_OK) {\n Serial.println(\"Waiting Sensor\");\n delay(100);\n err = ak09918.isDataReady();}\n Serial.println(\"Start figure-8 calibration after 2 seconds.\");\n delay(2000);\n //calibrate(10000, &offset_x, &offset_y, &offset_z);\n Serial.println(\"\");\n //----------------------------------\n\n //Ethernet\n Ethernet.begin(mac, ip);\n //Serial.begin(9600);\n delay(1000);\n Serial.println(\"Connecting...\");\n if (client.connect(server, tcp_port)) { // Connection to server\n Serial.println(\"Connected to server.js\");\n client.println();} \n else {\n Serial.println(\"connection failed\");}\n //----------------------------------\n\n //byte array\n\n //----------------------------------\n}\n\nvoid loop()\n{\n //tiltsensor\n acc_x = icm20600.getAccelerationX();\n acc_y = icm20600.getAccelerationY();\n acc_z = icm20600.getAccelerationZ();\n roll = atan2((float)acc_y, (float)acc_z) * 57.3;\n pitch = atan2(-(float)acc_x, sqrt((float)acc_y * acc_y + (float)acc_z * acc_z)) * 57.3;\n //----------------------------------\n\n\n //bytearray\n if (roll < 0) {array[0] = 0;} //put identifier for positive or negative value in specific posision in byte array\n else {array[0] = 1;}\n\n if (pitch < 0) {array[5] = 0;} // same for second sensor value\n else {array[5] = 1;}\n\n union some_data sensor1; //use the union function separately\n union some_data sensor2;\n\n sensor1.tobytes =abs(roll); //get byte array for sensor value\n sensor2.tobytes =abs(pitch); //get byte array for sensor value\n\n for (int i=0; i<sizeof sensor1.bytearray/sizeof sensor1.bytearray[0]; i++) { //put sensor value byte array into main byte array\n array[1+i] = sensor1.bytearray[i];\n array[6+i] = sensor2.bytearray[i];\n }\n //----------------------------------\n\n //test\n Serial.write(array, sizeof array);\n Serial.println();\n delay(500); \n //----------------------------------\n\n\n //Ethernet\n if (client.available()) {\n //client.print(array);\n //client.write(array[0]);\n client.write(array, sizeof array);\n //client.write(array, 8);//((uint8_t*) array, sizeof(array));\n delay(3000); \n }\n if (!client.connected()) {\n Serial.println();\n Serial.println(\"disconnecting.\");\n client.stop();\n for(;;)\n ;\n }\n //----------------------------------\n}\n\nFor python TCP 
Server.\n# echo-server.py\nimport time\nimport socket\n\nHOST = \"192.168.XX.XX\" # Standard loopback interface address (localhost)\nPORT = 65432 # Port to listen on (non-privileged ports are > 1023)\n\nwith socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.bind((HOST, PORT))\n s.listen()\n conn, addr = s.accept()\n with conn:\n print(f\"Connected by {addr}\")\n while True:\n data = conn.recv(1024)\n #msg = s.recv(1024)\n #print(msg.decode(\"utf-8\"))\n print(data)#.decode(\"utf-8\"))\n #time.sleep(3)\n #conn.sendall(data)\n if not data:\n break\n conn.send(data)\n \n\n",
"As @hcheung mentioned in the comment, in the Arduino side you can simply use\nSerial.write(&roll, 4);\nSerial.wirte(&pitch,4);\n\nThe sign is already encoded in these bytes as the first bits. See wiki for example.\nOn your python side, I would suggest you look into the struct module\nFor your specific case just use\nroll, pitch = struct.unpack(\"dd\", data)\n\nwhere \"dd\" describes your format of two doubles.\n"
] | [
0,
0
] | [] | [] | [
"arduino",
"python",
"tcpclient",
"utf_8"
] | stackoverflow_0074613294_arduino_python_tcpclient_utf_8.txt |
Q:
Zip or create key-value pairs from two lists of lists
I have the following MWE:
token_uniqueness_sparse = pd.DataFrame({'token_a': [0.1, 0.0],
'token_b': [0.0, 0.2],
'token_c': [0.3, 0.0]
}
)
sf_fake = pd.DataFrame({'items': [ ['token_a', 'token_c'],
['token_b']],
'rcol': [1,2]
})
token_uniqueness_dense = (token_uniqueness_sparse
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
token_uniqueness_dense
output_column
0 [0.1, 0.3]
1 [0.2]
I'm trying to combine the two lists of lists such that I get key-value pairs and can sort the keys by the value. For example:
{token_a: 0.1, token_c: 0.3}
{token_b: 0.2}
If there's a better/smarter way than the way I'm asking for, please let me know.
A:
Here is one way:
sf_fake=sf_fake.explode('items').set_index('items').T.reset_index(drop=True)
'''
items token_a token_c token_b
0 1 1 2
'''
#for example, token_a takes the value in index number 1 in token_uniqueness_sparse df
final={i:token_uniqueness_sparse[i].iloc[sf_fake[i].iloc[0] -1] for i in token_uniqueness_sparse.columns} #token_uniqueness_sparse's index starts at 0. That's why we subtract 1.
# output: {'token_a': 0.1, 'token_b': 0.2, 'token_c': 0.3}
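Another way, closer to the per-row output asked for, is to build one dict per row directly (a sketch that keeps only non-zero tokens and sorts each dict by value):
dicts = [
    dict(sorted(((token, val) for token, val in row.items() if val != 0),
                key=lambda kv: kv[1]))
    for _, row in token_uniqueness_sparse.iterrows()
]
# [{'token_a': 0.1, 'token_c': 0.3}, {'token_b': 0.2}]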
| Zip or create key-value pairs from two lists of lists | I have the following MWE:
token_uniqueness_sparse = pd.DataFrame({'token_a': [0.1, 0.0],
'token_b': [0.0, 0.2],
'token_c': [0.3, 0.0]
}
)
sf_fake = pd.DataFrame({'items': [ ['token_a', 'token_c'],
['token_b']],
'rcol': [1,2]
})
token_uniqueness_dense = (token_uniqueness_sparse
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
token_uniqueness_dense
output_column
0 [0.1, 0.3]
1 [0.2]
I'm trying to combine the two lists of lists such that I get key-value pairs and can sort the keys by the value. For example:
{token_a: 0.1, token_c: 0.3}
{token_b: 0.2}
If there's a better/smarter way than the way I'm asking for, please let me know.
| [
"Here is a one way:\nsf_fake=sf_fake.explode('items').set_index('items').T.reset_index(drop=True)\n'''\nitems token_a token_c token_b\n0 1 1 2\n'''\n#for example, token_a takes the value in index number 1 in token_uniqueness_sparse df\n\nfinal={i:token_uniqueness_sparse[i].iloc[sf_fake[i].iloc[0] -1] for i in token_uniqueness_sparse.columns} #token_uniqueness_sparse' index starting 0. Thats why subtract 1.\n\n# output: {'token_a': 0.1, 'token_b': 0.2, 'token_c': 0.3}\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074670491_python.txt |
Q:
Exact value of a root on Python
I'm writing a programme that converts complex numbers.
Right now I'm having problems with this piece of code:
import numpy
complexnr = 1+1j
mod= numpy.absolute(complexnr)
print(mod)
The output of this code is:
1.4142135623730951
I would like to get √2 as the output.
I have been advised to use the sympy module but I have had no luck with this either. What would be the easiest way to get this result?
EDIT
input_list = ["Enter your complex number (a+bi): ", \
"Degrees or radians?", \
"To how many decimal places do you want to round the argument?"]
output = multenterbox(text, title, input_list)
algebraline = output[0]
choice = output[1]
round2 = int(output[2])
#converting complex number to a suitable form for numpy
if "i" in algebraline:
j = algebraline.replace("i","j")
indeks = algebraline.index("i")
list = []
for element in algebraline:
list.append(element)
if "i" in algebraline and algebraline[indeks-1]=="+" or algebraline[indeks-1]=="-":
list.insert(indeks, 1)
x = "".join(str(e) for e in list)
j = x.replace("i","j")
arv = eval(j)
elif "i" not in algebraline:
arv = eval(algebraline)
#let's find the module
a = int(list[0])
b = int(list[2])
module = sqrt(a**2+b**2)
This method works well when the complex number is, for example, 1+i; however, when I try to insert sqrt(3)-1i, the list looks like ['s', 'q', 'r', 't', '(', '3', ')', '-', 1, 'i'] and my programme won't work. The same problem occurs when b is a root (for example 1-sqrt(3)i). What can be done to make it work for square roots as well? (I need numpy later on to calculate angles; that's why converting 'i' into 'j' is important.)
A:
Works by using
I (from sympy) rather than 1j
builtin abs function, which calls sympy.Abs for complex arguments
Code
from sympy import I
complexnr = 1 + I # use I rather than 1j
print(abs(complexnr)) # also works with np.abs and np.absolute
Output
A:
If you want to use SymPy, you have to write the complex numbers as sympy expressions.
from sympy import *
cabs = lambda z: sqrt(re(z)**2 + im(z)**2)
complexnr = 1 + 1j
print(cabs(complexnr))
# out: 1.4142135623731
We are getting a float number because complexnr is of type complex and its real and imaginary parts are of type float. Thus, SymPy's re and im functions return float numbers. But when sqrt receives a float number, it evaluates the result.
We can workaround this problem in two ways.
The first: if we are dealing with simple complex numbers where real and imaginary parts are integers, we can write the complex number as a string, sympify it (which means convert to a sympy expression):
complexnr = sympify("1 + 1j")
print(cabs(complexnr))
# out: sqrt(2)
A second way consists of using the complex number directly, then applying nsimplify to attempt to convert the resulting float number to some symbolic form:
complexnr = 1 + 1j
result = cabs(complexnr) # result is a Float number, 1.4142135623731
print(result.nsimplify())
# out: sqrt(2)
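The same trick applies to the float produced by numpy in the question, assuming you pass sqrt(2) as a candidate constant:
import numpy
from sympy import nsimplify, sqrt

mod = numpy.absolute(1 + 1j)      # 1.4142135623730951
print(nsimplify(mod, [sqrt(2)]))  # sqrt(2)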
| Exact value of a root on Python | I'm writing a programme that converts complex numbers.
Right now I'm having problems with this piece of code:
import numpy
complexnr = 1+1j
mod= numpy.absolute(complexnr)
print(mod)
The output of this code is:
1.4142135623730951
I would like to get √2 as the output.
I have been advised to use the sympy module but I have had no luck with this either. What would be the easiest way to get this result?
EDIT
input_list = ["Enter your complex number (a+bi): ", \
"Degrees or radians?", \
"To how many decimal places do you want to round the argument?"]
output = multenterbox(text, title, input_list)
algebraline = output[0]
choice = output[1]
round2 = int(output[2])
#converting complex number to a suitable form for numpy
if "i" in algebraline:
j = algebraline.replace("i","j")
indeks = algebraline.index("i")
list = []
for element in algebraline:
list.append(element)
if "i" in algebraline and algebraline[indeks-1]=="+" or algebraline[indeks-1]=="-":
list.insert(indeks, 1)
x = "".join(str(e) for e in list)
j = x.replace("i","j")
arv = eval(j)
elif "i" not in algebraline:
arv = eval(algebraline)
#let's find the module
a = int(list[0])
b = int(list[2])
module = sqrt(a**2+b**2)
this method works well when the complex number is 1+i for example, however when i try to insert sqrt(3)-1i, the list looks like this ['s', 'q', 'r', 't', '(', '3', ')', '-', 1, 'i'] and my programme won't work. Same problem occurs when b is a root (for example 1-sqrt(3)i). What can be done to make it work for square roots as well? (I need numpy later on to calculate angles, that's why converting 'i' into 'j' is important)
| [
"Works by using\n\nI (from sympy) rather than 1j\nbuiltin abs function which calls sympby.Abs for complex arguments\n\nCode\nfrom sympy import I\n\ncomplexnr = 1 + I # use I rather than 1j\nprint(abs(complexnr)) # also works with np.abs and np.absolute\n\nOutput\n\n",
"If you want to use SymPy, you have to write the complex numbers as sympy expressions.\nfrom sympy import *\ncabs = lambda z: sqrt(re(z)**2 + im(z)**2)\ncomplexnr = 1 + 1j\nprint(cabs(complexnr))\n# out: 1.4142135623731\n\nWe are getting a float number because complexnr is of type complex and its real and imaginary parts are of type float. Thus, SymPy's re and im functions returns float numbers. But when sqrt receives a float number, it evaluates the result.\nWe can workaround this problem in two ways.\nThe first: if we are dealing with simple complex numbers where real and imaginary parts are integers, we can write the complex number as a string, sympify it (which means convert to a sympy expression):\ncomplexnr = sympify(\"1 + 1j\")\nprint(cabs(complexnr))\n# out: sqrt(2)\n\nA second way consist in using the complex number directly, then apply nsimplify in order to attempt to convert the resulting float number to some symbolic form:\ncomplexnr = 1 + 1j\nresult = cabs(complexnr) # result is a Float number, 1.4142135623731\nprint(result.nsimplify())\n# out: sqrt(2)\n\n"
] | [
1,
0
] | [] | [] | [
"numpy",
"python",
"sympy"
] | stackoverflow_0074674649_numpy_python_sympy.txt |
Q:
Python folium - Circle not working along with popup
I found some nice solutions here:
How to create on click popup which includes plots using ipyleaflet, Folium or Geemap?
which potentially would allow me to assign more things to the marker when it's clicked. In my situation I have a lot of circles assigned to the markers, but they all appear at once, which doesn't look good.
I need the folium.Circle populated at the moment when I click on the marker. It could appear along with the pop-up information.
My code looks as follows:
fm = folium.Marker(
location=[lat,lng],
popup=folium.Popup(max_width=450).add_child(
folium.Circle(
[lat,lng],
radius=10,
fill=True,
weight=0.2)),
icon = folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))
map.add_child(fm)
Unfortunately, it doesn't work, as my map comes without some features:
Despite no error from Python's console side, I have an error in the map console
Uncaught TypeError: Cannot read properties of undefined (reading 'addLayer')
at i.addTo (leaflet.js:5:64072)
and I haven't the faintest idea how to solve it.
Is there any way to make the circle appear only when the marker is clicked?
A:
To create a marker on a folium map that displays a circle when clicked, you can use the following steps:
First, create a marker on the map using the folium.Marker class and specify the location and any popup information you want to display when the marker is clicked.
fm = folium.Marker(
location=[lat, lng],
popup=folium.Popup(max_width=450).add_child(
folium.Vega(data, width=450, height=250)),
icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))
Next, create a circle using the folium.Circle class and specify the location and radius of the circle.
circle = folium.Circle(
[lat, lng],
radius=10,
fill=True,
weight=0.2)
To make the circle appear only when the marker is clicked, you can add the circle to the marker's popup attribute using the add_child() method.
fm.popup.add_child(circle)
Finally, add the marker to the map using the add_child() method.
map.add_child(fm)
Here is an example of what the final code might look like:
fm = folium.Marker(
location=[lat, lng],
popup=folium.Popup(max_width=450),
icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))
circle = folium.Circle(
[lat, lng],
radius=10,
fill=True,
weight=0.2)
fm.popup.add_child(circle)
map.add_child(fm)
A:
Not necessarily the best approach - but a smooth alternative to @gentleslaughter's implementation:
You could use a click_action argument in folium.Marker with a JavaScript function that will add the circle to the map whenever the marker is clicked!
js_f= """
function onClick(e) {
var circle = L.circle([e.latlng.lat, e.latlng.lng], {radius: 10, fill: true, weight: 0.2}).addTo(map);
}
"""
Here the exact same folium.Marker with the click_action:
fm = folium.Marker(
location=[lat, lng],
popup=folium.Popup(max_width=450),
icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'),
click_action=js_f,
)
map.add_child(fm)
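Another hedged option is to register the click handler through folium's generated JS variable names via get_name() — a minimal sketch, assuming lat and lng are defined as in the question, and appending the script to the figure's script section so it runs after the map is built:
import folium

m = folium.Map(location=[lat, lng], zoom_start=13)
fm = folium.Marker(
    location=[lat, lng],
    icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))
fm.add_to(m)

# folium names its JS objects; get_name() exposes those generated names
js = f"""
{fm.get_name()}.on('click', function(e) {{
    L.circle(e.latlng, {{radius: 10, fill: true, weight: 0.2}}).addTo({m.get_name()});
}});
"""
m.get_root().script.add_child(folium.Element(js))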
| Python folium - Circle not working along with popup | I found some nice solutions here:
How to create on click popup which includes plots using ipyleaflet, Folium or Geemap?
which potentially would allow me to assign more things to the marker when it's clicked. In my situation I have a lot of circles assigned to the markers, but they all appear at once, which doesn't look good.
I need the folium.Circle populated at the moment when I click on the marker. It could appear along with the pop-up information.
My code looks as follows:
fm = folium.Marker(
location=[lat,lng],
popup=folium.Popup(max_width=450).add_child(
folium.Circle(
[lat,lng],
radius=10,
fill=True,
weight=0.2)),
icon = folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))
map.add_child(fm)
Unfortunately, it doesn't work, as my map comes without some features:
Despite no error from Python's console side, I have an error in the map console
Uncaught TypeError: Cannot read properties of undefined (reading 'addLayer')
at i.addTo (leaflet.js:5:64072)
and I haven't the faintest idea how to solve it.
Is there any way to make the circle appear only when the marker is clicked?
| [
"To create a marker on a folium map that displays a circle when clicked, you can use the following steps:\n\nFirst, create a marker on the map using the folium.Marker class and specify the location and any popup information you want to display when the marker is clicked.\n\nfm = folium.Marker(\n location=[lat, lng],\n popup=folium.Popup(max_width=450).add_child(\n folium.Vega(data, width=450, height=250)),\n icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))\n\n\nNext, create a circle using the folium.Circle class and specify the location and radius of the circle.\n\ncircle = folium.Circle(\n [lat, lng],\n radius=10,\n fill=True,\n weight=0.2)\n\n\nTo make the circle appear only when the marker is clicked, you can add the circle to the marker's popup attribute using the add_to() method.\n\nfm.popup.add_child(circle)\n\n\nFinally, add the marker to the map using the add_child() method.\n\nmap.add_child(fm)\n\nHere is an example of what the final code might look like:\nfm = folium.Marker(\n location=[lat, lng],\n popup=folium.Popup(max_width=450),\n icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'))\n\ncircle = folium.Circle(\n [lat, lng],\n radius=10,\n fill=True,\n weight=0.2)\n\nfm.popup.add_child(circle)\nmap.add_child(fm)\n\n",
"Not necessarily the best approach - but a smooth alternative to @gentleslaughter's implementation:\nYou could use a click_action argument in folium.Marker with a JavaScript function that will add the circle to the map whenever the marker is clicked!\njs_f= \"\"\"\n function onClick(e) {\n var circle = L.circle([e.latlng.lat, e.latlng.lng], {radius: 10, fill: true, weight: 0.2}).addTo(map);\n }\n\"\"\"\n\nHere the exact same folium.Marker with the click_action:\nfm = folium.Marker(\n location=[lat, lng],\n popup=folium.Popup(max_width=450),\n icon=folium.Icon(color='darkpurple', icon='glyphicon-briefcase'),\n click_action=js_f,\n)\nmap.add_child(fm)\n\n"
] | [
0,
0
] | [] | [] | [
"folium",
"leaflet",
"python"
] | stackoverflow_0074520790_folium_leaflet_python.txt |
Q:
Left join in (flask)sqlalchemy with getting unmatched values and filter on the right table
I want to get a list of all assignments, with the progress of the user (the UserAssignments table) also in the result set. That means there should be a join between the assignments and userassignments table (where the assignmentid is equal), but also a filter to check if the progress is from the current user. The diagram of the database and the actual models are listed below.
class User(db.Model):  # class header restored from context (db.ForeignKey('user.id') below)
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(20), index=True, unique=True, nullable=False)
    password_hash = db.Column(db.String(128), nullable=False)
    roleid = db.Column(db.Integer, db.ForeignKey('role.roleid'), nullable=False)
    groups = db.relationship('Group', secondary=users_groups, lazy='dynamic')
    assignments = db.relationship('Assignment', secondary=users_assignments, lazy='dynamic')

class Assignment(db.Model):
    assignmentid = db.Column(db.Integer, primary_key=True)
    assignmentname = db.Column(db.String(128))
    assignmentranking = db.Column(db.Integer)
    assignmentquestion = db.Column(db.String, nullable=False)

    def __repr__(self):
        return '<Assignment {}>'.format(self.assignmentid)

class UserAssignments(db.Model):
    __tablename__ = 'user_assignments'
    userid = db.Column(db.Integer, db.ForeignKey('user.id'), primary_key=True)
    assignmentid = db.Column(db.Integer, db.ForeignKey('assignment.assignmentid'), primary_key=True)
    status = db.Column(db.Integer)
    progress = db.Column(db.String)

    def __repr__(self):
        return '<UserAssignments {}>'.format(self.userid, self.assignmentid)
diagram
I tried the following query, but that resulted only the assignments with a matched userassignment (progress). (the userid is given into the function)
results = db.session.query(Assignment, UserAssignments).join(UserAssignments, (UserAssignments.assignmentid == Assignment.assignmentid)&(UserAssignments.userid==userid), isouter=True).filter(UserAssignments.userid==userid).all()
I also tried the query without the filter, but that resulted in all userassignments (also from other users).
results = db.session.query(Assignment, UserAssignments).join(UserAssignments, (UserAssignments.assignmentid == Assignment.assignmentid)&(UserAssignments.userid==userid), isouter=True).all()
As said earlier, I want to achieve a result with all assignments listed, with the userassignment included when there is one for the current user.
A:
Try the next query (note that or_ needs to be imported):
from sqlalchemy import or_

results = db.session.query(
    Assignment,
    UserAssignments,
).join(
    UserAssignments,
    UserAssignments.assignmentid == Assignment.assignmentid,
    isouter=True,
).filter(
    or_(
        UserAssignments.userid == userid,
        UserAssignments.userid.is_(None),
    )
).all()
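A hedged usage sketch: each returned row is an (Assignment, UserAssignments) tuple, where the second element is None for assignments the user has no progress on:
for assignment, user_assignment in results:
    progress = user_assignment.progress if user_assignment is not None else None
    print(assignment.assignmentname, progress)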
| Left join in (flask)sqlalchemy with getting unmatched values and filter on the right table | I want to get a list of all assignments, with the progress of the user (the UserAssignments table) also in the result set. That means there should be a join between the assignments and userassignments table (where the assignmentid is equal), but also a filter to check if the progress is from the current user. The diagram of the database and the actual models are listed below.
class User(db.Model):  # class header restored from context (db.ForeignKey('user.id') below)
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(20), index=True, unique=True, nullable=False)
    password_hash = db.Column(db.String(128), nullable=False)
    roleid = db.Column(db.Integer, db.ForeignKey('role.roleid'), nullable=False)
    groups = db.relationship('Group', secondary=users_groups, lazy='dynamic')
    assignments = db.relationship('Assignment', secondary=users_assignments, lazy='dynamic')

class Assignment(db.Model):
    assignmentid = db.Column(db.Integer, primary_key=True)
    assignmentname = db.Column(db.String(128))
    assignmentranking = db.Column(db.Integer)
    assignmentquestion = db.Column(db.String, nullable=False)

    def __repr__(self):
        return '<Assignment {}>'.format(self.assignmentid)

class UserAssignments(db.Model):
    __tablename__ = 'user_assignments'
    userid = db.Column(db.Integer, db.ForeignKey('user.id'), primary_key=True)
    assignmentid = db.Column(db.Integer, db.ForeignKey('assignment.assignmentid'), primary_key=True)
    status = db.Column(db.Integer)
    progress = db.Column(db.String)

    def __repr__(self):
        return '<UserAssignments {}>'.format(self.userid, self.assignmentid)
diagram
I tried the following query, but that resulted only the assignments with a matched userassignment (progress). (the userid is given into the function)
results = db.session.query(Assignment, UserAssignments).join(UserAssignments, (UserAssignments.assignmentid == Assignment.assignmentid)&(UserAssignments.userid==userid), isouter=True).filter(UserAssignments.userid==userid).all()
I also tried the query without the filter, but that resulted in all userassignments (also from other users).
results = db.session.query(Assignment, UserAssignments).join(UserAssignments, (UserAssignments.assignmentid == Assignment.assignmentid)&(UserAssignments.userid==userid), isouter=True).all()
As said earlier, I want to achieve a result with all assignments listed, with the userassignment included when there is one for the current user.
| [
"try next query\nresults = db.session.query(\n Assignment, \n UserAssignments,\n).join(\n UserAssignments, \n UserAssignments.assignmentid == Assignment.assignmentid, \n isouter=True,\n).filter(\n or_(\n UserAssignments.userid == userid,\n UserAssignments.userid.is_(None),\n )\n).all()\n\n"
] | [
0
] | [] | [] | [
"flask_sqlalchemy",
"python",
"sqlalchemy"
] | stackoverflow_0074675033_flask_sqlalchemy_python_sqlalchemy.txt |
Q:
Sum by Factors From Codewars.com
Synopsis: my code runs well with simple lists, but after the 4 basic tests its execution times out.
Since I don't want to look up other people's solutions, I'm asking for help: can someone show me which part of the code is hurting the execution time, so I can focus on modifying only that part?
Note: I don't want a final solution, just to know which part of the code I have to change, please.
Exercise:
Given an array of positive or negative integers
I= [i1,..,in]
you have to produce a sorted array P of the form
[ [p, sum of all ij of I for which p is a prime factor (p positive) of ij] ...]
P will be sorted by increasing order of the prime numbers. The final result has to be given as a string in Java, C# or C++ and as an array of arrays in other languages.
Example:
I = [12, 15] # result = [[2, 12], [3, 27], [5, 15]]
[2, 3, 5] is the list of all prime factors of the elements of I, hence the result.
Notes: It can happen that a sum is 0 if some numbers are negative!
Example: I = [15, 30, -45] 5 divides 15, 30 and (-45) so 5 appears in the result, the sum of the numbers for which 5 is a factor is 0 so we have [5, 0] in the result amongst others.
`
def sum_for_list(lst):
    if len(lst) == 0:
        return []
    max = sorted(list(map(lambda x: abs(x), lst)), reverse = True)[0]

    #create the list with the primes, already filtered
    primes = []
    for i in range (2, max + 1):
        for j in range (2, i):
            if i % j == 0:
                break
        else:
            for x in lst:
                if x % i == 0:
                    primes.append([i])
                    break

    #i add the sums to the primes
    for i in primes:
        sum = 0
        for j in lst:
            if j % i[0] == 0:
                sum += j
        i.append(sum)

    return primes
`
Image
I tried to simplify the code as much as I could, but got the same result.
I also tried other ways to iterate in the first step:
# Find the maximum value in the list
from functools import reduce
max = reduce(lambda x,y: abs(x) if abs(x)>abs(y) else abs(y), lst)
A:
One possible cause of timeouts in your code is the use of the sorted function with the reverse = True argument. This sorts the input list in reverse order, which can be inefficient for large lists.
Instead of sorting the list in reverse order, you can use the built-in max function to find the maximum value in the list. This will avoid the need to sort the entire list, which can improve the performance of your code.
Here is an example of how you could modify your code to use the max function instead of sorting the list:
def sum_for_list(lst):
    if len(lst) == 0:
        return []

    # Find the largest absolute value in the list (no sorting needed)
    max_item = abs(lst[0])
    for i in range(1, len(lst)):
        max_item = max(max_item, abs(lst[i]))

    # Create the list with the primes, already filtered
    primes = []
    for i in range(2, max_item + 1):
        for j in range(2, i):
            if i % j == 0:
                break
        else:
            for x in lst:
                if x % i == 0:
                    primes.append(i)
                    break

    # Add the [prime, sum] pairs to the result
    result = []
    for p in primes:
        s = 0
        for j in lst:
            if j % p == 0:
                s += j
        result.append([p, s])

    return result
This code should be more efficient than the original code, and should be able to run without timing out on larger input lists. However, it is still not optimized for large lists, and you may need to consider further improvements if you need to handle very large inputs.
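If it still times out after that, the dominant cost is the trial-division primality loop, which tests every j below i. A hedged sketch that bounds the divisors by sqrt(i) — the helper name is mine:
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):  # divisors above sqrt(n) are redundant
        if n % d == 0:
            return False
    return True

def sum_for_list(lst):
    limit = max((abs(x) for x in lst), default=0)
    result = []
    for p in range(2, limit + 1):
        if not is_prime(p):
            continue
        divisible = [x for x in lst if x % p == 0]
        if divisible:                      # keep only primes dividing at least one element
            result.append([p, sum(divisible)])
    return result

print(sum_for_list([12, 15]))  # [[2, 12], [3, 27], [5, 15]]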
| Sum by Factors From Codewars.com | Synopsis: my code runs well with simple lists, but after the 4 basic tests its execution times out.
Since I don't want to look up other people's solutions, I'm asking for help: can someone show me which part of the code is hurting the execution time, so I can focus on modifying only that part?
Note: I don't want a final solution, just to know which part of the code I have to change, please.
Exercise:
Given an array of positive or negative integers
I= [i1,..,in]
you have to produce a sorted array P of the form
[ [p, sum of all ij of I for which p is a prime factor (p positive) of ij] ...]
P will be sorted by increasing order of the prime numbers. The final result has to be given as a string in Java, C# or C++ and as an array of arrays in other languages.
Example:
I = [12, 15] # result = [[2, 12], [3, 27], [5, 15]]
[2, 3, 5] is the list of all prime factors of the elements of I, hence the result.
Notes: It can happen that a sum is 0 if some numbers are negative!
Example: I = [15, 30, -45] 5 divides 15, 30 and (-45) so 5 appears in the result, the sum of the numbers for which 5 is a factor is 0 so we have [5, 0] in the result amongst others.
`
def sum_for_list(lst):
    if len(lst) == 0:
        return []
    max = sorted(list(map(lambda x: abs(x), lst)), reverse = True)[0]

    #create the list with the primes, already filtered
    primes = []
    for i in range (2, max + 1):
        for j in range (2, i):
            if i % j == 0:
                break
        else:
            for x in lst:
                if x % i == 0:
                    primes.append([i])
                    break

    #i add the sums to the primes
    for i in primes:
        sum = 0
        for j in lst:
            if j % i[0] == 0:
                sum += j
        i.append(sum)

    return primes
`
Image
I tried to simplify the code as much as I could, but got the same result.
I also tried other ways to iterate in the first step:
# Find the maximum value in the list
from functools import reduce
max = reduce(lambda x,y: abs(x) if abs(x)>abs(y) else abs(y), lst)
| [
"One possible cause of timeouts in your code is the use of the sorted function with the reverse = True argument. This sorts the input list in reverse order, which can be inefficient for large lists.\nInstead of sorting the list in reverse order, you can use the built-in max function to find the maximum value in the list. This will avoid the need to sort the entire list, which can improve the performance of your code.\nHere is an example of how you could modify your code to use the max function instead of sorting the list:\nimport math\n\ndef sum_for_list(lst):\n if len(lst) == 0:\n return []\n\n # Find the greatest common divisor of all numbers in the list\n maxItem = abs(lst[0])\n for i in range(1, len(lst)):\n maxItem = math.gcd(maxItem, abs(lst[i]))\n\n # Create the list with the primes, already filtered\n primes = []\n for i in range (2, maxItem + 1): \n for j in range (2, i): \n if i % j == 0: \n break \n else:\n for x in lst:\n if x % i == 0: \n primes.append(i)\n break\n\n # Add the sums to the primes\n sums = []\n for i in primes:\n sum = 0\n for j in lst:\n if j % i == 0:\n sum += j\n sums.append(sum)\n\n return sums\n\nThis code should be more efficient than the original code, and should be able to run without timing out on larger input lists. However, it is still not optimized for large lists, and you may need to consider further improvements if you need to handle very large inputs.\n"
] | [
0
] | [] | [] | [
"performance",
"python",
"time"
] | stackoverflow_0074675160_performance_python_time.txt |
Q:
Python Selenium with Salesforce - Cannot Seem to Access Certain Form Elements
Using Selenium to try and automate a bit of data entry with Salesforce. I have gotten my script to load a webpage, allow me to login, and click an "edit" button.
My next step is to enter data into a field. However, I keep getting an error about the field not being found. I've tried to identify it by XPATH, NAME, and ID and continue to get the error. For reference, my script works with a simple webpage like Google. I have a feeling that clicking the edit button in Salesforce opens either another window or frame (sorry if I'm using the wrong terminology). Things I've tried:
Looking for other frames (can't seem to find any in the HTML)
Having my script wait until the element is present (doesn't seem to work)
Any other options? Thank you!
A:
Salesforce's Lightning Experience (the new white-blue UI) is built with web components that hide their internal implementation details. You'd need to read up a bit about "shadow DOM"; it's not a "happy soup" of html and JS all chucked into the top page's html. It means CSS is limited to that one component and there's no risk of spilling over, or of overwriting another page area's JS function if you both declare a function with the same name - but it also means it's much harder to get into an element's internals.
You'll have to read up on how Selenium deals with the shadow DOM. Some companies claim they have working Lightning UI automated tests; I've heard good things about Provar, though I haven't used it myself.
For custom UI components an SF developer has the option to use "light DOM"; for standard UI you'll struggle a bit. If you're looking for some automation without fighting with Lightning Experience (especially since, with 3 releases/year, SF sometimes changes the structure of the generated html, breaking old tests) - you could consider switching over to the classic UI for the test? It'll be more accessible for Selenium. It won't be exactly the same thing the user does - but server-side errors like required fields and validation rules should fire all the same.
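If you do stay on Lightning, here's a minimal hedged sketch of piercing one shadow-DOM level with Selenium 4's shadow_root (Chromium 96+); the lightning-input host tag is an assumption — inspect your page for the actual component:
from selenium.webdriver.common.by import By

# find the web-component host, then descend into its shadow root
host = driver.find_element(By.CSS_SELECTOR, "lightning-input")  # assumed host element
shadow = host.shadow_root                                       # ShadowRoot (Selenium 4+)
inner = shadow.find_element(By.CSS_SELECTOR, "input")           # shadow roots only take CSS selectors
inner.send_keys("some value")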
| Python Selenium with Salesforce - Cannot Seem to Access Certain Form Elements | Using Selenium to try and automate a bit of data entry with Salesforce. I have gotten my script to load a webpage, allow me to login, and click an "edit" button.
My next step is to enter data into a field. However, I keep getting an error about the field not being found. I've tried to identify it by XPATH, NAME, and ID and continue to get the error. For reference, my script works with a simple webpage like Google. I have a feeling that clicking the edit button in Salesforce opens either another window or frame (sorry if I'm using the wrong terminology). Things I've tried:
Looking for other frames (can't seem to find any in the HTML)
Having my script wait until the element is present (doesn't seem to work)
Any other options? Thank you!
| [
"Salesforce's Lighting Experience (the new white-blue UI) is built with web components that hide their internal implementation details. You'd need to read up a bit about \"shadow DOM\", it's not a \"happy soup\" of html and JS all chucked into top page's html. Means that CSS is limited to that one component, there's no risk of spilling over or overwriting another page area's JS function if you both declare function with same name - but it also means it's much harder to get into element's internals.\nYou'll have to read up about how Selenium deals with Shadow DOM. Some companies claim they have working Lightning UI automated tests/ Heard good stuff about Provar, haven't used it myself.\nFor custom UI components SF developer has option to use \"light dom\", for standard UI you'll struggle a bit. If you're looking for some automation without fighting with Lighting Experience (especially that with 3 releases/year SF sometimes changes the structure of generated html, breaking old tests) - you could consider switching over to classic UI for the test? It'll be more accessible for Selenium, won't be exactly same thing the user does - but server-side errors like required fields, validation rules should fire all the same.\n"
] | [
0
] | [] | [] | [
"frames",
"html",
"python",
"salesforce",
"selenium"
] | stackoverflow_0074674569_frames_html_python_salesforce_selenium.txt |
Q:
ModuleNotFoundError: No module named 'translate' , even after "pip install translate"
I am having this error, even after "pip install translate" multiple times.
I am running my application in a docker container. I am a beginner , so please let me know, what mistake i am doing.
`
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 60, in run
return asyncio.run(self.serve(sockets=sockets))
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 67, in serve
config.load()
File "/usr/local/lib/python3.10/dist-packages/uvicorn/config.py", line 477, in load
self.loaded_app = import_from_string(self.app)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/importer.py", line 24, in import_from_string
raise exc from None
File "/usr/local/lib/python3.10/dist-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/./main.py", line 8, in <module>
from translate import Translator
ModuleNotFoundError: No module named 'translate'
when I am running "python3 -m pip install translate"
Requirement already satisfied: translate in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (3.6.1)
Requirement already satisfied: click in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (8.0.4)
Requirement already satisfied: libretranslatepy==2.1.1 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (2.1.1)
Requirement already satisfied: lxml in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (4.9.1)
Requirement already satisfied: requests in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (2.28.1)
Requirement already satisfied: charset-normalizer<3,>=2 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (1.26.12)
Requirement already satisfied: certifi>=2017.4.17 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (2022.9.24)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
`
When I did "pip list", translate is there but still this error is coming.
A:
You don't need the local pip install (that one goes into your Anaconda environment on the host); you need to install the package inside your Docker image.
Add to your Dockerfile:
RUN python3 -m pip install translate

And rebuild your image.
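A minimal sketch of what the relevant Dockerfile could look like — the base image, paths and extra packages are assumptions inferred from your traceback (Python 3.10, app at /usr/main.py, served by uvicorn):
FROM python:3.10

WORKDIR /usr
COPY . .

# install dependencies inside the image, not only on the host
RUN python3 -m pip install --no-cache-dir translate fastapi uvicorn

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]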
| ModuleNotFoundError: No module named 'translate' , even after "pip install translate" | I am having this error, even after "pip install translate" multiple times.
I am running my application in a docker container. I am a beginner , so please let me know, what mistake i am doing.
`
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 60, in run
return asyncio.run(self.serve(sockets=sockets))
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 67, in serve
config.load()
File "/usr/local/lib/python3.10/dist-packages/uvicorn/config.py", line 477, in load
self.loaded_app = import_from_string(self.app)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/importer.py", line 24, in import_from_string
raise exc from None
File "/usr/local/lib/python3.10/dist-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/./main.py", line 8, in <module>
from translate import Translator
ModuleNotFoundError: No module named 'translate'
when I am running "python3 -m pip install translate"
Requirement already satisfied: translate in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (3.6.1)
Requirement already satisfied: click in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (8.0.4)
Requirement already satisfied: libretranslatepy==2.1.1 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (2.1.1)
Requirement already satisfied: lxml in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (4.9.1)
Requirement already satisfied: requests in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from translate) (2.28.1)
Requirement already satisfied: charset-normalizer<3,>=2 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (1.26.12)
Requirement already satisfied: certifi>=2017.4.17 in /root/anaconda3/envs/newenvt1/lib/python3.9/site-packages (from requests->translate) (2022.9.24)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
`
When I did "pip list", translate is there but still this error is coming.
| [
"You need not local pip install, but install in your docker.\nAdd to Dockerfile\npython3 -m pip install translate\n\nAnd rebuild your image\n"
] | [
0
] | [] | [] | [
"docker",
"fastapi",
"pip",
"python",
"uvicorn"
] | stackoverflow_0074674226_docker_fastapi_pip_python_uvicorn.txt |
Q:
Why getting this Error selenium.common.exceptions.StaleElementReferenceException:
I know answers have already been posted to this same question, but I tried them and they are not working for me, because there have also been some updates in the Selenium code.
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=108.0.5359.95)
This happens when trying to send my search keyword into the input labelled "Skills Search" in the advanced search pop-up form.
Here is the URL: https://www.upwork.com/nx/jobs/search/modals/advanced-search?sort=recency&pageTitle=Advanced%20Search&_navType=modal&_modalInfo=%5B%7B%22navType%22%3A%22modal%22,%22title%22%3A%22Advanced%20Search%22,%22modalId%22%3A%221670133126002%22,%22channelName%22%3A%22advanced-search-modal%22%7D%5D
Here is my code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('F:\\work\\chromedriver_win32\\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.upwork.com/nx/jobs/search/?sort=recency"
driver.get(url)
key = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()
for i in range(len(key)):
    wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click()
    time.sleep(5)
    advanced_search_input = driver.find_element(By.XPATH,'//input[contains(@aria-labelledby,"tokenizer-label")]')
    # advanced_search_input.click()
    advanced_search_input.send_keys(key[i])
result giving now
A:
By clicking the '//input[contains(@aria-labelledby,"tokenizer-label")]' element, it is re-built on the page (a really strange way to build a page).
To make this code work I added a delay after clearing and clicking that input, and then fetched the element again.
The following code worked for me:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.upwork.com/nx/jobs/search/?sort=recency"
driver.get(url)
keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()
for i in range(len(keys)):
    wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click()
    wait.until(EC.element_to_be_clickable((By.XPATH,'//input[contains(@aria-labelledby,"tokenizer-label")]'))).clear()
    wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,"tokenizer-label")]'))).click()
    time.sleep(3)
    wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,"tokenizer-label")]'))).send_keys(keys[i])
    wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-search-btn"]'))).click()
UPD
In order to select multiple search values you need to insert each value, select the appearing autocomplete option and continue, as in the code below:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.upwork.com/nx/jobs/search/?sort=recency"
driver.get(url)
keys = ["Web Scraping", "Selenium WebDriver", "Data Scraping", "Selenium", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"] #
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click()
wait.until(EC.element_to_be_clickable((By.XPATH,'//input[contains(@aria-labelledby,"tokenizer-label")]'))).clear()
wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,"tokenizer-label")]'))).click()
time.sleep(3)
for i in range(len(keys)):
    wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,"tokenizer-label")]'))).send_keys(keys[i])
    time.sleep(2)
    wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#typeahead-input-control-35 .up-menu-item-text"))).click()
    time.sleep(4)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-search-btn"]'))).click()
UPD
Finally did it!
The problem of wrong inputs is caused by the page's slow response time.
To make it work I inserted a small delay between sending each character of the input string. In this case the result is as expected.
This is the final working code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.upwork.com/nx/jobs/search/?sort=recency"
driver.get(url)
keys = ["Web Scraping", "Selenium Webdriver", "Data Scraping", "Selenium", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click()
wait.until(EC.element_to_be_clickable((By.XPATH,'//input[contains(@aria-labelledby,"tokenizer-label")]'))).clear()
wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,"tokenizer-label")]'))).click()
time.sleep(3)
for i in range(len(keys)):
    search_field = wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,"tokenizer-label")]')))
    search_field.click()
    for character in keys[i]:
        search_field.send_keys(character)
        time.sleep(0.05)
    time.sleep(2)
    wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#typeahead-input-control-35 .up-menu-item-text"))).click()
    time.sleep(2)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-search-btn"]'))).click()
The result is
| Why getting this Error selenium.common.exceptions.StaleElementReferenceException: | I know answers have already been posted to this same question, but I tried them and they are not working for me, because there have also been some updates in the Selenium code.
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=108.0.5359.95)
This happens when trying to send my search keyword into the input labelled "Skills Search" in the advanced search pop-up form.
Here is the URL: https://www.upwork.com/nx/jobs/search/modals/advanced-search?sort=recency&pageTitle=Advanced%20Search&_navType=modal&_modalInfo=%5B%7B%22navType%22%3A%22modal%22,%22title%22%3A%22Advanced%20Search%22,%22modalId%22%3A%221670133126002%22,%22channelName%22%3A%22advanced-search-modal%22%7D%5D
Here is my code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('F:\\work\\chromedriver_win32\\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.upwork.com/nx/jobs/search/?sort=recency"
driver.get(url)
key = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"]
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()
for i in range(len(key)):
    wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click()
    time.sleep(5)
    advanced_search_input = driver.find_element(By.XPATH,'//input[contains(@aria-labelledby,"tokenizer-label")]')
    # advanced_search_input.click()
    advanced_search_input.send_keys(key[i])
result giving now
| [
"By clicking '//input[contains(@aria-labelledby,\"tokenizer-label\")]' element it is re-built on the page (really strange approach they built that page).\nTo make this code working I added a delay after clearing and clicking that input and then get that element again.\nThe following code worked for me:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.upwork.com/nx/jobs/search/?sort=recency\"\ndriver.get(url)\n\nkeys = [\"Web Scraping\",\"Selenium WebDriver\", \"Data Scraping\", \"selenium\", \"Web Crawling\", \"Beautiful Soup\", \"Scrapy\", \"Data Extraction\", \"Automation\"]\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))\ntime.sleep(5)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()\nfor i in range(len(keys)):\n wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,\"Advanced Search\")]'))).click()\n wait.until(EC.element_to_be_clickable((By.XPATH,'//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).clear()\n wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).click()\n time.sleep(3)\n wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).send_keys(keys[i])\n wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-search-btn\"]'))).click()\n\nUPD\nIn order to select multiple search values you need to insert each value, select the appearing autocomplete option and continue, as in the code below:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.upwork.com/nx/jobs/search/?sort=recency\"\ndriver.get(url)\n\nkeys = [\"Web Scraping\", \"Selenium WebDriver\", \"Data Scraping\", \"Selenium\", \"Beautiful Soup\", \"Scrapy\", \"Data Extraction\", \"Automation\"] #\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))\ntime.sleep(5)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,\"Advanced Search\")]'))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH,'//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).clear()\nwait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).click()\ntime.sleep(3)\nfor i in range(len(keys)):\n 
wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).send_keys(keys[i])\n time.sleep(2)\n wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"#typeahead-input-control-35 .up-menu-item-text\"))).click()\n time.sleep(4)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-search-btn\"]'))).click()\n\nUPD\nFinally did it!\nThe problem with wrong inputs caused by too slow response time of that page.\nTo make it working I inserted a small delay between inserting each character of the input string. In this case the result is as expected.\nThis is the final working code:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.upwork.com/nx/jobs/search/?sort=recency\"\ndriver.get(url)\n\nkeys = [\"Web Scraping\", \"Selenium Webdriver\", \"Data Scraping\", \"Selenium\", \"Beautiful Soup\", \"Scrapy\", \"Data Extraction\", \"Automation\"]\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))\ntime.sleep(5)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,\"Advanced Search\")]'))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH,'//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).clear()\nwait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,\"tokenizer-label\")]'))).click()\ntime.sleep(3)\nfor i in range(len(keys)):\n search_field = wait.until(EC.element_to_be_clickable((By.XPATH, '//input[contains(@aria-labelledby,\"tokenizer-label\")]')))\n search_field.click()\n for character in keys[i]:\n search_field.send_keys(character)\n time.sleep(0.05)\n time.sleep(2)\n wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"#typeahead-input-control-35 .up-menu-item-text\"))).click()\n time.sleep(2)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-search-btn\"]'))).click()\n\nThe result is\n\n"
] | [
1
] | [] | [] | [
"python",
"selenium",
"selenium_webdriver",
"staleelementreferenceexception",
"xpath"
] | stackoverflow_0074675192_python_selenium_selenium_webdriver_staleelementreferenceexception_xpath.txt |
Q:
parse list containing html-like elements into nested json using Python
I'm not the best at converting certain sections of a list to nested Json and was hoping for some guidance. I have a list containing data like below:
['<h5> 1|',
'<h6>Type of Care|',
'<h6>SA|Substance use treatment|',
'<h6>DT|Detoxification |',
'<h6>HH|Transitional housing, halfway house, or sober home|',
'<h6>SUMH|Treatment for co-occurring serious mental health | illness/serious emotional disturbance and substance | use disorders|',
'',
'<h5> 2|',
'<h6>Telemedicine|',
'<h6>TELE|TelemedicineTelemedicine/telehealth|',
'']
I want to first remove all records in the list that have no content, then I want to convert the records that contain a tag like "<H5>" into the key and group the records that contain "<h6>" into values like this json output:
"codekey": [
{
"category": [
{
"key": 1,
"value": "Type of Care"
}
],
"codes": [
{
"key": "SA",
"value": "Substance use treatment"
},
{
"key": "DT",
"value": "Detoxification"
},
{
"key": "HH",
"value": "Transitional housing, halfway house, or sober home"
},
{
"key": "SUMH",
"value": "Treatment for co-occurring serious mental health | illness/serious emotional disturbance and substance | use disorders|"
}
]
},
{
"category": [
{
"key": 2,
"value": "Telemedicine"
}
],
"codes": [
{
"key": "TELE",
"value": "TelemedicineTelemedicine/telehealth"
}
]
}
]
I think I need to perform a loop but I'm getting stuck on how to create the 'key/value' relationship. I think I also need to use a regex but I'm just not the best at Python to conceptually convert the data to the required output. Any advice on training I could look up to do this OR any preliminary suggestions on how to get started? Thank you!
A:
Assuming your format remains constant, here's a flexible solution that is configurable:
class Separator():
    def __init__(self, data, title, sep, splitter):
        self.data = data # the data
        self.title = title # the starting in your case "<h5>"
        self.sep = sep # the point where you want to update res
        self.splitter = splitter # the separator between key | value
        self.res = [] # final res
        self.tempDict = {} # tempDict to append
    def clearString(self, string, *args):
        for arg in args:
            string = string.replace(arg, '') # replace every arg to ''
        return string.strip()
    def updateDict(self, val):
        if val == self.sep:
            self.res.append(self.tempDict) # update res
            self.tempDict = {} # renew tempDict to append
        else:
            try:
                if self.title in val: # check if it "<h5>" in your case
                    self.tempDict["category"] = [{"key": self.clearString(val, self.title, self.splitter), "value": self.clearString(self.data[self.data.index(val)+1],'<h6>', '|')}] # get the next value
                elif self.tempDict["category"][0]["value"] != self.clearString(val, '<h6>', '|'): # check if it is not the "value" of h6 in "category"
                    val = self.clearString(val,"<h6>").split("|")
                    if "codes" not in self.tempDict.keys(): self.tempDict["codes"] = [] # create key if not there
                    self.tempDict["codes"].append({"key": val[0], "value": val[1]})
            except: # avoid Exceptions
                pass
        return self.res

sep = Separator(data, '<h5>', '', '|')  # renamed from "object" to avoid shadowing the builtin
for val in data:
    res = sep.updateDict(val)
print(res)
Output for your Sample Input Provided:
[
{
'category': [{'key': '1', 'value': 'Type of Care'}],
'codes': [
{'key': 'SA', 'value': 'Substance use treatment'},
{'key': 'DT', 'value': 'Detoxification '},
{
'key': 'HH',
'value': 'Transitional housing, halfway house, or sober home',
},
{
'key': 'SUMH',
'value': 'Treatment for co-occurring serious mental health ',
},
],
},
{
'category': [{'key': '2', 'value': 'Telemedicine'}],
'codes': [
{'key': 'TELE', 'value': 'TelemedicineTelemedicine/telehealth'},
],
},
]
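To get exactly the "codekey" wrapper shown in the question, a short hedged follow-up serializes res with the standard json module:
import json

final = {"codekey": res}              # wrap as the question's top-level structure
print(json.dumps(final, indent=2))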
| parse list containing html-like elements into nested json using Python | I'm not the best at converting certain sections of a list to nested Json and was hoping for some guidance. I have a list containing data like below:
['<h5> 1|',
'<h6>Type of Care|',
'<h6>SA|Substance use treatment|',
'<h6>DT|Detoxification |',
'<h6>HH|Transitional housing, halfway house, or sober home|',
'<h6>SUMH|Treatment for co-occurring serious mental health | illness/serious emotional disturbance and substance | use disorders|',
'',
'<h5> 2|',
'<h6>Telemedicine|',
'<h6>TELE|TelemedicineTelemedicine/telehealth|',
'']
I want to first remove all records in the list that have no content, then I want to convert the records that contain a tag like "<H5>" into the key and group the records that contain "<h6>" into values like this json output:
"codekey": [
{
"category": [
{
"key": 1,
"value": "Type of Care"
}
],
"codes": [
{
"key": "SA",
"value": "Substance use treatment"
},
{
"key": "DT",
"value": "Detoxification"
},
{
"key": "HH",
"value": "Transitional housing, halfway house, or sober home"
},
{
"key": "SUMH",
"value": "Treatment for co-occurring serious mental health | illness/serious emotional disturbance and substance | use disorders|"
}
]
},
{
"category": [
{
"key": 2,
"value": "Telemedicine"
}
],
"codes": [
{
"key": "TELE",
"value": "TelemedicineTelemedicine/telehealth"
}
]
}
]
I think I need to perform a loop but I'm getting stuck on how to create the 'key/value' relationship. I think I also need to use a regex but I'm just not the best at Python to conceptually convert the data to the required output. Any advice on training I could look up to do this OR any preliminary suggestions on how to get started? Thank you!
| [
"Considering your format remains constant. Here's a flexible solution that is configurable:\nclass Separator():\n def __init__(self, data, title, sep, splitter):\n self.data = data # the data\n self.title = title # the starting in your case \"<h5>\"\n self.sep = sep # the point where you want to update res\n self.splitter = splitter # the separator between key | value\n self.res = [] # final res\n self.tempDict = {} # tempDict to append\n def clearString(self, string, *args):\n for arg in args:\n string = string.replace(arg, '') # replace every arg to ''\n return string.strip()\n def updateDict(self, val):\n if val == self.sep:\n self.res.append(self.tempDict) # update res\n self.tempDict = {} # renew tempDict to append\n else:\n try:\n if self.title in val: # check if it \"<h5>\" in your case\n self.tempDict[\"category\"] = [{\"key\": self.clearString(val, self.title, self.splitter), \"value\": self.clearString(self.data[self.data.index(val)+1],'<h6>', '|')}] # get the next value\n elif self.tempDict[\"category\"][0][\"value\"] != self.clearString(val, '<h6>', '|'): # check if it is not the \"value\" of h6 in \"category\"\n val = self.clearString(val,\"<h6>\").split(\"|\")\n if \"codes\" not in self.tempDict.keys(): self.tempDict[\"codes\"] = [] # create key if not there\n self.tempDict[\"codes\"].append({\"key\": val[0], \"value\": val[1]})\n except: # avoid Exceptions\n pass\n return self.res\nobject = Separator(data, '<h5>', '', '|')\nfor val in data:\n res = object.updateDict(val)\nprint(res)\n\nOutput for your Sample Input Provided:\n[\n {\n 'category': [{'key': '1', 'value': 'Type of Care'}],\n 'codes': [\n {'key': 'SA', 'value': 'Substance use treatment'},\n {'key': 'DT', 'value': 'Detoxification '},\n {\n 'key': 'HH',\n 'value': 'Transitional housing, halfway house, or sober home',\n },\n {\n 'key': 'SUMH',\n 'value': 'Treatment for co-occurring serious mental health ',\n },\n ],\n },\n {\n 'category': [{'key': '2', 'value': 'Telemedicine'}],\n 'codes': [\n {'key': 'TELE', 'value': 'TelemedicineTelemedicine/telehealth'},\n ],\n },\n]\n\n"
] | [
0
] | [] | [] | [
"json",
"python"
] | stackoverflow_0074661204_json_python.txt |
Q:
Transform and fill a dataframe depending on occurence of values within the dataframe
I have a dataframe such as :
Names1 Gene_name Status
SP1 GENE1 0
SP1 GENE1 1
SP1 GENE1 1
SP1 GENE1 2
SP1 GENE1 2
SP1 GENE2 0
SP3 GENE2 0
SP1 GENE2 1
SP2 GENE2 2
SP4 GENE3 1
SP4 GENE3 2
SP5 GENE3 0
SP5 GENE3 0
Then I would like to fill a new dataframe where each Gene_name is a column, and each Names is a row :
Names GENE1 GENE2 GENE3
SP1
SP2
SP3
SP4
SP5
and fill cell values depending on the Status for each Names group:
if only 0 > value = 0
if only 1 > value = 1
if both 0 & 1 > value = 0-1
if both 0 & 2 > value = 0-2
if both 1 & 2 > value = 1-2
if both 0 & 1 & 2 > value = 0-1-2
So for example GENE1 in SP1 both present a 0,1 and 2 status, so I fill 0-1-2 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2
SP2
SP3
SP4
SP5
then, SP2,SP3,SP4 and SP5 do not have value for the GENE1, so I put NA :
Names GENE1 GENE2 GENE3
SP1 0-1-2
SP2 NA
SP3 NA
SP4 NA
SP5 NA
Then for the GENE2:
GENE2 in SP1 both present a 0 and 1 status, so I fill 0-1 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA
SP3 NA
SP4 NA
SP5 NA
GENE2 in SP2 present only a value 2 status, so I fill 2 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA 2
SP3 NA
SP4 NA
SP5 NA
GENE2 in SP3 present only a value 0 status, so I fill 0 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA 2
SP3 NA 0
SP4 NA
SP5 NA
and the other Names have no GENE2 values, so I put NA:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA 2
SP3 NA 0
SP4 NA NA
SP5 NA NA
and so on...
At the end I should get a full dataframe such as :
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1 NA
SP2 NA 2 NA
SP3 NA 0 NA
SP4 NA NA 1-2
SP5 NA NA 0
Does someone have an idea, please?
Here is the dict format of the dataframe, if it helps:
{'Names1': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP1', 4: 'SP1', 5: 'SP1', 6: 'SP3', 7: 'SP1', 8: 'SP2', 9: 'SP4', 10: 'SP4', 11: 'SP5', 12: 'SP5'}, 'Gene_name': {0: 'GENE1', 1: 'GENE1', 2: 'GENE1', 3: 'GENE1', 4: 'GENE1', 5: 'GENE2', 6: 'GENE2', 7: 'GENE2', 8: 'GENE2', 9: 'GENE3', 10: 'GENE3', 11: 'GENE3', 12: 'GENE3'}, 'Status': {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 0, 6: 0, 7: 1, 8: 2, 9: 1, 10: 2, 11: 0, 12: 0}}
A:
Code
g = df.groupby(['Names1', 'Gene_name'])
g['Status'].agg(lambda x: '-'.join(x.astype('str').sort_values().unique())).unstack()
output
Gene_name GENE1 GENE2 GENE3
Names1
SP1 0-1-2 0-1 NaN
SP2 NaN 2 NaN
SP3 NaN 0 NaN
SP4 NaN NaN 1-2
SP5 NaN NaN 0
To produce the exact desired output, rename the axes:
(g['Status'].agg(lambda x: '-'.join(x.astype('str').sort_values().unique()))
.unstack().rename_axis(index='Name', columns=''))
result:
GENE1 GENE2 GENE3
Name
SP1 0-1-2 0-1 NaN
SP2 NaN 2 NaN
SP3 NaN 0 NaN
SP4 NaN NaN 1-2
SP5 NaN NaN 0
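For completeness, here is the groupby approach run end to end on the dict given in the question (variable names are illustrative):
import pandas as pd

data = {'Names1': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP1', 4: 'SP1', 5: 'SP1', 6: 'SP3', 7: 'SP1', 8: 'SP2', 9: 'SP4', 10: 'SP4', 11: 'SP5', 12: 'SP5'}, 'Gene_name': {0: 'GENE1', 1: 'GENE1', 2: 'GENE1', 3: 'GENE1', 4: 'GENE1', 5: 'GENE2', 6: 'GENE2', 7: 'GENE2', 8: 'GENE2', 9: 'GENE3', 10: 'GENE3', 11: 'GENE3', 12: 'GENE3'}, 'Status': {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 0, 6: 0, 7: 1, 8: 2, 9: 1, 10: 2, 11: 0, 12: 0}}
df = pd.DataFrame(data)

# sorted, deduplicated statuses per (name, gene) pair, then pivoted to wide form
out = (df.groupby(['Names1', 'Gene_name'])['Status']
         .agg(lambda x: '-'.join(x.astype('str').sort_values().unique()))
         .unstack())
print(out)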
A:
The above solution is neater, but here is an alternative, loop-based approach to the same problem:
import numpy as np
import pandas as pd

names = np.sort(df['Names1'].unique())
genes = np.sort(df['Gene_name'].unique())
result_df = pd.DataFrame({'Names': names}).set_index('Names')

for gene in genes:
    values = []
    for name in names:
        # np.unique returns the sorted, deduplicated statuses for this (name, gene) pair
        statuses = np.unique(df.loc[(df['Names1'] == name) & (df['Gene_name'] == gene), 'Status'])
        result = '-'.join(map(str, statuses))
        values.append(result if result else np.nan)
    result_df[gene] = values

result_df
Output
GENE1 GENE2 GENE3
Names
SP1 0-1-2 0-1 NaN
SP2 NaN 2 NaN
SP3 NaN 0 NaN
SP4 NaN NaN 1-2
SP5 NaN NaN 0
A:
Using pivot_table, the solution can look like this:
df.pivot_table('Status','Names1','Gene_name',
aggfunc=lambda x: '-'.join(x.astype(str).unique())).rename_axis(columns=None)
Output:
GENE1 GENE2 GENE3
Names1
SP1 0-1-2 0-1 NaN
SP2 NaN 2 NaN
SP3 NaN 0 NaN
SP4 NaN NaN 1-2
SP5 NaN NaN 0
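One caveat worth noting: unique() here preserves first-appearance order rather than sorting, so an input where a 2 precedes a 0 for the same pair would yield '2-0'. Sorting first makes the joined string deterministic:
df.pivot_table('Status', 'Names1', 'Gene_name',
               aggfunc=lambda x: '-'.join(x.astype(str).sort_values().unique())).rename_axis(columns=None)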
| Transform and fill a dataframe depending on occurrence of values within the dataframe | I have a dataframe such as:
Names1 Gene_name Status
SP1 GENE1 0
SP1 GENE1 1
SP1 GENE1 1
SP1 GENE1 2
SP1 GENE1 2
SP1 GENE2 0
SP3 GENE2 0
SP1 GENE2 1
SP2 GENE2 2
SP4 GENE3 1
SP4 GENE3 2
SP5 GENE3 0
SP5 GENE3 0
Then I would like to fill a new dataframe where each Gene_name is a column, and each Names is a row :
Names GENE1 GENE2 GENE3
SP1
SP2
SP3
SP4
SP5
and fill the cell values depending on the Status for each Names group:
if only 0 > value = 0
if only 1 > value = 1
if both 0 & 1 > value = 0-1
if both 0 & 2 > value = 0-2
if both 1 & 2 > value = 1-2
if both 0 & 1 & 2 > value = 0-1-2
So for example, GENE1 in SP1 presents a 0, 1 and 2 status, so I fill 0-1-2 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2
SP2
SP3
SP4
SP5
then, SP2, SP3, SP4 and SP5 do not have a value for GENE1, so I put NA:
Names GENE1 GENE2 GENE3
SP1 0-1-2
SP2 NA
SP3 NA
SP4 NA
SP5 NA
Then for the GENE2:
GENE2 in SP1 presents both a 0 and 1 status, so I fill 0-1 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA
SP3 NA
SP4 NA
SP5 NA
GENE2 in SP2 presents only a status of 2, so I fill 2 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA 2
SP3 NA
SP4 NA
SP5 NA
GENE2 in SP3 presents only a status of 0, so I fill 0 within the cell:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA 2
SP3 NA 0
SP4 NA
SP5 NA
and the other Names have no GENE2 values, so I put NA:
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1
SP2 NA 2
SP3 NA 0
SP4 NA NA
SP5 NA NA
and so on...
At the end I should get a full dataframe such as :
Names GENE1 GENE2 GENE3
SP1 0-1-2 0-1 NA
SP2 NA 2 NA
SP3 NA 0 NA
SP4 NA NA 1-2
SP5 NA NA 0
Does someone have an idea, please?
Here is the dict format of the dataframe, if it can help:
{'Names1': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP1', 4: 'SP1', 5: 'SP1', 6: 'SP3', 7: 'SP1', 8: 'SP2', 9: 'SP4', 10: 'SP4', 11: 'SP5', 12: 'SP5'}, 'Gene_name': {0: 'GENE1', 1: 'GENE1', 2: 'GENE1', 3: 'GENE1', 4: 'GENE1', 5: 'GENE2', 6: 'GENE2', 7: 'GENE2', 8: 'GENE2', 9: 'GENE3', 10: 'GENE3', 11: 'GENE3', 12: 'GENE3'}, 'Status': {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 0, 6: 0, 7: 1, 8: 2, 9: 1, 10: 2, 11: 0, 12: 0}}
| [
"Code\ng = df.groupby(['Names1', 'Gene_name'])\ng['Status'].agg(lambda x: '-'.join(x.astype('str').sort_values().unique())).unstack()\n\noutput\nGene_name GENE1 GENE2 GENE3\nNames1 \nSP1 0-1-2 0-1 NaN\nSP2 NaN 2 NaN\nSP3 NaN 0 NaN\nSP4 NaN NaN 1-2\nSP5 NaN NaN 0\n\n\nmake desired output\n(g['Status'].agg(lambda x: '-'.join(x.astype('str').sort_values().unique()))\n .unstack().rename_axis(index='Name', columns=''))\n\nresult:\n GENE1 GENE2 GENE3\nName \nSP1 0-1-2 0-1 NaN\nSP2 NaN 2 NaN\nSP3 NaN 0 NaN\nSP4 NaN NaN 1-2\nSP5 NaN NaN 0\n\n",
"The above solution would be neater, but just wanted to put out an alternative solution to the same:\nimport numpy as np\n\nnames = df['Names1'].unique() \ngenes = df['Gene_name'].unique() \nresult_df = pd.DataFrame({'Names': names}) \n\nfor gene in genes: \n values = []\n for name in names: \n result = '-'.join(map(str, count_df.loc[(count_df['Names1'] == name) & (count_df['Gene_name'] == gene), ['Status']]['Status'].to_numpy()))\n if result == '':\n values.append(np.nan) \n else:\n values.append(result) \n\n result_df[gene] = values \n\nresult_df \n\nOutput\n GENE1 GENE2 GENE3\nNames \nSP1 0-1-2 0-1 NaN\nSP2 NaN 2 NaN\nSP3 NaN 0 NaN\nSP4 NaN NaN 1-2\nSP5 NaN NaN 0\n\n",
"with using pivot table the solutiont can looks like this:\ndf.pivot_table('Status','Names1','Gene_name',\n aggfunc=lambda x: '-'.join(x.astype(str).unique())).rename_axis(columns=None)\n>>>\n'''\n GENE1 GENE2 GENE3\nNames1 \nSP1 0-1-2 0-1 NaN\nSP2 NaN 2 NaN\nSP3 NaN 0 NaN\nSP4 NaN NaN 1-2\nSP5 NaN NaN 0\n\n"
] | [
3,
1,
0
] | [] | [] | [
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074674654_pandas_python_python_3.x.txt |
Q:
limiting the number of decimal places in python pandas table
I was trying to rewrite a CSV file using the pandas module in Python. I tried to multiply the first column (excluding the title) by 60 as below,
f = "001.csv"
Urbs_Data=pd.read_csv(f,header=None)
Urbs_Data=Urbs_Data.replace("Time_hrs","Time_min")
Urbs_Data.loc[1:,0]=Urbs_Data.loc[1:,0].astype(float)
Urbs_Data.loc[1:,0]*=60
It gives me some funny numbers for the first column, such as
124.98000000000002,462.67
130.01999999999998,460.34
135.0,454.36
139.98000000000002,443.29
Is there any way to limit the number of decimal places for those numbers (to 2)? I tried to use the normal round function, but it does not work for me.
A:
The DataFrame round method should work...
import numpy as np
import pandas as pd
some_numbers = np.random.ranf(5)
df = pd.DataFrame({'random_numbers':some_numbers})
rounded_df = df.round(decimals=2)
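If only the printed display needs trimming while the stored values keep full precision, pandas also exposes a session-wide display option (this affects formatting only, not the data):
import pandas as pd

pd.set_option('display.float_format', '{:.2f}'.format)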
A:
import pandas as pd

# file name
f = "001.csv"

# load the file into a DataFrame
Urbs_Data = pd.read_csv(f, header=None)

# round off all the numeric values to two decimal places
Urbs_Data = Urbs_Data.round(decimals=2)

This rounding off will be applied to all the numeric columns.
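The trailing digits in the question (e.g. 124.98000000000002) are ordinary binary floating-point representation error from the multiplication, so rounding afterwards is the usual fix. Applied to the snippet from the question, rounding just the first column might look like this (a sketch reusing the question's variable names):
import pandas as pd

f = "001.csv"
Urbs_Data = pd.read_csv(f, header=None)
Urbs_Data = Urbs_Data.replace("Time_hrs", "Time_min")
# round(2) trims the float noise introduced by the multiplication
Urbs_Data.loc[1:, 0] = (Urbs_Data.loc[1:, 0].astype(float) * 60).round(2)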
| limiting the number of decimal places in python pandas table | I was trying to rewrite a CSV file using the pandas module in Python. I tried to multiply the first column (excluding the title) by 60 as below,
f = "001.csv"
Urbs_Data=pd.read_csv(f,header=None)
Urbs_Data=Urbs_Data.replace("Time_hrs","Time_min")
Urbs_Data.loc[1:,0]=Urbs_Data.loc[1:,0].astype(float)
Urbs_Data.loc[1:,0]*=60
It gives me some funny numbers for the first column, such as
124.98000000000002,462.67
130.01999999999998,460.34
135.0,454.36
139.98000000000002,443.29
Is there any way to limit the number of decimal places for those numbers (to 2)? I tried to use the normal round function, but it does not work for me.
| [
"The DataFrame round method should work...\nimport numpy as np\nimport pandas as pd \n\nsome_numbers = np.random.ranf(5)\n\ndf = pd.DataFrame({'random_numbers':some_numbers})\n\nrounded_df = df.round(decimals=2)\n\n",
"import numpy as np\nimport pandas as pd \n\n#fileName\nf=001.csv\n\n#Load File to Df\nUrbs_Data=pd.read_csv(f,header=None)\n\n#Round of all the numeric values to the specified decimal value\nUrbs_Data= Urbs_Data.round(decimals=3)\n\nThis rounding off will be applied on all the Numeric columns\n"
] | [
31,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0054509060_dataframe_pandas_python.txt |
Q:
How to implement a smooth clamp function in python?
The clamp function is clamp(x, min, max) = min if x < min, max if x > max, else x
I need a function that behaves like the clamp function, but is smooth (i.e. has a continuous derivative).
A:
What you are looking for is something like the Smoothstep function, which has a free parameter N, giving the "smoothness", i.e. how many derivatives should be continuous. On the unit interval it is the polynomial S_N(x) = x^(N+1) * sum_{n=0}^{N} C(N+n, n) * C(2N+1, N-n) * (-x)^n, clamped to 0 below x_min and to 1 above x_max.
This is used in several libraries and can be implemented in numpy as
import numpy as np
from scipy.special import comb
def smoothstep(x, x_min=0, x_max=1, N=1):
x = np.clip((x - x_min) / (x_max - x_min), 0, 1)
result = 0
for n in range(0, N + 1):
result += comb(N + n, n) * comb(2 * N + 1, N - n) * (-x) ** n
result *= x ** (N + 1)
return result
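As a quick sanity check, values below x_min map to 0, values above x_max map to 1, and by symmetry the midpoint maps to 0.5 for every N:
print(smoothstep(np.array([-1.0, 0.5, 2.0]), N=3))
# [0.  0.5 1. ]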
It reduces to the regular clamp function given N=0 (0 times differentiable), and gives increasing smoothness as you increase N. You can visualize it like this:
import matplotlib.pyplot as plt
x = np.linspace(-0.5, 1.5, 1000)
for N in range(0, 5):
y = smoothstep(x, N=N)
plt.plot(x, y, label=str(N))
plt.legend()
which gives a family of progressively smoother S-curves between 0 and 1, each flatter at the endpoints as N increases.
A:
Normal clamp:
import numpy as np

np.clip(x, mi, mx)
Smoothclamp (guaranteed to agree with normal clamp for x < min and x > max):
def smoothclamp(x, mi, mx): return mi + (mx-mi)*(lambda t: np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) )
Sigmoid (Approximates clamp, never smaller than min, never larger than max)
def sigmoid(x,mi, mx): return mi + (mx-mi)*(lambda t: (1+200**(-t+0.5))**(-1) )( (x-mi)/(mx-mi) )
For some purposes Sigmoid will be better than Smoothclamp because Sigmoid is an invertible function - no information is lost.
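Since it is invertible, the inverse can be written in closed form by solving the definition above for x (a sketch; valid for mi < y < mx):
def sigmoid_inverse(y, mi, mx):
    u = (y - mi) / (mx - mi)
    # undo the logistic: 200**(0.5 - t) = 1/u - 1
    t = 0.5 - np.log(1.0 / u - 1.0) / np.log(200.0)
    return mi + (mx - mi) * t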
For other purposes, you may need to be certain that f(x) = xmax for all x > xmax - in that case Smoothclamp is better. Also, as mentioned in another answer, there is a whole family of Smoothclamp functions, though the one given here is adequate for my purposes (no special properties other than a smooth derivative needed)
Plot them:
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
x = np.linspace(-4,7,1000)
ax.plot(x, np.clip(x, -1, 4),'k-', lw=2, alpha=0.8, label='clamp')
ax.plot(x, smoothclamp(x, -1, 4),'g-', lw=3, alpha=0.5, label='smoothclamp')
ax.plot(x, sigmoid(x, -1, 4),'b-', lw=3, alpha=0.5, label='sigmoid')
plt.legend(loc='upper left')
plt.show()
Also of potential use is the arithmetic mean of these two:
def clampoid(x, mi, mx): return mi + (mx-mi)*(lambda t: 0.5*(1+200**(-t+0.5))**(-1) + 0.5*np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) )
A:
As an option, if you want to make sure that there is a correspondence with the clamp function, you can convolve the normal clamp function with a smooth bell-like function such as Lorentzian or Gaussian.
This will guarantee the correspondence between the normal clamp function and its smoothed version. The smoothness itself will be defined by the underlying smooth function you choose to use in the convolution.
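A minimal sketch of that idea, assuming a Gaussian kernel and scipy's 1-D Gaussian filter (the grid width, sample count and sigma are illustrative choices):
import numpy as np
from scipy.ndimage import gaussian_filter1d

def convolved_clamp(x, mi, mx, sigma=0.1, n=2001):
    # sample the hard clamp on a grid wide enough to contain both transition zones
    grid = np.linspace(mi - 5 * sigma, mx + 5 * sigma, n)
    hard = np.clip(grid, mi, mx)
    # sigma is given in x-units, so convert it to grid samples
    step = grid[1] - grid[0]
    smooth = gaussian_filter1d(hard, sigma / step, mode='nearest')
    # evaluate the smoothed curve at the requested points
    return np.interp(x, grid, smooth)

Away from the corners this coincides with np.clip, and the transition width is controlled entirely by sigma.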
| How to implement a smooth clamp function in python? | The clamp function is clamp(x, min, max) = min if x < min, max if x > max, else x
I need a function that behaves like the clamp function, but is smooth (i.e. has a continuous derivative).
| [
"What you are looking for is something like the Smoothstep function, which has a free parameter N, giving the \"smoothness\", i.e. how many derivatives should be continuous. It is defined as such:\n\nThis is used in several libraries and can be implemented in numpy as\nimport numpy as np\nfrom scipy.special import comb\n\ndef smoothstep(x, x_min=0, x_max=1, N=1):\n x = np.clip((x - x_min) / (x_max - x_min), 0, 1)\n\n result = 0\n for n in range(0, N + 1):\n result += comb(N + n, n) * comb(2 * N + 1, N - n) * (-x) ** n\n\n result *= x ** (N + 1)\n\n return result\n\nIt reduces to the regular clamp function given N=0 (0 times differentiable), and gives increasing smoothness as you increase N. You can visualize it like this:\nimport matplotlib.pyplot as plt\n\nx = np.linspace(-0.5, 1.5, 1000)\n\nfor N in range(0, 5):\n y = smoothstep(x, N=N)\n plt.plot(x, y, label=str(N))\n\nplt.legend()\n\nwhich gives this result:\n\n",
"Normal clamp:\nnp.clip(x, mi, mx)\n\nSmoothclamp (guaranteed to agree with normal clamp for x < min and x > max):\ndef smoothclamp(x, mi, mx): return mi + (mx-mi)*(lambda t: np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) )\n\nSigmoid (Approximates clamp, never smaller than min, never larger than max)\ndef sigmoid(x,mi, mx): return mi + (mx-mi)*(lambda t: (1+200**(-t+0.5))**(-1) )( (x-mi)/(mx-mi) )\n\nFor some purposes Sigmoid will be better than Smoothclamp because Sigmoid is an invertible function - no information is lost. \nFor other purposes, you may need to be certain that f(x) = xmax for all x > xmax - in that case Smoothclamp is better. Also, as mentioned in another answer, there is a whole family of Smoothclamp functions, though the one given here is adequate for my purposes (no special properties other than a smooth derivative needed)\nPlot them:\nimport numpy as np\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots(1, 1)\nx = np.linspace(-4,7,1000)\nax.plot(x, np.clip(x, -1, 4),'k-', lw=2, alpha=0.8, label='clamp')\nax.plot(x, smoothclamp(x, -1, 4),'g-', lw=3, alpha=0.5, label='smoothclamp')\nax.plot(x, sigmoid(x, -1, 4),'b-', lw=3, alpha=0.5, label='sigmoid')\nplt.legend(loc='upper left')\nplt.show()\n\n\nAlso of potential use is the arithmetic mean of these two: \ndef clampoid(x, mi, mx): return mi + (mx-mi)*(lambda t: 0.5*(1+200**(-t+0.5))**(-1) + 0.5*np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) )\n\n",
"As an option, if you want to make sure that there is a correspondence with the clamp function, you can convolve the normal clamp function with a smooth bell-like function such as Lorentzian or Gaussian.\nThis will guarantee the correspondence between the normal clamp function and its smoothed version. The smoothness itself will be defined by the underlying smooth function you choose to use in the convolution.\n"
] | [
12,
10,
1
] | [] | [] | [
"clamp",
"numpy",
"pandas",
"python",
"smoothstep"
] | stackoverflow_0045165452_clamp_numpy_pandas_python_smoothstep.txt |