content (stringlengths 85 to 101k) | title (stringlengths 0 to 150) | question (stringlengths 15 to 48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35 to 137)
---|---|---|---|---|---|---|---|---|
Q:
Why do Python's tkinter grid buttons stretch when a label in the grid changes size?
I am making a simple application with 1 button and 1 label where pressing the button changes the text on the label. Both the label and the button are placed using tkinter's grid system; however, when I press the button, the label's text changes size as expected, but the button becomes stretched to the label's length too. Why?
This is my code.
import tkinter as tk
def change():
label1.config(text = "example text that is really big which shows the button stretching")
window = tk.Tk()
window.config(bg="black")
b1=tk.Button(window,text="button1",font=("Segoe UI",40),command=change).grid(row=1,column=1,sticky="news")
label1=tk.Label(text = "text",font=("Segoe UI",20),bg="black",fg="white")
label1.grid(row=2,column=1,sticky="news")
window.mainloop()
The expected result was the label changing and the button's size staying the same, but instead the button becomes stretched to the label's length. I have tried a lot of things trying to make this work but I can't figure it out.
Before pressing the button
After pressing the button
A:
Setting sticky="news" for the button will expand the button to fill the available space in the four directions North, East, West, South. Try changing it to sticky="w" to make it stick to the West. So, the line of code for creating the button will become:
b1=tk.Button(window,text="button1",font=("Segoe UI",40),command=change).grid(row=1,column=1,sticky="w")
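For reference, here is a minimal runnable sketch of this fix, assuming the rest of the original script stays unchanged (only the button's sticky value differs):
import tkinter as tk

def change():
    label1.config(text="example text that is really big which shows the button stretching")

window = tk.Tk()
window.config(bg="black")
# sticky="w" anchors the button to the west instead of stretching it across the column
tk.Button(window, text="button1", font=("Segoe UI", 40), command=change).grid(row=1, column=1, sticky="w")
label1 = tk.Label(text="text", font=("Segoe UI", 20), bg="black", fg="white")
label1.grid(row=2, column=1, sticky="news")
window.mainloop()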
| Why do Python's tkinter grid buttons stretch when a label in the grid changes size? | I am making a simple application with 1 button and 1 label where pressing the button changes the text on the label. Both the label and the button are placed using tkinter's grid system; however, when I press the button, the label's text changes size as expected, but the button becomes stretched to the label's length too. Why?
This is my code.
import tkinter as tk
def change():
label1.config(text = "example text that is really big which shows the button stretching")
window = tk.Tk()
window.config(bg="black")
b1=tk.Button(window,text="button1",font=("Segoe UI",40),command=change).grid(row=1,column=1,sticky="news")
label1=tk.Label(text = "text",font=("Segoe UI",20),bg="black",fg="white")
label1.grid(row=2,column=1,sticky="news")
window.mainloop()
The expected result was the label changing and the button's size staying the same, but instead the button becomes stretched to the label's length. I have tried a lot of things trying to make this work but I can't figure it out.
Before pressing the button
After pressing the button
| [
"Setting sticky=\"news\" for the button, will expand the button to fill the available space in the four directions North, East, West, South. Try changing it to sticky=\"w\" to make it stick to the West. So, the line of code for creating the button will become:\nb1=tk.Button(window,text=\"button1\",font=(\"Segoe UI\",40),command=change).grid(row=1,column=1,sticky=\"w\")\n\n\n"
] | [
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074665903_python_tkinter.txt |
Q:
How can I specify which Python toolchain to use in Bazel?
How can I configure Bazel to pick one toolchain over the other? I am okay with defining which toolchain to use via command-line argument or specifying which should be used in a specific target.
There are currently two Python toolchains defined in my WORKSPACE file. One of them builds Python from source and includes it in the executable .zip output, and the other one does not.
When building, the toolchain that gets used is always the first toolchain which is registered. In this case, python3_toolchain is used even though the build target imports requirement from hermetic_python3_toolchain.
# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@rules_python//python:pip.bzl", "pip_install")
http_archive(
name = "rules_python",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.5.0/rules_python-0.5.0.tar.gz",
sha256 = "cd6730ed53a002c56ce4e2f396ba3b3be262fd7cb68339f0377a45e8227fe332",
)
# Non-hermetic toolchain
register_toolchains("//src:python3_toolchain")
pip_install(
quiet = False,
name = "python_dependencies",
requirements = "//:requirements.txt",
python_interpreter = "/usr/bin/python3"
)
load("@python_dependencies//:requirements.bzl", "requirement")
# Hermetic toolchain
_py_configure = """
if [[ "$OSTYPE" == "darwin"* ]]; then
./configure --prefix=$(pwd)/bazel_install --with-openssl=$(brew --prefix openssl)
else
./configure --prefix=$(pwd)/bazel_install
fi
"""
http_archive(
name = "hermetic_interpreter",
urls = ["https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tar.xz"],
sha256 = "a57dc82d77358617ba65b9841cee1e3b441f386c3789ddc0676eca077f2951c3",
strip_prefix = "Python-3.11.0",
patch_cmds = [
"mkdir $(pwd)/bazel_install",
_py_configure,
"make",
"make install",
"ln -s bazel_install/bin/python3 python_bin",
],
build_file_content = """
exports_files(["python_bin"])
filegroup(
name = "files",
srcs = glob(["bazel_install/**"], exclude = ["**/* *"]),
visibility = ["//visibility:public"],
)
""",
)
pip_install(
name = "hermetic_python3_dependencies",
requirements = "//:requirements.txt",
python_interpreter_target = "@hermetic_interpreter//:python_bin",
)
load("@hermetic_python3_dependencies//:requirements.bzl", "requirement")
load("@rules_python//python:defs.bzl", "py_binary")
load("@rules_python//python:defs.bzl", "py_library")
register_toolchains("//src:hermetic_python3_toolchain")
# src/BUILD
load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair")
# Non-hermetic toolchain
py_runtime(
name = "python3_runtime",
interpreter_path = "/usr/bin/python3",
python_version = "PY3",
visibility = ["//visibility:public"],
)
py_runtime_pair(
name = "python3_runtime_pair",
py2_runtime = None,
py3_runtime = ":python3_runtime",
)
toolchain(
name = "python3_toolchain",
toolchain = ":python3_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
# Hermetic toolchain
py_runtime(
name = "hermetic_python3_runtime",
files = ["@hermetic_interpreter//:files"],
interpreter = "@hermetic_interpreter//:python_bin",
python_version = "PY3",
visibility = ["//visibility:public"],
)
py_runtime_pair(
name = "hermetic_python3_runtime_pair",
py2_runtime = None,
py3_runtime = ":hermetic_python3_runtime",
)
toolchain(
name = "hermetic_python3_toolchain",
toolchain = ":hermetic_python3_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
package(default_visibility = ["//visibility:public"])
# /src/some_tool/BUILD
load("@hermetic_python3_dependencies//:requirements.bzl", "requirement") # Can load this rule from either `hermetic_python3_dependencies` or `python3_dependencies`, but does not seem to make a difference
py_binary(
name = "some-tool",
main = "some_tool.py",
srcs = ["some_tool_file.py"],
python_version = "PY3",
srcs_version = "PY3",
deps = [
requirement("requests"),
"//src/common/some-library:library",
]
)
package(default_visibility = ["//visibility:public"])
A:
Consider upgrading rules_python, as that ruleset includes a hermetic python toolchain since https://github.com/bazelbuild/rules_python/releases/tag/0.7.0.
If that is not an option:
Currently you are registering two toolchains in your WORKSPACE.bazel file and bazel will use its toolchain resolution to pick one of them. You can debug that resolution with the --toolchain_resolution_debug=regex flag to see what is going on.
If you want to force the entire build to use one of the toolchains, remove registering the toolchains from the WORKSPACE.bazel file and create a .bazelrc:
build:hermetic_python --extra_toolchains=//src:hermetic_python3_toolchain
build:system_python --extra_toolchains=//src:python3_toolchain
Now you can switch between these toolchains by using bazel build --config=hermetic_python or bazel build --config=system_python.
Beware, however, that this does not influence which of the Python toolchains was used to run the pip_parse(). You need to take extra care about which repository you load the requirement() function from. Simply by load()ing the function you force the evaluation of the pip_parse() and therefore the fetching/compilation of the corresponding Python interpreter.
| How can I specify which Python toolchain to use in Bazel? | How can I configure Bazel to pick one toolchain over the other? I am okay with defining which toolchain to use via command-line argument or specifying which should be used in a specific target.
There are currently two Python toolchains defined in my WORKSPACE file. One of them builds Python from source and includes it in the executable .zip output, and the other one does not.
When building, the toolchain that gets used is always the first toolchain which is registered. In this case, python3_toolchain is used even though the build target imports requirement from hermetic_python3_toolchain.
# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@rules_python//python:pip.bzl", "pip_install")
http_archive(
name = "rules_python",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.5.0/rules_python-0.5.0.tar.gz",
sha256 = "cd6730ed53a002c56ce4e2f396ba3b3be262fd7cb68339f0377a45e8227fe332",
)
# Non-hermetic toolchain
register_toolchains("//src:python3_toolchain")
pip_install(
quiet = False,
name = "python_dependencies",
requirements = "//:requirements.txt",
python_interpreter = "/usr/bin/python3"
)
load("@python_dependencies//:requirements.bzl", "requirement")
# Hermetic toolchain
_py_configure = """
if [[ "$OSTYPE" == "darwin"* ]]; then
./configure --prefix=$(pwd)/bazel_install --with-openssl=$(brew --prefix openssl)
else
./configure --prefix=$(pwd)/bazel_install
fi
"""
http_archive(
name = "hermetic_interpreter",
urls = ["https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tar.xz"],
sha256 = "a57dc82d77358617ba65b9841cee1e3b441f386c3789ddc0676eca077f2951c3",
strip_prefix = "Python-3.11.0",
patch_cmds = [
"mkdir $(pwd)/bazel_install",
_py_configure,
"make",
"make install",
"ln -s bazel_install/bin/python3 python_bin",
],
build_file_content = """
exports_files(["python_bin"])
filegroup(
name = "files",
srcs = glob(["bazel_install/**"], exclude = ["**/* *"]),
visibility = ["//visibility:public"],
)
""",
)
pip_install(
name = "hermetic_python3_dependencies",
requirements = "//:requirements.txt",
python_interpreter_target = "@hermetic_interpreter//:python_bin",
)
load("@hermetic_python3_dependencies//:requirements.bzl", "requirement")
load("@rules_python//python:defs.bzl", "py_binary")
load("@rules_python//python:defs.bzl", "py_library")
register_toolchains("//src:hermetic_python3_toolchain")
# src/BUILD
load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair")
# Non-hermetic toolchain
py_runtime(
name = "python3_runtime",
interpreter_path = "/usr/bin/python3",
python_version = "PY3",
visibility = ["//visibility:public"],
)
py_runtime_pair(
name = "python3_runtime_pair",
py2_runtime = None,
py3_runtime = ":python3_runtime",
)
toolchain(
name = "python3_toolchain",
toolchain = ":python3_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
# Hermetic toolchain
py_runtime(
name = "hermetic_python3_runtime",
files = ["@hermetic_interpreter//:files"],
interpreter = "@hermetic_interpreter//:python_bin",
python_version = "PY3",
visibility = ["//visibility:public"],
)
py_runtime_pair(
name = "hermetic_python3_runtime_pair",
py2_runtime = None,
py3_runtime = ":hermetic_python3_runtime",
)
toolchain(
name = "hermetic_python3_toolchain",
toolchain = ":hermetic_python3_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
package(default_visibility = ["//visibility:public"])
# /src/some_tool/BUILD
load("@hermetic_python3_dependencies//:requirements.bzl", "requirement") # Can load this rule from either `hermetic_python3_dependencies` or `python3_dependencies`, but does not seem to make a difference
py_binary(
name = "some-tool",
main = "some_tool.py",
srcs = ["some_tool_file.py"],
python_version = "PY3",
srcs_version = "PY3",
deps = [
requirement("requests"),
"//src/common/some-library:library",
]
)
package(default_visibility = ["//visibility:public"])
| [
"Consider upgrading rules_python, as that ruleset includes a hermetic python toolchain since https://github.com/bazelbuild/rules_python/releases/tag/0.7.0.\nIf that is not an option:\nCurrently you are registering two toolchains in your WORKSPACE.bazel file and bazel will use its toolchain resolution to pick one of them. You can debug that resolution with the --toolchain_resolution_debug=regex flag to see what is going on.\nIf you want to force the entire build to use one of the toolchains, remove registering the toolchains from the WORKSPACE.bazel file and create a .bazelrc:\nbuild:hermetic_python --extra_toolchains=//src:hermetic_python3_toolchain\nbuild:system_python --extra_toolchains=//src:python3_toolchain\n\nNow you can switch between these toolchains by using bazel build --config=hermetic_python or bazel build --config=system_python.\nBeware however, that this does not influence which of the python toolchains was used to run the pip_parse(). You need to take extra care from which you load the requirement() function. Simply by load()ing the function you force the evaluation of the pip_parse() and therefor the fetching/compilation of the corresponding python interpreter.\n"
] | [
0
] | [] | [] | [
"bazel",
"build",
"python"
] | stackoverflow_0074512774_bazel_build_python.txt |
Q:
add a comma after every "%" symbol using regex in a dataframe
If I have a value like this:
C:100% B:90% A:80%
I want to add a comma after every % so the output is like this:
C:100%,B:90%,A:80%
I've tried something like:
data['Final'] = data['Final'].str.replace(r'(%)\n\b', r'\1,', regex=True)
A:
You can use the re.sub method from the re module in Python to achieve this.
import re
# Your original string
string = "C:100% B:90% A:80%"
# Use regex to replace each '% ' (percent sign plus the following space) with '%,'
string = re.sub("% ", "%,", string)
# The resulting string will be: "C:100%,B:90%,A:80%"
If you want to apply this to a column in a DataFrame, you can use the apply method to apply the regex substitution to each value in the column. For example:
import pandas as pd
import re
# Create a DataFrame with a column of strings
df = pd.DataFrame({"values": ["C:100% B:90% A:80%", "D:70% E:60% F:50%"]})
# Use the apply method to apply the regex substitution to each value in the column
df["values"] = df["values"].apply(lambda x: re.sub("% ", "%,", x))
This will result in a DataFrame with the following values in the values column:
0 C:100%,B:90%,A:80%
1 D:70%,E:60%,F:50%
A:
You can use this :
df['final']= df['final'].str.replace(r'%\s*\b', r'%,', regex=True)
Output :
print(df)
final
0 C:100%,B:90%,A:80%
A:
There is no newline in your example data, so you could write the pattern matching just a space, or 1 or more whitespace chars \s+
data = pd.DataFrame({"Final": ["C:100% B:90% A:80%"]})
data['Final'] = data['Final'].str.replace(r'(%) \b', r'\1,', regex=True)
print(data)
Output
Final
0 C:100%,B:90%,A:80%
| add a comma after every "%" symbol using regex in a dataframe | If I have a value like this:
C:100% B:90% A:80%
I want to add a comma after every % so the output is like this:
C:100%,B:90%,A:80%
I've tried something like:
data['Final'] = data['Final'].str.replace(r'(%)\n\b', r'\1,', regex=True)
| [
"You can use the re.sub method from the re module in Python to achieve this.\nimport re\n\n# Your original string\nstring = \"C:100% B:90% A:80%\"\n\n# Use regex to replace all occurrences of '%' with ',%'\nstring = re.sub(\"%\", \",%\", string)\n\n# The resulting string will be: \"C:100%, B:90%, A:80%\"\n\nIf you want to apply this to a column in a DataFrame, you can use the apply method to apply the regex substitution to each value in the column. For example:\nimport pandas as pd\nimport re\n\n# Create a DataFrame with a column of strings\ndf = pd.DataFrame({\"values\": [\"C:100% B:90% A:80%\", \"D:70% E:60% F:50%\"]})\n\n# Use the apply method to apply the regex substitution to each value in the column\ndf[\"values\"] = df[\"values\"].apply(lambda x: re.sub(\"% \", \"%,\", x))\n\nThis will result in a DataFrame with the following values in the values column:\n0 C:100%,B:90%,A:80%\n1 D:70%,E:60%,F:50%\n\n",
"You can use this :\ndf['final']= df['final'].str.replace(r'%\\s*\\b', r'%,', regex=True)\n\nOutput :\nprint(df)\n\n final\n0 C:100%,B:90%,A:80%\n\n",
"There is no newline in your example data, so you could write the pattern matching just a space, or 1 or more whitespace chars \\s+\ndata = pd.DataFrame({\"Final\": [\"C:100% B:90% A:80%\"]})\ndata['Final'] = data['Final'].str.replace(r'(%) \\b', r'\\1,', regex=True)\nprint(data)\n\nOutput\n Final\n0 C:100%,B:90%,A:80%\n\n"
] | [
1,
0,
0
] | [] | [] | [
"dataframe",
"python",
"regex",
"string"
] | stackoverflow_0074665703_dataframe_python_regex_string.txt |
Q:
Python Dataframe fillna with value on left column
I have an excel spreadsheet where there are merged cells.
I would like to build a dictionary of Product_ID - Category - Country.
But for that I believe I need Python to be able to read an Excel file with horizontally merged cells.
import pandas as pd
excel_sheet = pd.read_excel(r'C:\Users\myusername\Documents\Product_Sales_Database.xlsx', 'Product IDs')
However the returned dataframe is this:
My question is, how can I fill the nan values in the dataframe, with the values on the left column?
Thank you!
A:
This should do:
df.iloc[0] = df.iloc[0].ffill()
A:
I understand that the question is more than 2 years old, and the best answer is correct, but if you want to fill NaNs of the whole frame, you can use:
df.T.ffill().T
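To make this concrete, here is a small self-contained sketch; the spreadsheet from the question is not shown, so the header values below are purely illustrative:
import numpy as np
import pandas as pd

# merged header cells typically come back as NaN everywhere except the first cell of each merge
df = pd.DataFrame([["Electronics", np.nan, "Apparel", np.nan],
                   ["US", "CA", "US", "CA"]])

df.iloc[0] = df.iloc[0].ffill()  # fill the first row from the left
print(df.iloc[0].tolist())
# ['Electronics', 'Electronics', 'Apparel', 'Apparel']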
| Python Dataframe fillna with value on left column | I have an excel spreadsheet where there are merged cells.
I would like to build a dictionary of Product_ID - Category - Country.
But for that I need to get, I believe, Python to be able to read an excel file with horizontally merged cells.
import pandas as pd
excel_sheet = pd.read_excel(r'C:\Users\myusername\Documents\Product_Sales_Database.xlsx', 'Product IDs')
However the returned dataframe is this:
My question is, how can I fill the nan values in the dataframe, with the values on the left column?
Thank you!
| [
"This should do:\ndf.iloc[0] = df.iloc[0].ffill() \n\n",
"I understand, that the question is more than 2 yo, and the best answer is coorect, but if you want to fill NaNs of the whole Frame, you can use:\ndf.T.ffill().T\n\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"fillna",
"pandas",
"python"
] | stackoverflow_0063303810_dataframe_fillna_pandas_python.txt |
Q:
Connect to a remote sqlite3 database with Python
I am able to create a connection to a local sqlite3 database ( Using Mac OS X 10.5 and Python 2.5.1 ) with this:
conn = sqlite3.connect('/db/MyDb')
How can I connect to this database if it is located on a server ( for example on a server running Ubuntu 8.04 with an IP address of 10.7.1.71 ) , and is not stored locally?
e.g. this does not seem to work:
conn = sqlite3.connect('10.7.1.71./db/MyDb')
A:
SQLite is embedded-only. You'll need to mount the remote filesystem before you can access it. And don't try to have more than one machine accessing the SQLite database at a time; SQLite is not built for that. Use something like PostgreSQL instead if you need that.
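As a rough illustration of this suggestion: once the server's /db directory is mounted on the local machine (the mount point below is made up), the existing connect call works unchanged against the local path.
import sqlite3

# hypothetical path where the remote /db directory has been mounted (e.g. via NFS or SMB)
conn = sqlite3.connect('/mnt/remote_server/db/MyDb')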
A:
The sqlite FAQ has an answer relevant to your question. It points out that although multi-machine network access is theoretically possible (using a remote filesystem) it likely won't be reliable unless the filesystem properly supports locks.
If you're accessing it from only one machine and process at a time, however, it should work acceptably, as that page notes (and dependent on the remote filesystem you're using).
A:
SQLite is embedded. Either go for another database or use an API for the deployed version.
| Connect to a remote sqlite3 database with Python | I am able to create a connection to a local sqlite3 database ( Using Mac OS X 10.5 and Python 2.5.1 ) with this:
conn = sqlite3.connect('/db/MyDb')
How can I connect to this database if it is located on a server ( for example on a server running Ubuntu 8.04 with an IP address of 10.7.1.71 ) , and is not stored locally?
e.g. this does not seem to work:
conn = sqlite3.connect('10.7.1.71./db/MyDb')
| [
"SQLite is embedded-only. You'll need to mount the remote filesystem before you can access it. And don't try to have more than one machine accessing the SQLite database at a time; SQLite is not built for that. Use something like PostgreSQL instead if you need that.\n",
"The sqlite FAQ has an answer relevant to your question. It points out that although multi-machine network access is theoretically possible (using a remote filesystem) it likely won't be reliable unless the filesystem properly supports locks. \nIf you're accessing it from only one machine and process at a time, however, it should work acceptably, as that page notes (and dependent on the remote filesystem you're using).\n",
"SQLite is embedded.Either go for other database or use api for deployed version\n"
] | [
12,
2,
0
] | [] | [] | [
"macos",
"python",
"sqlite"
] | stackoverflow_0002318315_macos_python_sqlite.txt |
Q:
Getting position data from UBX protocol
I am working on a project which uses the u-blox UBX protocol to get position information. I'm using serial communication to connect my GPS module and read the position information in a Python script. I used the Serial and pyubx2 libraries in my script as follows:
from serial import Serial
from pyubx2 import UBXReader
stream = Serial('COM8', 38400)
while True:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
print(parsed_data)
Then I received information from the GPS module as follows. It continuously sends many messages every second, like this:
<UBX(NAV-SOL, iTOW=00:11:43, fTOW=-215069, week=0, gpsFix=0, gpsfixOK=0, diffSoln=0, wknSet=0, towSet=0, ecefX=637813700, ecefY=0, ecefZ=0, pAcc=649523840, ecefVX=0, ecefVY=0, ecefVZ=0, sAcc=2000, pDOP=99.99, reserved1=2, numSV=0, reserved2=215800)>
<UBX(NAV-PVT, iTOW=00:11:43, year=2015, month=10, day=18, hour=0, min=12, second=1, validDate=0, validTime=0, fullyResolved=0, validMag=0, tAcc=4294967295, nano=-215068, fixType=0, gnssFixOk=0, difSoln=0, psmState=0, headVehValid=0, carrSoln=0, confirmedAvai=0, confirmedDate=0, confirmedTime=0, numSV=0, lon=0.0, lat=0.0, height=0, hMSL=-17000, hAcc=4294967295, vAcc=3750027776, velN=0, velE=0, velD=0, gSpeed=0, headMot=0.0, sAcc=20000, headAcc=180.0, pDOP=99.99, invalidLlh=0, lastCorrectionAge=0, reserved0=2312952, headVeh=0.0, magDec=0.0, magAcc=0.0)>
I want to assign this position information (latitude, longitude, altitude, etc.) to variables and do some further analysis. So how can I derive the positional information individually from these sentences?
A:
Try something like this (press CTRL-C to terminate) ...
from serial import Serial
from pyubx2 import UBXReader
try:
stream = Serial('COM8', 38400)
while True:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
# print(parsed_data)
if parsed_data.identity == "NAV-PVT":
lat, lon, alt = parsed_data.lat, parsed_data.lon, parsed_data.hMSL
print(f"lat = {lat}, lon = {lon}, alt = {alt/1000} m")
except KeyboardInterrupt:
print("Terminated by user")
For further assistance, refer to https://github.com/semuconsulting/pyubx2 (there are several example Python scripts in the /examples folder).
| Getting position data from UBX protocol | I am working on a project which uses the u-blox UBX protocol to get position information. I'm using serial communication to connect my GPS module and read the position information in a Python script. I used the Serial and pyubx2 libraries in my script as follows:
from serial import Serial
from pyubx2 import UBXReader
stream = Serial('COM8', 38400)
while True:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
print(parsed_data)
Then I received information from the GPS module as follows. It continuously sends many messages every second, like this:
<UBX(NAV-SOL, iTOW=00:11:43, fTOW=-215069, week=0, gpsFix=0, gpsfixOK=0, diffSoln=0, wknSet=0, towSet=0, ecefX=637813700, ecefY=0, ecefZ=0, pAcc=649523840, ecefVX=0, ecefVY=0, ecefVZ=0, sAcc=2000, pDOP=99.99, reserved1=2, numSV=0, reserved2=215800)>
<UBX(NAV-PVT, iTOW=00:11:43, year=2015, month=10, day=18, hour=0, min=12, second=1, validDate=0, validTime=0, fullyResolved=0, validMag=0, tAcc=4294967295, nano=-215068, fixType=0, gnssFixOk=0, difSoln=0, psmState=0, headVehValid=0, carrSoln=0, confirmedAvai=0, confirmedDate=0, confirmedTime=0, numSV=0, lon=0.0, lat=0.0, height=0, hMSL=-17000, hAcc=4294967295, vAcc=3750027776, velN=0, velE=0, velD=0, gSpeed=0, headMot=0.0, sAcc=20000, headAcc=180.0, pDOP=99.99, invalidLlh=0, lastCorrectionAge=0, reserved0=2312952, headVeh=0.0, magDec=0.0, magAcc=0.0)>
I want to assign this position information (latitude, longitude, altitude, etc.) to variables and do some further analysis. So how can I derive the positional information individually from these sentences?
| [
"Try something like this (press CTRL-C to terminate) ...\nfrom serial import Serial\nfrom pyubx2 import UBXReader\n\ntry:\n stream = Serial('COM8', 38400)\n while True:\n ubr = UBXReader(stream)\n (raw_data, parsed_data) = ubr.read()\n # print(parsed_data)\n if parsed_data.identity == \"NAV-PVT\":\n lat, lon, alt = parsed_data.lat, parsed_data.lon, parsed_data.hMSL\n print(f\"lat = {lat}, lon = {lon}, alt = {alt/1000} m\")\nexcept KeyboardInterrupt:\n print(\"Terminated by user\")\n\nFor further assistance, refer to https://github.com/semuconsulting/pyubx2 (there are several example Python scripts in the /examples folder).\n"
] | [
0
] | [] | [] | [
"gps",
"location",
"python"
] | stackoverflow_0073864028_gps_location_python.txt |
Q:
How to convert string to number in python?
I have a list of numbers as str:
li = ['1', '4', '8.6']
If I use int to convert, the result is [1, 4, 8].
If I use float to convert, the result is [1.0, 4.0, 8.6].
I want to convert them to [1, 4, 8.6].
I've tried this:
li = [1, 4, 8.6]
intli = list(map(lambda x: int(x),li))
floatli = list(map(lambda x: float(x),li))
print(intli)
print(floatli)
>> [1, 4, 8]
>> [1.0, 4.0, 8.6]
A:
Convert the items to an integer if isdigit() returns True, else to a float. This can be done with a list comprehension:
li = ['1', '4', '8.6']
lst = [int(x) if x.isdigit() else float(x) for x in li]
print(lst)
To check if it actually worked, you can check for the types using another list comprehension:
types = [type(i) for i in lst]
print(types)
A:
One way is to use ast.literal_eval
>>> from ast import literal_eval
>>> spam = ['1', '4', '8.6']
>>> [literal_eval(item) for item in spam]
[1, 4, 8.6]
Word of caution - there are values which return True with str.isdigit() but not convertible to int or float and in case of literal_eval will raise SyntaxError.
>>> '1²'.isdigit()
True
A:
You can use ast.literal_eval to convert a string to a literal:
from ast import literal_eval
li = ['1', '4', '8.6']
numbers = list(map(literal_eval, li))
As @Muhammad Akhlaq Mahar noted in his comment, str.isdigit does not return True for negative integers:
>>> '-3'.isdigit()
False
A:
You're going to need a small utility function:
def to_float_or_int(s):
n = float(s)
return int(n) if n.is_integer() else n
Then,
result = [to_float_or_int(s) for s in li]
A:
You can try map each element using loads from json:
from json import loads
li = ['1', '4', '8.6']
li = [*map(loads,li)]
print(li)
# [1, 4, 8.6]
Or using eval():
print(li:=[*map(eval,['1','4','8.6','-1','-2.3'])])
# [1, 4, 8.6, -1, -2.3]
Notes:
Using json.loads() or ast.literal_eval is safer than eval() when
the string to be evaluated comes from an unknown source
A:
In Python, you can convert a string to a number using the int() or float() functions. For example, if you have a string like "123", you can convert it to the integer number 123 using the int() function like this:
string = "123"
number = int(string)
Or, if you have a string like "3.1415", you can convert it to the float number 3.1415 using the float() function like this:
string = "3.1415"
number = float(string)
These functions will raise a ValueError if the string cannot be converted to a number, so you should make sure to check for that and handle it appropriately in your code. For example:
string = "hello"
try:
number = int(string)
except ValueError:
print("The string cannot be converted to a number.")
I hope that helps! Let me know if you have any other questions.
| How to convert string to number in python? | I have list of numbers as str
li = ['1', '4', '8.6']
if I use int to convert the result is [1, 4, 8].
If I use float to convert the result is [1.0, 4.0, 8.6]
I want to convert them to [1, 4, 8.6]
I've tried this:
li = [1, 4, 8.6]
intli = list(map(lambda x: int(x),li))
floatli = list(map(lambda x: float(x),li))
print(intli)
print(floatli)
>> [1, 4, 8]
>> [1.0, 4.0, 8.6]
| [
"Convert the items to a integer if isdigit() returns True, else to a float. This can be done by a list generator:\nli = ['1', '4', '8.6']\nlst = [int(x) if x.isdigit() else float(x) for x in li]\nprint(lst)\n\nTo check if it actually worked, you can check for the types using another list generator:\ntypes = [type(i) for i in lst]\nprint(types)\n\n",
"One way is to use ast.literal_eval\n>>> from ast import literal_eval\n>>> spam = ['1', '4', '8.6']\n>>> [literal_eval(item) for item in spam]\n[1, 4, 8.6]\n\nWord of caution - there are values which return True with str.isdigit() but not convertible to int or float and in case of literal_eval will raise SyntaxError.\n>>> '1²'.isdigit()\nTrue\n\n",
"You can use ast.literal_eval to convert an string to a literal:\nfrom ast import literal_eval\n\nli = ['1', '4', '8.6']\nnumbers = list(map(literal_eval, li))\n\nAs @Muhammad Akhlaq Mahar noted in his comment, str.isidigit does not return True for negative integers:\n>>> '-3'.isdigit()\nFalse\n\n",
"You're going to need a small utility function:\ndef to_float_or_int(s):\n n = float(s)\n return int(n) if n.is_integer() else n\n\nThen,\nresult = [to_float_or_int(s) for s in li]\n\n",
"You can try map each element using loads from json:\nfrom json import loads\nli = ['1', '4', '8.6']\nli = [*map(loads,li)]\nprint(li)\n\n# [1, 4, 8.6]\n\nOr using eval():\nprint(li:=[*map(eval,['1','4','8.6','-1','-2.3'])])\n\n# [1, 4, 8.6, -1, -2.3]\n\nNotes:\n\nUsing json.loads() or ast.literal_eval is safer than eval() when\nthe string to be evaluated comes from an unknown source\n\n",
"In Python, you can convert a string to a number using the int() or float() functions. For example, if you have a string like \"123\", you can convert it to the integer number 123 using the int() function like this:\nstring = \"123\"\nnumber = int(string)\n\nOr, if you have a string like \"3.1415\", you can convert it to the float number 3.1415 using the float() function like this:\nstring = \"3.1415\"\nnumber = float(string)\n\nThese functions will raise a ValueError if the string cannot be converted to a number, so you should make sure to check for that and handle it appropriately in your code. For example:\nstring = \"hello\"\ntry:\n number = int(string)\nexcept ValueError:\n print(\"The string cannot be converted to a number.\")\n\nI hope that helps! Let me know if you have any other questions.\n"
] | [
2,
0,
0,
0,
0,
0
] | [] | [] | [
"converters",
"integer",
"numbers",
"python",
"string"
] | stackoverflow_0074665788_converters_integer_numbers_python_string.txt |
Q:
Pattern finder in python
Let's say I have a list with a bunch of numbers in it. I'm looking to make a function that will list and return the numbers that are repeated in most of them.
Example code:
ListOfNumbers = [1234, 9912349, 578]
print(GetPatern(ListOfNumbers))
1234
A:
Here is an example of a function that could do this:
def get_pattern(numbers):
# First, we will create a dictionary where the keys are the numbers in our list,
# and the values are the number of times those numbers appear in the list
number_count = {}
for number in numbers:
if number not in number_count:
number_count[number] = 1
else:
number_count[number] += 1
# Next, we will find the number that appears the most times in the list
# by looping through the dictionary and finding the key with the highest value
max_count = 0
max_number = 0
for number, count in number_count.items():
if count > max_count:
max_count = count
max_number = number
# Finally, we will return the numbers that appear the most times in the list
# by checking each number in the list and adding it to the result if it matches
# the number with the highest count
result = []
for number in numbers:
if number == max_number:
result.append(number)
return result
Here is an example of how you could use this function:
ListOfNumbers = [1234, 9912349, 578]
print(get_pattern(ListOfNumbers)) # This should print [1234]
This function will return a list of numbers that appear the most times in the input list. In the example above, the number 1234 appears twice in the list, so it is returned as the result.
A:
If I understand you correctly kevinjohnson, then the output of your example should be 1234, because 1234 is repeated twice in the numbers of your list (once in 1234, and the other time inside of 9912349). So, you are looking for subpatterns inside of the numbers.
If this is the case, the solution of user7347835 will not work, because he is iterating over full numbers instead of iterating over subpatterns. Therefore, one should change the datatype. This should work; you can define the length of the pattern as a function input (though one could add functionality that returns the biggest pattern if its number of occurrences is equal to that of a smaller pattern).
from collections import Counter
def pattern_finder(lst_of_numbers, length_of_pattern):
pattern_lst = []
# loop over numbers and transform them to strings
for number in lst_of_numbers:
num_str = str(number)
# iterate over the string looking for subtrings that match the
# length specified in the input and append them to list
for idx, val in enumerate(num_str):
pat = num_str[idx:idx+length_of_pattern]
if len(pat) == length_of_pattern:
pattern_lst.append(pat)
# extract the subtrings with the max occurence
count_lst = Counter(pattern_lst)
lst_max_pattern = [pattern for pattern, count in count_lst.items() if count==max(count_lst.values())]
return lst_max_pattern
Test:
lst_of_numbers = [1234, 9912349, 9578, 929578]
pattern_finder(lst_of_numbers, length_of_pattern=4)
Output:
['1234', '9578']
| Pattern finder in python | Let's say I have a list with a bunch of numbers in it, I'm looking to make a function that will list and return the numbers that are being repeated in most of them.
Example code:
ListOfNumbers = [1234, 9912349, 578]
print(GetPatern(ListOfNumbers))
1234
| [
"Here is an example of a function that could do this:\ndef get_pattern(numbers):\n # First, we will create a dictionary where the keys are the numbers in our list,\n # and the values are the number of times those numbers appear in the list\n number_count = {}\n for number in numbers:\n if number not in number_count:\n number_count[number] = 1\n else:\n number_count[number] += 1\n\n # Next, we will find the number that appears the most times in the list\n # by looping through the dictionary and finding the key with the highest value\n max_count = 0\n max_number = 0\n for number, count in number_count.items():\n if count > max_count:\n max_count = count\n max_number = number\n\n # Finally, we will return the numbers that appear the most times in the list\n # by checking each number in the list and adding it to the result if it matches\n # the number with the highest count\n result = []\n for number in numbers:\n if number == max_number:\n result.append(number)\n\n return result\n\nHere is an example of how you could use this function:\nListOfNumbers = [1234, 9912349, 578]\nprint(get_pattern(ListOfNumbers)) # This should print [1234]\n\nThis function will return a list of numbers that appear the most times in the input list. In the example above, the number 1234 appears twice in the list, so it is returned as the result.\n",
"If I understand you correctly kevinjohnson, than the output of your example should be 1234, because 1234 is repeated twice in the numbers of your list (once in 1234, and the other time inside of the 9912349). So, you are looking for subpatterns inside of the numbers.\nIf this is the case, the solution of user7347835 will not work, because he is iterating over full numbers instead of iterating over subpatterns. Therefore, one should change the datatype. This should work, you can define the length of the pattern as a function input (though one could add functionality that returns the biggest pattern if the number of occurence is equal to a smaller pattern).\nfrom collections import Counter \ndef pattern_finder(lst_of_numbers, length_of_pattern):\n pattern_lst = []\n\n # loop over numbers and transform them to strings\n for number in lst_of_numbers:\n num_str = str(number) \n \n # iterate over the string looking for subtrings that match the \n # length specified in the input and append them to list \n for idx, val in enumerate(num_str): \n pat = num_str[idx:idx+length_of_pattern]\n if len(pat) == length_of_pattern: \n pattern_lst.append(pat)\n\n # extract the subtrings with the max occurence\n count_lst = Counter(pattern_lst)\n lst_max_pattern = [pattern for pattern, count in count_lst.items() if count==max(count_lst.values())]\n return lst_max_pattern\n\nTest:\nlst_of_numbers = [1234, 9912349, 9578, 929578]\npattern_finder(lst_of_numbers, length_of_pattern=4)\n\nOutput:\n['1234', '9578']\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074665726_python.txt |
Q:
Will anyone help me with this company specific question :
There are two types of liquid: type 1 and type 2. Initially, we have n ml of each type of liquid. There are four kinds of operations:
Serve 25 ml of liquid 1 and 75 ml of liquid 2.
Serve 75 ml of liquid 1 and 25 ml of liquid 2.
Serve 100 ml of liquid 1 and 0 ml of liquid 2, and
Serve 50 ml of liquid 1 and 50 ml of liquid 2.
When we serve some liquid, we give it to someone, and we no longer have it. Each turn, we will choose from the four operations with an equal probability 0.25. If the remaining volume of liquid is not enough to complete the operation, we will serve as much as possible. We stop once we no longer have some quantity of both types of liquid.
Note that we do not have an operation where all 100 ml's of liquid 2 are used first.
Return the probability that liquid 1 will be empty first, plus half the probability that 1 and 2 become empty at the same time. Answers within 10^-5 of the actual answer will be accepted.
Input :
50
Output :
0.62500
Explanation:
If we choose the 2nd and 3rd operations, 1 will become empty first.
For the fourth operation, 1 and 2 will become empty at the same time.
For the first operation, 2 will become empty first.
So, the total probability of 1 becoming empty first plus half the probability that 1 and 2 become empty at the same time is 0.25*(1+1+0.5+0) = 0.625.
This is a company-specific coding question. Could anyone kindly help me with solving this question using Python? It will be really helpful.
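For illustration only, here is a sketch of one standard way to approach this kind of problem (a memoized recursion over 25 ml units; the function name and the scaling step are assumptions, not part of the original question):
from functools import lru_cache

def serve_probability(n):
    # work in 25 ml units; the four operations remove (4,0), (3,1), (2,2) or (1,3) units
    m = (n + 24) // 25

    @lru_cache(maxsize=None)
    def f(a, b):
        if a <= 0 and b <= 0:
            return 0.5   # both run out together: count half
        if a <= 0:
            return 1.0   # liquid 1 ran out first
        if b <= 0:
            return 0.0   # liquid 2 ran out first
        return 0.25 * (f(a - 4, b) + f(a - 3, b - 1) + f(a - 2, b - 2) + f(a - 1, b - 3))

    return f(m, m)

print(serve_probability(50))  # 0.625, matching the worked example above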
| Will anyone help me with this company specific question : | There are two types of liquid: type 1 and type 2. Initially, we have n ml of each type of liquid. There are four kinds of operations:
Serve 25 ml of liquid 1 and 75 ml of liquid 2.
Serve 75 ml of liquid 1 and 25 ml of liquid 2.
Serve 100 ml of liquid 1 and 0 ml of liquid 2, and
Serve 50 ml of liquid 1 and 50 ml of liquid 2.
When we serve some liquid, we give it to someone, and we no longer have it. Each turn, we will choose from the four operations with an equal probability 0.25. If the remaining volume of liquid is not enough to complete the operation, we will serve as much as possible. We stop once we no longer have some quantity of both types of liquid.
Note that we do not have an operation where all 100 ml's of liquid 2 are used first.
Return the probability that liquid 1 will be empty first, plus half the probability that 1 and 2 become empty at the same time. Answers within 10^-5 of the actual answer will be accepted.
Input :
50
Output :
0.62500
Explanation:
If we choose the 2nd and 3rd operations,1 will become empty first.
For the fourth operation, 1 and 2 will become empty at the same time.
For the first operation, 2 will become empty first.
So, the total probability of 1 becoming empty first plus half the probability that 1 and 2 become empty at the same time, is 0.25*(1+1+ 0.5+0)=0.625.(changes required)
This is a company-specific coding question. Could anyone kindly help me with solving this question using Python? It will be really helpful.
| [] | [] | [
"Here is a possible solution in Python:\ndef probability(n: int) -> float:\n # if there is no liquid, return 0\n if n == 0:\n return 0\n \n # if there is only 1 type of liquid, return 1\n if n == 50:\n return 1\n \n # calculate the probability of each operation\n p1 = 0.25 * probability(n - 25) # serve 25 ml of liquid 1\n p2 = 0.25 * probability(n - 25) # serve 75 ml of liquid 1\n p3 = 0.25 * probability(n - 50) # serve 100 ml of liquid 1\n p4 = 0.25 * probability(n - 50) # serve 50 ml of liquid 1\n \n # return the total probability\n return p1 + p2 + p3 + p4\n\n# test the function\nprint(probability(50)) # should return 0.625\n\nThis solution uses a recursive approach to calculate the probability of each operation, until there is no more liquid left. The base cases are when there is no more liquid (return 0) or when there is only 1 type of liquid (return 1). The probability of each operation is calculated by calling the function again with the updated amount of liquid, and then the probabilities are added together to get the total probability.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074666121_python.txt |
Q:
Chaining Telethon start methods
I have been using telethon for a long time with two clients, one for a bot (with bot token) and another for my user (using phone).
I always thought two separate clients were necessary (are they?) but I recently saw this in the documentation:
https://docs.telethon.dev/en/stable/modules/client.html#telethon.client.auth.AuthMethods.start
But when I go to test it I got:
UserWarning:
the session already had an authorized user so it did not login to the user account using the provided phone (it may not be using the user you expect)
So I don't understand whether the example indicates that I can have a single client to control a bot and a userbot, whether one start(...) overrides the other, or whether the documentation example is simply wrong.
On the other hand, if I use that example code (including the last with part) I get:
RuntimeError: You must use "async with" if the event loop is running (i.e. you are inside an "async def")
And lastly, my IDE was warning me when passing a phone as a string because it expected typing.Callable[[], str].
A:
The documentation says "initialization can be chained". Initialization is this line:
client = TelegramClient(...)
and you can chain .start() there:
client = await TelegramClient(...).start(...)
but it doesn't mean you can chain multiple calls to start(). Indeed, if you want to control more than one account, you will need separate clients.
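As a hedged sketch of the "separate clients" point (the session names and credentials below are placeholders, not values from the question):
from telethon import TelegramClient

api_id, api_hash = 12345, "your_api_hash"   # placeholder credentials
bot = TelegramClient("bot_session", api_id, api_hash)
user = TelegramClient("user_session", api_id, api_hash)

async def main():
    # each account gets its own client and its own session file
    await bot.start(bot_token="123:abc")
    await user.start(phone="+10000000000")
    print((await bot.get_me()).username)
    print((await user.get_me()).username)

# run main() on your event loop, e.g. bot.loop.run_until_complete(main())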
| Chaining Telethon start methods | I have been using telethon for a long time with two clients, one for a bot (with bot token) and another for my user (using phone).
I always thought two separate clients were necessary (are they?) but I recently saw this in the documentation:
https://docs.telethon.dev/en/stable/modules/client.html#telethon.client.auth.AuthMethods.start
But when I go to test it I got:
UserWarning:
the session already had an authorized user so it did not login to the user account using the provided phone (it may not be using the user you expect)
So I don't understand if the example indicates that I can have a single client to control a bot and a userbot, if one start(...) overrides the other or if the documentation example is wrong directly.
On the other hand, if I use that example code (including the last with part) I get:
RuntimeError: You must use "async with" if the event loop is running (i.e. you are inside an "async def")
And lastly, my ide was warning me when passing a phone as a string because it expected typing.Callable[[], str].
| [
"The documentation says \"initialization can be chained\". Initialization is this line:\nclient = TelegramClient(...)\n\nand you can chain .start() there:\nclient = await TelegramClient(...).start(...)\n\nbut it doesn't mean you can chain multiple calls to start(). Indeed, if you want to control more than one account, you will need separate clients.\n"
] | [
1
] | [] | [] | [
"python",
"telethon"
] | stackoverflow_0074665837_python_telethon.txt |
Q:
RaggedTensor becomes Tensor in loss function
I have a sequence-to-sequence model in which I am attempting to predict the output sequence following a transformation. In doing so, I need to compute the MSE between elements in a ragged tensor:
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
Yet, when executing it encounters the error:
AttributeError: 'Tensor' object has no attribute 'to_tensor'
What causes this issue? The GRU seems to return a RaggedTensor when called directly. Yet at runtime, the arguments to the loss functions are normal Tensors.
import tensorflow as tf
import numpy as np
import functools
def generate_example(n):
for i in range(n):
dims = np.random.randint(7, 11)
x = np.random.random((dims, ))
y = 2 * x.cumsum()
yield tf.constant(x), tf.constant(y)
N = 200
ds = tf.data.Dataset.from_generator(
functools.partial(generate_example, N),
output_signature=(
tf.TensorSpec(shape=(None,), dtype=tf.float32),
tf.TensorSpec(shape=(None,), dtype=tf.float32),
),
)
def rag(x, y):
x1 = tf.expand_dims(x, 0)
y1 = tf.expand_dims(y, 0)
x1 = tf.expand_dims(x1, -1)
y1 = tf.expand_dims(y1, -1)
return (
tf.RaggedTensor.from_tensor(x1),
tf.RaggedTensor.from_tensor(y1),
)
def unexp(x, y):
return (
tf.squeeze(x, axis=1),
tf.squeeze(y, axis=1)
)
ds = ds.map(rag).batch(32).map(unexp)
model = tf.keras.Sequential([
tf.keras.Input(
type_spec=tf.RaggedTensorSpec(shape=[None, None, 1],
dtype=tf.float32)),
tf.keras.layers.GRU(1, return_sequences=True),
])
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3)
A:
You can re-write your loss function in the following ways to make it work.
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
if isinstance(y_value, tf.RaggedTensor):
y_value = y_value.to_tensor()
if isinstance(y_pred, tf.RaggedTensor):
y_pred = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_value, y_pred)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3) # loss & metrics will vary
Or, you don't need to convert the ragged tensors at all; keep them as they are.
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
return tf.keras.losses.MeanSquaredError()(y_value, y_pred)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3) # loss & metrics will be alike
The reason you got the AttributeError is that in metrics=[cpu_bce], the target and prediction get converted to tensors internally. You can inspect this by printing your target and prediction in the loss function. You would find that for the loss function they are ragged, but for the metric function they are plain tensors. It may not feel convenient; in that case feel free to raise a ticket on GitHub.
| RaggedTensor becomes Tensor in loss function | I have a sequence-to-sequence model in which I am attempting to predict the output sequence following a transformation. In doing so, I need to compute the MSE between elements in a ragged tensor:
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
Yet, when executing it encounters the error:
AttributeError: 'Tensor' object has no attribute 'to_tensor'
What causes this issue? The GRU seems to return a RaggedTensor when called directly. Yet at runtime, the arguments to the loss functions are normal Tensors.
import tensorflow as tf
import numpy as np
import functools
def generate_example(n):
for i in range(n):
dims = np.random.randint(7, 11)
x = np.random.random((dims, ))
y = 2 * x.cumsum()
yield tf.constant(x), tf.constant(y)
N = 200
ds = tf.data.Dataset.from_generator(
functools.partial(generate_example, N),
output_signature=(
tf.TensorSpec(shape=(None,), dtype=tf.float32),
tf.TensorSpec(shape=(None,), dtype=tf.float32),
),
)
def rag(x, y):
x1 = tf.expand_dims(x, 0)
y1 = tf.expand_dims(y, 0)
x1 = tf.expand_dims(x1, -1)
y1 = tf.expand_dims(y1, -1)
return (
tf.RaggedTensor.from_tensor(x1),
tf.RaggedTensor.from_tensor(y1),
)
def unexp(x, y):
return (
tf.squeeze(x, axis=1),
tf.squeeze(y, axis=1)
)
ds = ds.map(rag).batch(32).map(unexp)
model = tf.keras.Sequential([
tf.keras.Input(
type_spec=tf.RaggedTensorSpec(shape=[None, None, 1],
dtype=tf.float32)),
tf.keras.layers.GRU(1, return_sequences=True),
])
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3)
| [
"In your loss function, you can re-write it in the following ways to make it work.\ndef cpu_bce(y_value, y_pred):\n with tf.device('/CPU:0'):\n if isinstance(y_value, tf.RaggedTensor):\n y_value = y_value.to_tensor()\n \n if isinstance(y_pred, tf.RaggedTensor): \n y_pred = y_pred.to_tensor()\n \n return tf.keras.losses.MeanSquaredError()(y_value, y_pred)\n\nmodel.compile(loss=cpu_bce, optimizer=\"adam\", metrics=[cpu_bce])\nmodel.fit(ds, epochs=3) # loss & metrics will vary\n\nOr, you don't need to convert ragged tensor, keep as it is.\ndef cpu_bce(y_value, y_pred):\n with tf.device('/CPU:0'):\n return tf.keras.losses.MeanSquaredError()(y_value, y_pred)\n\nmodel.compile(loss=cpu_bce, optimizer=\"adam\", metrics=[cpu_bce])\nmodel.fit(ds, epochs=3) # loss & metrics will alike\n\n\nThe reason you got AttributeError is because in metrics=[cpu_bce], the target and prediction tensor get converts to tesnor internally. You can inspect by printing your target and prediction in loss function. You would find that for loss function it's ragged but for metric function it's tensor. It may not feel convenient, in that case feel free to raise ticket in GitHub.\n"
] | [
0
] | [] | [] | [
"keras",
"loss",
"python",
"tensorflow"
] | stackoverflow_0074665549_keras_loss_python_tensorflow.txt |
Q:
Python Pandas - Assign Values to Rows based on Top x% Values found in a Column
Take this mockup dataframe for example:
CustomerID Number of Purchases
ABC 5
DEF 24
GHI 85
JKL 2
MNO 100
Assume this dataframe is first sorted by Number of Purchases (descending).
How do I add a new column to it called Score, and have values assigned to it as follows:
Out of the top 60% customers (meaning the first 3 rows after sorting), 3 should be assigned to Score.
Out of the next top 20% customers (row 4 after sorting), 2 should be assigned to Score.
Out of the next and last top 20% customers (row 5 after sorting), 1 should be assigned to Score.
How do I do this in a large dataframe?
A:
import pandas as pd
import numpy as np
df = pd.DataFrame({'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],
'Number of Purchases': [5, 24, 85, 2, 100]})
df = df.sort_values(by=['Number of Purchases'], ascending=False)
proc = len(df) / 100
aaa = [[0, int(60 * proc), 3], [int(60 * proc), int(80 * proc), 2], [int(80 * proc), len(df), 1]]
df['Score'] = np.nan
df = df.reset_index()
for i in aaa:
df.loc[i[0]:i[1] - 1, 'Score'] = i[2]
print(df)
Output
index CustomerID Number of Purchases Score
0 4 MNO 100 3.0
1 2 GHI 85 3.0
2 1 DEF 24 3.0
3 0 ABC 5 2.0
4 3 JKL 2 1.0
First comes sorting, with ascending=False so that the sort is in descending order.
The proc variable is how many rows make up 1%.
A nested list aaa is created, in which the first element is the start index, the second is the end index of the range of rows, and the third element is the score. A 'Score' column is created with empty values and the dataframe index is reset.
In the loop, the rows are accessed by loc through slices (that is, from which index to which index the rows are selected). Since loc slicing is inclusive (for example, if the end index is 3, then the row with index 3 is also selected), i[1] - 1 is used.
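A small sketch of that inclusivity point, since it is easy to trip over (the example Series is made up):
import pandas as pd

s = pd.Series([10, 20, 30, 40])
print(s.loc[0:2].tolist())   # [10, 20, 30] - label 2 is included
print(s.iloc[0:2].tolist())  # [10, 20]     - position 2 is excluded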
A:
Once the dataframe (df) has been sorted by Number of Purchases, you can generate the Score column by using the .rank(pct=True) function, which calculates the percentile rank, and then applying a lambda function to convert this rank to a score.
Code:
import pandas as pd
# Create dataframe
df = pd.DataFrame({ 'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],
'Number of Purchases': [5, 24, 85, 2, 100]})
# Sort and then create 'Score' column
df = df.sort_values(by=['Number of Purchases'], ascending=False).reset_index(drop=True)
df['Score'] = df['Number of Purchases'].rank(pct=True).apply(lambda x: 1 if x<=0.2 else 2 if x<=0.4 else 3)
print(df)
Output:
CustomerID Number of Purchases Score
0 MNO 100 3
1 GHI 85 3
2 DEF 24 3
3 ABC 5 2
4 JKL 2 1
| Python Pandas - Assign Values to Rows based on Top x% Values found in a Column | Take this mockup dataframe for example:
CustomerID Number of Purchases
ABC 5
DEF 24
GHI 85
JKL 2
MNO 100
Assume this dataframe is first sorted by Number of Purchases (descending).
How do I add a new column to it called Score, and have values assigned to it as follows:
Out of the top 60% customers (meaning the first 3 rows after sorting), 3 should be assigned to Score.
Out of the next top 20% customers (row 4 after sorting), 2 should be assigned to Score.
Out of the next and last top 20% customers (row 5 after sorting), 1 should be assigned to Score.
How do I do this in a large dataframe?
| [
"import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],\n 'Number of Purchases': [5, 24, 85, 2, 100]})\n\ndf = df.sort_values(by=['Number of Purchases'], ascending=False)\n\n\nproc = len(df) / 100\naaa = [[0, int(60 * proc), 3], [int(60 * proc), int(80 * proc), 2], [int(80 * proc), len(df), 1]]\ndf['Score'] = np.nan\ndf = df.reset_index()\n\n\nfor i in aaa:\n df.loc[i[0]:i[1] - 1, 'Score'] = i[2]\n\nprint(df)\n\nOutput\n index CustomerID Number of Purchases Score\n0 4 MNO 100 3.0\n1 2 GHI 85 3.0\n2 1 DEF 24 3.0\n3 0 ABC 5 2.0\n4 3 JKL 2 1.0\n\nFirst comes sorting, where it is indicated: ascending=False, so that the sorting is in reverse order.\nThe proc variable is how many rows in 1%.\nA nested list aaa is created. In which the first element is the start index, the second is the end index of the range of strings. And the third element is evaluation. A 'Score' column is created with empty values and dataframe indexes are reset.\nIn the loop, the rows are accessed by loc through slices (this is from which index the rows are selected). Since loc is accessed inclusively (for example, the end index is 3, then the data will be selected by index 3), so i[1] - 1 is used.\n",
"Once the dataframe (df) has been sorted by Number_of_Purchases, \nyou can generate the Score column by using the:\n\n.rank(pct=True) - function: which calculates the rank (percentage)\nand then apply a lambda function to convert this rank to a score.\n\nCode:\nimport pandas as pd\n\n# Create dataframe\ndf = pd.DataFrame({ 'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],\n 'Number of Purchases': [5, 24, 85, 2, 100]})\n\n# Sort and then create 'Score' column\ndf = df.sort_values(by=['Number of Purchases'], ascending=False).reset_index(drop=True)\n\ndf['Score'] = df['Number of Purchases'].rank(pct=True).apply(lambda x: 1 if x<=0.2 else 2 if x<=0.4 else 3)\n\nprint(df)\n\nOutput:\n CustomerID Number of Purchases Score\n0 MNO 100 3\n1 GHI 85 3\n2 DEF 24 3\n3 ABC 5 2\n4 JKL 2 1\n\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074641326_pandas_python.txt |
Q:
How to handle abbreviation when reading nltk corpus
I am reading nltk corpus using
def read_corpus(package, category):
""" Read files from corpus(package)'s category.
Params:
package (nltk.corpus): corpus
category (string): category name
Return:
list of lists, with words from each of the processed files assigned with start and end tokens
"""
files = package.fileids(category)
return [[START_TOKEN] + [w.lower() for w in list(package.words(f))] + [END_TOKEN] for f in files]
But I find that it processes 'U.S.' into ['U','.','S','.'] and "I'm" into ['I', "'", 'm'].
How can I get an abbreviation as a whole or restore it?
A:
To keep abbreviations such as "U.S." together when processing text, you can use the TreebankWordTokenizer from the NLTK library on the raw file text instead of relying on the corpus reader's default word splitting. It follows the Penn Treebank conventions, so an abbreviation like "U.S." stays a single token when it occurs inside a sentence; contractions such as "I'm" are still split, but into meaningful pieces ("I" and "'m") rather than at a bare apostrophe.
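A minimal sketch of that idea, keeping the read_corpus shape from the question but tokenizing each file's raw text with the Treebank tokenizer instead of relying on package.words() (START_TOKEN and END_TOKEN are assumed to be defined elsewhere, as in the question):
from nltk.tokenize import TreebankWordTokenizer

tokenizer = TreebankWordTokenizer()

def read_corpus(package, category):
    files = package.fileids(category)
    # package.raw(f) returns the untokenized text of the file, so the
    # Treebank rules decide where the token boundaries go
    return [[START_TOKEN] + [w.lower() for w in tokenizer.tokenize(package.raw(f))] + [END_TOKEN]
            for f in files]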
| How to handle abbreviation when reading nltk corpus | I am reading nltk corpus using
def read_corpus(package, category):
""" Read files from corpus(package)'s category.
Params:
package (nltk.corpus): corpus
category (string): category name
Return:
list of lists, with words from each of the processed files assigned with start and end tokens
"""
files = package.fileids(category)
return [[START_TOKEN] + [w.lower() for w in list(package.words(f))] + [END_TOKEN] for f in files]
But I find that it process 'U.S.' to ['U','.','S','.'] and 'I'm' to ['I', "'", 'm'].
How can I get an abbreviation as a whole or restore it?
| [
"To treat abbreviations such as \"U.S.\" and contractions such as \"I'm\" as a single token when processing text, you can use the TreebankWordTokenizer from the NLTK library. This tokenizer is designed to tokenize text in a way that is similar to how humans would naturally write and speak, so it will treat abbreviations and contractions as single tokens.\n"
] | [
0
] | [] | [] | [
"nltk",
"python"
] | stackoverflow_0074666233_nltk_python.txt |
Q:
Can anyone explain why this code on python is not working?
def n(a):
a = str(a)
if "0" in a:
b = str((a).replace("0", ''))
a = b[::-1]
a = a[::-1]
a = int(a)
return a
else:
a = a[::-1]
a = a[::-1]
a = int(a)
return a
N = int(input())
des = 10**9 + 7
summa = 0
for a in range():
print(n(a))
b = n(a)
summa = summa + b
summa = summa % des
print(summa)
gives such an error : 'invalid literal for int() with base 10: '' '
If I pass the value to the variable a without the for i in loop, then everything works
I just need to understand what is wrong with the code. I'm new to programming and can't figure it out right away
A:
The input function waits for user input. If none is given, it will return an empty string, i.e., ''. As a result, you are casting '' to an integer. This is not possible and results in the error you mention.
int('')  # raises ValueError: invalid literal for int() with base 10: ''

You can also see this at the end of the error message: the '' shows exactly what was passed to int().
I'm guessing that you might be copy-pasting the code above directly into a terminal, which results in Python not waiting for any actual input.
If you first only run this line
N = int(input())
and then hit enter, it will wait for user input. Then you can copy the rest of the code. The rest of the code also contains some issues. Specifically, range should have some input, like range(N)
def n(a):
a = str(a)
if "0" in a: # this also happen when a == '0'
b = str((a).replace("0", ''))
a = b[::-1]
a = a[::-1]
a = int(a) # and if a == '0', this resolved to int('')
....
You can add the following
def n(a):
    if not a: # if a is 0 (or otherwise falsy)
        return 0 # stripping its zeros would leave an empty string, so just return 0
a = str(a)
....
A:
The error you are seeing is because you are trying to convert an empty string to an integer using the int() function. This error is happening because you are using a range() function with no arguments in the for loop, which will create an empty range and cause the for loop to not execute at all.
To fix this error, you need to pass the correct arguments to the range() function in the for loop.
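For example, a minimal sketch of the corrected loop, reusing the n() function from the question and assuming the intent is to apply it to every number from 1 up to the entered N (starting at 1 also sidesteps n(0), which would strip the only digit and leave an empty string):
N = int(input())
des = 10**9 + 7
summa = 0
for a in range(1, N + 1):  # range() needs at least one argument, e.g. the upper bound
    summa = (summa + n(a)) % des
print(summa)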
| Can anyone explain why this code on python is not working? | def n(a):
a = str(a)
if "0" in a:
b = str((a).replace("0", ''))
a = b[::-1]
a = a[::-1]
a = int(a)
return a
else:
a = a[::-1]
a = a[::-1]
a = int(a)
return a
N = int(input())
des = 10**9 + 7
summa = 0
for a in range():
print(n(a))
b = n(a)
summa = summa + b
summa = summa % des
print(summa)
gives such an error : 'invalid literal for int() with base 10: '' '
If I pass the value to the variable a without the for i in loop, then everything works
I just need to understand what is wrong with the code. I'm new to programming and can't figure it out right away
| [
"The input function waits for user input. If none is given, it will return an empty string, i.e., ''. As a result, you are casting '' to an integer. This is not possible and results in the error you mention.\nint('')` # returns `ValueError: invalid literal for int() with base 10: ''\n\nYou can also see this already in the end of the error. That's what the '' mean at the end of the error. That's what being passed to int()\nI'm guesting that you might be copy-pasting the code above directly into a terminal. This results in Python not waiting for any actual input for input.\nIf you first only run this line\nN = int(input())\n\nand then hit enter, it will wait for user input. Then you can copy the rest of the code. The rest of the code also contains some issues. Specifically, range should have some input, like range(N)\ndef n(a):\n a = str(a)\n if \"0\" in a: # this also happen when a == '0'\n b = str((a).replace(\"0\", '')) \n a = b[::-1]\n a = a[::-1]\n a = int(a) # and if a == '0', this resolved to int('')\n ....\n\nYou can add the following\ndef n(a):\n if not a: # ifa is anything beside 0\n return 0 # then there is no sense in flipping it around \n a = str(a)\n ....\n\n",
"The error you are seeing is because you are trying to convert an empty string to an integer using the int() function. This error is happening because you are using a range() function with no arguments in the for loop, which will create an empty range and cause the for loop to not execute at all.\nTo fix this error, you need to pass the correct arguments to the range() function in the for loop.\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0074666238_python.txt |
Q:
jinja2 in python and rendering
I am unable to decipher the error here. Can any one help ?
from jinja2 import Template
prefixes = {
"10.0.0.0/24" : {
"description": "Corporate NAS",
"region": "Europe",
"site": "Telehouse-West"
}
}
template = """
Details for 10.0.0.0/24 prefix:
Description: {{ prefixes['10.0.0.0/24'].description }}
Region: {{ prefixes['10.0.0.0/24'].region }}
Site: {{ prefixes['10.0.0.0/24'].site }}
"""
j2 = Template(template)
print(j2.render(prefixes))
Error:
File "c:\Users\verma\Documents\Python\jinja\jinja1.py", line 19, in <module>
print(j2.render(prefixes))
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 1301, in render
self.environment.handle_exception()
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 3, in top-level template code
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 466, in getitem
return obj[argument]
jinja2.exceptions.UndefinedError: 'prefixes' is undefined
I was expecting the jinja2 rendering to work.
A:
render expects the template's variables as keyword arguments (or a dict whose keys are the variable names). Replace print(j2.render(prefixes)) with print(j2.render(prefixes=prefixes)) so the template actually sees a variable called prefixes, and it should work.
A:
If you want to pass prefixes as a positional argument, you should change the prefixes dictionary to be:
prefixes = {
"prefixes": {
"10.0.0.0/24": {
"description": "Corporate NAS",
"region": "Europe",
"site": "Telehouse-West"
}
}
}
A:
The error message indicates that the prefixes variable is not defined when the Jinja2 template is rendered. This is likely because the prefixes variable is defined within the scope of the script, but it is not passed to the render() method as a variable.
To fix this, you can pass the prefixes variable as a keyword argument to the render() method, like this:
print(j2.render(prefixes=prefixes))
This will make the prefixes variables available to the Jinja2 template, and the rendering should work as expected.
| jinja2 in python and rendering | I am unable to decipher the error here. Can any one help ?
from jinja2 import Template
prefixes = {
"10.0.0.0/24" : {
"description": "Corporate NAS",
"region": "Europe",
"site": "Telehouse-West"
}
}
template = """
Details for 10.0.0.0/24 prefix:
Description: {{ prefixes['10.0.0.0/24'].description }}
Region: {{ prefixes['10.0.0.0/24'].region }}
Site: {{ prefixes['10.0.0.0/24'].site }}
"""
j2 = Template(template)
print(j2.render(prefixes))
Error:
File "c:\Users\verma\Documents\Python\jinja\jinja1.py", line 19, in <module>
print(j2.render(prefixes))
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 1301, in render
self.environment.handle_exception()
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 3, in top-level template code
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 466, in getitem
return obj[argument]
jinja2.exceptions.UndefinedError: 'prefixes' is undefined
I was expecting the jinja2 rendering to work.
| [
"render uses keyword arguments. replace print(j2.render(prefixes)) with print(j2.render(prefixes=prefixes)) and it should work.\n",
"If you want to pass prefixes as a positional argument, you should change the prefixes dictionary to be:\nprefixes = {\n \"prefixes\": {\n \"10.0.0.0/24\": {\n \"description\": \"Corporate NAS\",\n \"region\": \"Europe\",\n \"site\": \"Telehouse-West\"\n }\n }\n}\n\n",
"The error message indicates that the prefixes variable is not defined when the Jinja2 template is rendered. This is likely because the prefixes variable is defined within the scope of the script, but it is not passed to the render() method as a variable.\nTo fix this, you can pass the prefixes variable as a keyword argument to the render() method, like this:\n\nprint(j2.render(prefixes=prefixes))\n\n\nThis will make the prefixes variables available to the Jinja2 template, and the rendering should work as expected.\n"
] | [
1,
0,
0
] | [] | [] | [
"jinja2",
"python"
] | stackoverflow_0074666184_jinja2_python.txt |
Q:
NameError: name 'username_entry' is not defined
So i'm trying to do a login gui using customtkinter
I want to have an window with 2 buttons first : Login and Exit
Then when I press Login to open another py script with the login label
If i execute the second script its all right but if I try from the first one I get this error
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\denis\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "D:\test\venv\Lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 527, in _clicked
self._command()
File "<string>", line 21, in login
NameError: name 'username_entry' is not defined
This is the first code:
`
import tkinter
import customtkinter
customtkinter.set_appearance_mode("System")
customtkinter.set_default_color_theme("dark-blue")
app = customtkinter.CTk() # create CTk window like you do with the Tk window
app.title("Menu")
app.geometry("240x240")
app.config(bg="#242320")
def button_function():
exec(open('D:\test\login.py').read())
def Close():
app.destroy()
font1=('Arial', 15, 'bold')
button = customtkinter.CTkButton(master=app, text="Login", font=font1, command=button_function)
button.place(relx=0.5, rely=0.4, anchor=tkinter.CENTER)
button = customtkinter.CTkButton(master=app, text="Exit", font=font1, command=Close)
button.place(relx=0.5, rely=0.6, anchor=tkinter.CENTER)
app.mainloop()
`
and this is the login code:
`
import customtkinter
from tkinter import *
from tkinter import messagebox
app = customtkinter.CTk()
app.title("Login")
app.geometry("350x200")
app.config(bg="#242320")
font1=('Arial', 15, 'bold')
username="hello"
password="123"
trials=0
def login():
global username
global password
global trials
written_username = username_entry.get()
written_password = password_entry.get()
if(written_username == '' or written_password==''):
messagebox.showwarning(title="Error", message="Enter your username and password.")
elif(written_username==username and written_password==password):
new_window=Toplevel(app)
new_window.geometry("350x200")
new_window.config(bg="#242320")
welcome_label=customtkinter.CTkLabel(new_window, text="Welcome...", font=font1, text_color="#FFFFFF")
welcome_label.place(x=100, y=100)
elif((written_username != username or written_password != password) and trials<3):
messagebox.showerror(title="Error", message="Your username or password are not correct.")
trials=trials + 1
if (trials != 3):
trials_label = customtkinter.CTkLabel(app, text=f"You have {3-trials} trials", font=font1, text_color="#FFFFFF")
trials_label.place(x=100, y=160)
if(trials==3):
login_button.destroy()
locked_label = customtkinter.CTkLabel(app, text="Your account is locked.", font=font1, text_color="#FFFFFF")
locked_label.place(x=100, y=160)
username_label=customtkinter.CTkLabel(app, text="Username: ",font=font1, text_color="#FFFFFF")
username_label.place(x=10, y=25)
password_label=customtkinter.CTkLabel(app, text="Password: ",font=font1, text_color="#FFFFFF")
password_label.place(x=10, y=75)
username_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", width= 200, height= 1)
username_entry.place(x=100, y=25)
password_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", show="*", width= 200, height= 1)
password_entry.place(x=100, y=75)
login_button=customtkinter.CTkButton(app, command=login, text="Login", font=font1, text_color="#FFFFFF", fg_color="#1f538d", hover_color="#14375e", width=50)
login_button.place(x=165, y=120)
app.mainloop()
`
Tried to do a login box and got this error. Idk how to resolve it
A:
The name username_entry is not visible from inside login when the button callback runs. Because login.py is executed with exec() from inside button_function, the functions it defines do not share a namespace with the variables created at the top level of that script, so the global lookup fails. The simplest fix is to stop relying on globals: pass the entry widgets to login as function arguments and use those parameters inside the function.
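A minimal sketch of that idea for the login script from the question (the widget names are the ones already defined there). The entry widgets are bound as default arguments of a lambda when the button is created, so login no longer depends on globals:
def login(user_entry, pass_entry):
    written_username = user_entry.get()
    written_password = pass_entry.get()
    if written_username == '' or written_password == '':
        messagebox.showwarning(title="Error", message="Enter your username and password.")
        return
    # ... keep the rest of the checks from the question here, reading from
    # written_username / written_password as before ...

login_button = customtkinter.CTkButton(
    app, text="Login", font=font1, text_color="#FFFFFF",
    fg_color="#1f538d", hover_color="#14375e", width=50,
    command=lambda u=username_entry, p=password_entry: login(u, p))
login_button.place(x=165, y=120)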
| NameError: name 'username_entry' is not defined | So i'm trying to do a login gui using customtkinter
I want to have an window with 2 buttons first : Login and Exit
Then when I press Login to open another py script with the login label
If i execute the second script its all right but if I try from the first one I get this error
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\denis\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "D:\test\venv\Lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 527, in _clicked
self._command()
File "<string>", line 21, in login
NameError: name 'username_entry' is not defined
This is the first code:
`
import tkinter
import customtkinter
customtkinter.set_appearance_mode("System")
customtkinter.set_default_color_theme("dark-blue")
app = customtkinter.CTk() # create CTk window like you do with the Tk window
app.title("Menu")
app.geometry("240x240")
app.config(bg="#242320")
def button_function():
exec(open('D:\test\login.py').read())
def Close():
app.destroy()
font1=('Arial', 15, 'bold')
button = customtkinter.CTkButton(master=app, text="Login", font=font1, command=button_function)
button.place(relx=0.5, rely=0.4, anchor=tkinter.CENTER)
button = customtkinter.CTkButton(master=app, text="Exit", font=font1, command=Close)
button.place(relx=0.5, rely=0.6, anchor=tkinter.CENTER)
app.mainloop()
`
and this is the login code:
`
import customtkinter
from tkinter import *
from tkinter import messagebox
app = customtkinter.CTk()
app.title("Login")
app.geometry("350x200")
app.config(bg="#242320")
font1=('Arial', 15, 'bold')
username="hello"
password="123"
trials=0
def login():
global username
global password
global trials
written_username = username_entry.get()
written_password = password_entry.get()
if(written_username == '' or written_password==''):
messagebox.showwarning(title="Error", message="Enter your username and password.")
elif(written_username==username and written_password==password):
new_window=Toplevel(app)
new_window.geometry("350x200")
new_window.config(bg="#242320")
welcome_label=customtkinter.CTkLabel(new_window, text="Welcome...", font=font1, text_color="#FFFFFF")
welcome_label.place(x=100, y=100)
elif((written_username != username or written_password != password) and trials<3):
messagebox.showerror(title="Error", message="Your username or password are not correct.")
trials=trials + 1
if (trials != 3):
trials_label = customtkinter.CTkLabel(app, text=f"You have {3-trials} trials", font=font1, text_color="#FFFFFF")
trials_label.place(x=100, y=160)
if(trials==3):
login_button.destroy()
locked_label = customtkinter.CTkLabel(app, text="Your account is locked.", font=font1, text_color="#FFFFFF")
locked_label.place(x=100, y=160)
username_label=customtkinter.CTkLabel(app, text="Username: ",font=font1, text_color="#FFFFFF")
username_label.place(x=10, y=25)
password_label=customtkinter.CTkLabel(app, text="Password: ",font=font1, text_color="#FFFFFF")
password_label.place(x=10, y=75)
username_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", width= 200, height= 1)
username_entry.place(x=100, y=25)
password_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", show="*", width= 200, height= 1)
password_entry.place(x=100, y=75)
login_button=customtkinter.CTkButton(app, command=login, text="Login", font=font1, text_color="#FFFFFF", fg_color="#1f538d", hover_color="#14375e", width=50)
login_button.place(x=165, y=120)
app.mainloop()
`
Tried to do a login box and got this error. Idk how to resolve it
| [
"Obviously, the username_entry is not defined in the login function body. please add it to the function arguments and then use it properly.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074666259_python.txt |
Q:
Python get key of a value inside a nested dictionary
Let's say I have a dictionary called my_dict:
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}
Then if I input spam, it should return a, and if I input bar it should return spam. If I input b, it should return None. Basically getting the parent of the dictionary.
How would I go about doing this?
A:
A simple recursive function, which returns the current key if needle in v is true; needle in v simply testing if the key exists in the associated value:
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}
def get_parent_key(d: dict, needle: str):
for k, v in d.items():
if isinstance(v, dict):
if needle in v:
return k
if found := get_parent_key(v, needle):
return found
print(get_parent_key(my_dict, 'bar'))
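The same lookup covers the other cases from the question (note that the if found := ... line uses the walrus operator, so this needs Python 3.8 or newer):
print(get_parent_key(my_dict, 'spam'))  # a
print(get_parent_key(my_dict, 'bar'))   # spam
print(get_parent_key(my_dict, 'b'))     # None -- top-level keys have no parent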
| Python get key of a value inside a nested dictionary | Let's say I have a dictionary called my_dic:
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}
Then if I input spam, it should return a, and if I input bar it should return spam. If I input b, it should return None. Basically getting the parent of the dictionary.
How would I go about doing this?
| [
"A simple recursive function, which returns the current key if needle in v is true; needle in v simply testing if the key exists in the associated value:\nmy_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}\n\ndef get_parent_key(d: dict, needle: str):\n for k, v in d.items():\n if isinstance(v, dict):\n if needle in v:\n return k\n \n if found := get_parent_key(v, needle):\n return found\n \nprint(get_parent_key(my_dict, 'bar'))\n\n"
] | [
0
] | [
"To check if a key exists in a dictionary and get its corresponding value, you can use the in keyword and the .get() method.\nHere's an example:\nmy_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs'}, 'b': {'ham'}}\n\n# Check if 'spam' is a key in my_dict and get its value\nif 'spam' in my_dict:\n print(my_dict['spam']) # Output: {'foo': None, 'bar': None, 'baz': None}\nelse:\n print('Key not found')\n\n# Check if 'bar' is a key in my_dict and get its value\nif 'bar' in my_dict:\n print(my_dict['bar']) # Output: Key not found\nelse:\n print('Key not found')\n\n# Use .get() to check if 'b' is a key in my_dict and get its value\nvalue = my_dict.get('b')\nif value is not None:\n print(value) # Output: {'ham'}\nelse:\n print('Key not found')\n\n"
] | [
-1
] | [
"dictionary",
"nested",
"python"
] | stackoverflow_0074666017_dictionary_nested_python.txt |
Q:
Getting distinct values from from a list comprised of lists containing a comma delimited string
Main list:
data = [
["629-2, text1, 12"],
["629-2, text2, 12"],
["407-3, text9, 6"],
["407-3, text4, 6"],
["000-5, text7, 0"],
["000-5, text6, 0"],
]
I want to get a list comprised of unique lists like so:
data_unique = [
["629-2, text1, 12"],
["407-3, text9, 6"],
["000-5, text6, 0"],
]
I've tried using numpy.unique but I need to pare it down further as I need the list to be populated by lists containing a single unique version of the numerical designator in the beginning of the string, ie. 629-2...
I've also tried using chain from itertools like this:
def get_unique(data):
return list(set(chain(*data)))
But that only got me as far as numpy.unique.
Thanks in advance.
A:
Code
from itertools import groupby
def get_unique(data):
def designated_version(item):
return item[0].split(',')[0]
return [list(v)[0]
for _, v in groupby(sorted(data,
key = designated_version),
designated_version)
]
Test
print(get_unique(data))
# Output (groups come back in the sorted order of the designators)
[['000-5, text7, 0'], ['407-3, text9, 6'], ['629-2, text1, 12']]
Explanation
Sorts data by the numerical designator (in case it is not already sorted)
Uses groupby to group items by that designator, extracted with designated_version, i.e. item[0].split(',')[0]
The list comprehension keeps the first item of each group, i.e. list(v)[0]
A:
# Convert the list of lists to a set
data_set = set(tuple(x) for x in data)
# Convert the set back to a list
data_unique = [list(x) for x in data_set]
A:
I have used recursion to solve the problem!
def get_unique(lst):
if not lst:
return []
if lst[0] in lst[1:]:
return get_unique(lst[1:])
else:
return [lst[0]] + get_unique(lst[1:])
data = [
["629-2, text1, 12"],
["629-2, text2, 12"],
["407-3, text9, 6"],
["407-3, text4, 6"],
["000-5, text7, 0"],
["000-5, text6, 0"],
]
print(get_unique(data))
Here I am storing the last occurrence of the element in list.
| Getting distinct values from from a list comprised of lists containing a comma delimited string | Main list:
data = [
["629-2, text1, 12"],
["629-2, text2, 12"],
["407-3, text9, 6"],
["407-3, text4, 6"],
["000-5, text7, 0"],
["000-5, text6, 0"],
]
I want to get a list comprised of unique lists like so:
data_unique = [
["629-2, text1, 12"],
["407-3, text9, 6"],
["000-5, text6, 0"],
]
I've tried using numpy.unique but I need to pare it down further as I need the list to be populated by lists containing a single unique version of the numerical designator in the beginning of the string, ie. 629-2...
I've also tried using chain from itertools like this:
def get_unique(data):
return list(set(chain(*data)))
But that only got me as far as numpy.unique.
Thanks in advance.
| [
"Code\nfrom itertools import groupby\n\ndef get_unique(data):\n def designated_version(item):\n return item[0].split(',')[0]\n\n return [list(v)[0] \n for _, v in groupby(sorted(data, \n key = designated_version),\n designated_version)\n ]\n\n \n\nTest\nprint(get_unique(data))\n# Output\n[['629-2, text1, 12'], ['407-3, text9, 6'], ['000-5, text7, 0']]\n\nExplanation\n\nSorts data by designated number (in case not already sorted)\nUses groupby to group by the unique version of the numerical designator of each item in list i.e. lambda item: item[0].split(',')[0]\nList comprehension keeps the first item in each grouped list i.e. list(v)[0]\n\n",
"# Convert the list of lists to a set\ndata_set = set(tuple(x) for x in data)\n\n# Convert the set back to a list\ndata_unique = [list(x) for x in data_set]\n\n",
"I have used recursion to solve the problem!\ndef get_unique(lst):\n if not lst:\n return []\n if lst[0] in lst[1:]:\n return get_unique(lst[1:])\n else:\n return [lst[0]] + get_unique(lst[1:])\n\ndata = [\n[\"629-2, text1, 12\"],\n[\"629-2, text2, 12\"],\n[\"407-3, text9, 6\"],\n[\"407-3, text4, 6\"],\n[\"000-5, text7, 0\"],\n[\"000-5, text6, 0\"],\n]\nprint(get_unique(data))\n\nHere I am storing the last occurrence of the element in list.\n"
] | [
2,
0,
0
] | [] | [] | [
"numpy",
"python",
"python_itertools"
] | stackoverflow_0074666151_numpy_python_python_itertools.txt |
Q:
Python - How to make a circle made of 32 triangles
I would like to ask how you would make a circle that is made purely of 32 triangles. I'm asking because I'm having trouble writing the code myself, so I thought I'd at least find some help here
I tried to write it but Python doesn't make much sense to me and every time I get somewhere I find out at the end that it was actually useless and that I'm very bad at python
| Python - How to make a circle made of 32 triangles | I would like to ask how you would make a triangle that is purely made of 32 triangles. I'm asking because I'm having trouble writing the code myself, so I thought I'd at least find some help here
I tried to write it but Python doesn't make much sense to me and every time I get somewhere I find out at the end that it was actually useless and that I'm very bad at python
| [] | [] | [
"To draw a circle made of triangles using Python, you can use the turtle module. The turtle module allows you to create simple graphics using a turtle that moves around the screen. You can use the turtle module to draw lines and shapes, and then fill them with color.\n"
] | [
-2
] | [
"python"
] | stackoverflow_0074666273_python.txt |
Q:
Convert RGB array to HSL
A disclaimer first, I'm not very skilled in Python, you guys have my admiration.
My problem:
I need to generate 10k+ images from templates (128px by 128px) with various hues and luminances.
I load the images and turn them into arrays
image = Image.open(dir + "/" + file).convert('RGBA')
arr=np.array(np.asarray(image).astype('float'))
From what I can understand, handling numpy arrays in this fashion is much faster than looping over every pixel and using colorsys.
Now, I've stumbled upon a couple functions to convert rgb to hsv.
This helped me generate my images with different hues, but I also need to play with the brightness so that some can be black, and others white.
def rgb_to_hsv(rgb):
# Translated from source of colorsys.rgb_to_hsv
hsv=np.empty_like(rgb)
hsv[...,3:]=rgb[...,3:]
r,g,b=rgb[...,0],rgb[...,1],rgb[...,2]
maxc = np.max(rgb[...,:2],axis=-1)
minc = np.min(rgb[...,:2],axis=-1)
hsv[...,2] = maxc
hsv[...,1] = (maxc-minc) / maxc
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
hsv[...,0] = np.select([r==maxc,g==maxc],[bc-gc,2.0+rc-bc],default=4.0+gc-rc)
hsv[...,0] = (hsv[...,0]/6.0) % 1.0
idx=(minc == maxc)
hsv[...,0][idx]=0.0
hsv[...,1][idx]=0.0
return hsv
def hsv_to_rgb(hsv):
# Translated from source of colorsys.hsv_to_rgb
rgb=np.empty_like(hsv)
rgb[...,3:]=hsv[...,3:]
h,s,v=hsv[...,0],hsv[...,1],hsv[...,2]
i = (h*6.0).astype('uint8')
f = (h*6.0) - i
p = v*(1.0 - s)
q = v*(1.0 - s*f)
t = v*(1.0 - s*(1.0-f))
i = i%6
conditions=[s==0.0,i==1,i==2,i==3,i==4,i==5]
rgb[...,0]=np.select(conditions,[v,q,p,p,t,v],default=v)
rgb[...,1]=np.select(conditions,[v,v,v,q,p,p],default=t)
rgb[...,2]=np.select(conditions,[v,p,t,v,v,q],default=p)
return rgb
How easy is it to modify these functions to convert to and from HSL?
Any trick to convert HSV to HSL?
Any info you can give me is greatly appreciated, thanks!
A:
Yes, numpy, namely the vectorised code, can speed-up color conversions.
What's more, for massive production of 10k+ bitmaps you may want to re-use a ready-made, professionally implemented conversion, or sub-class it if it does not exactly match your preferred luminance model.
a Computer Vision library OpenCV, currently available for python as a cv2 module, can take care of the colorsystem conversion without any additional coding just with:
a ready-made conversion one-liner
out = cv2.cvtColor( anInputFRAME, cv2.COLOR_YUV2BGR ) # a bitmap conversion
A list of some color-systems available in cv2 (you may notice RGB being referred to as BGR, due to the OpenCV convention of ordering an image's colour planes Blue-Green-Red),
( symmetry applies COLOR_YCR_CB2BGR <-|-> COLOR_BGR2YCR_CB not all pairs shown )
>>> import cv2
>>> for key in dir( cv2 ): # show all ready conversions
... if key[:7] == 'COLOR_Y':
... print key
COLOR_YCR_CB2BGR
COLOR_YCR_CB2RGB
COLOR_YUV2BGR
COLOR_YUV2BGRA_I420
COLOR_YUV2BGRA_IYUV
COLOR_YUV2BGRA_NV12
COLOR_YUV2BGRA_NV21
COLOR_YUV2BGRA_UYNV
COLOR_YUV2BGRA_UYVY
COLOR_YUV2BGRA_Y422
COLOR_YUV2BGRA_YUNV
COLOR_YUV2BGRA_YUY2
COLOR_YUV2BGRA_YUYV
COLOR_YUV2BGRA_YV12
COLOR_YUV2BGRA_YVYU
COLOR_YUV2BGR_I420
COLOR_YUV2BGR_IYUV
COLOR_YUV2BGR_NV12
COLOR_YUV2BGR_NV21
COLOR_YUV2BGR_UYNV
COLOR_YUV2BGR_UYVY
COLOR_YUV2BGR_Y422
COLOR_YUV2BGR_YUNV
COLOR_YUV2BGR_YUY2
COLOR_YUV2BGR_YUYV
COLOR_YUV2BGR_YV12
COLOR_YUV2BGR_YVYU
COLOR_YUV2GRAY_420
COLOR_YUV2GRAY_I420
COLOR_YUV2GRAY_IYUV
COLOR_YUV2GRAY_NV12
COLOR_YUV2GRAY_NV21
COLOR_YUV2GRAY_UYNV
COLOR_YUV2GRAY_UYVY
COLOR_YUV2GRAY_Y422
COLOR_YUV2GRAY_YUNV
COLOR_YUV2GRAY_YUY2
COLOR_YUV2GRAY_YUYV
COLOR_YUV2GRAY_YV12
COLOR_YUV2GRAY_YVYU
COLOR_YUV2RGB
COLOR_YUV2RGBA_I420
COLOR_YUV2RGBA_IYUV
COLOR_YUV2RGBA_NV12
COLOR_YUV2RGBA_NV21
COLOR_YUV2RGBA_UYNV
COLOR_YUV2RGBA_UYVY
COLOR_YUV2RGBA_Y422
COLOR_YUV2RGBA_YUNV
COLOR_YUV2RGBA_YUY2
COLOR_YUV2RGBA_YUYV
COLOR_YUV2RGBA_YV12
COLOR_YUV2RGBA_YVYU
COLOR_YUV2RGB_I420
COLOR_YUV2RGB_IYUV
COLOR_YUV2RGB_NV12
COLOR_YUV2RGB_NV21
COLOR_YUV2RGB_UYNV
COLOR_YUV2RGB_UYVY
COLOR_YUV2RGB_Y422
COLOR_YUV2RGB_YUNV
COLOR_YUV2RGB_YUY2
COLOR_YUV2RGB_YUYV
COLOR_YUV2RGB_YV12
COLOR_YUV2RGB_YVYU
COLOR_YUV420P2BGR
COLOR_YUV420P2BGRA
COLOR_YUV420P2GRAY
COLOR_YUV420P2RGB
COLOR_YUV420P2RGBA
COLOR_YUV420SP2BGR
COLOR_YUV420SP2BGRA
COLOR_YUV420SP2GRAY
COLOR_YUV420SP2RGB
COLOR_YUV420SP2RGBA
I did some prototyping for Luminance conversions ( based on >>> http://en.wikipedia.org/wiki/HSL_and_HSV )
But not tested for release.
def get_YUV_V_Cr_Rec601_BGR_frame( bgrFRAME ): # For the Rec. 601 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE
    out  = numpy.zeros( bgrFRAME.shape[0:2] )
    out += 0.615 / 255 * bgrFRAME[:,:,2] # // Red   # normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ... on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]
    out -= 0.515 / 255 * bgrFRAME[:,:,1] # // Green # OpenCV frames are ordered B, G, R
    out -= 0.100 / 255 * bgrFRAME[:,:,0] # // Blue  # normalise to <0.0 - 1.0> before vectorised MUL/ADD
return out
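For the HSV <-> HSL part of the question, a small vectorised sketch based on the formulas in the Wikipedia article referenced above. Hue is left untouched; S, V and L are assumed to be scaled to <0.0 - 1.0> (divide by 255 first if you kept 8-bit ranges), and any extra channels such as alpha pass through unchanged:
import numpy

def hsv_to_hsl( hsv ):                                   # channels: H, S_v, V, (alpha, ...)
    hsl   = hsv.copy()
    s, v  = hsv[...,1], hsv[...,2]
    l     = v * ( 1.0 - s / 2.0 )                        # L = V * ( 1 - Sv / 2 )
    denom = numpy.minimum( l, 1.0 - l )
    with numpy.errstate( divide = 'ignore', invalid = 'ignore' ):
        hsl[...,1] = numpy.where( denom == 0.0, 0.0, ( v - l ) / denom )
    hsl[...,2] = l
    return hsl

def hsl_to_hsv( hsl ):                                   # channels: H, S_l, L, (alpha, ...)
    hsv   = hsl.copy()
    s, l  = hsl[...,1], hsl[...,2]
    v     = l + s * numpy.minimum( l, 1.0 - l )          # V = L + Sl * min( L, 1 - L )
    with numpy.errstate( divide = 'ignore', invalid = 'ignore' ):
        hsv[...,1] = numpy.where( v == 0.0, 0.0, 2.0 * ( 1.0 - l / v ) )
    hsv[...,2] = v
    return hsv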
A:
# -*- coding: utf-8 -*-
# @File : rgb2hls.py
# @Info : @ TSMC
# @Desc :
import colorsys
import numpy as np
import scipy.misc
import tensorflow as tf
from PIL import Image
def rgb2hls(img):
""" note: elements in img is a float number less than 1.0 and greater than 0.
:param img: an numpy ndarray with shape NHWC
:return:
"""
assert len(img.shape) == 3
hue = np.zeros_like(img[:, :, 0])
luminance = np.zeros_like(img[:, :, 0])
saturation = np.zeros_like(img[:, :, 0])
for x in range(height):
for y in range(width):
r, g, b = img[x, y]
h, l, s = colorsys.rgb_to_hls(r, g, b)
hue[x, y] = h
luminance[x, y] = l
saturation[x, y] = s
return hue, luminance, saturation
def np_rgb2hls(img):
r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]
maxc = np.max(img, -1)
minc = np.min(img, -1)
l = (minc + maxc) / 2.0
if np.array_equal(minc, maxc):
return np.zeros_like(l), l, np.zeros_like(l)
smask = np.greater(l, 0.5).astype(np.float32)
s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))
rc = (maxc - r) / (maxc - minc + 0.001)
gc = (maxc - g) / (maxc - minc + 0.001)
bc = (maxc - b) / (maxc - minc + 0.001)
rmask = np.equal(r, maxc).astype(np.float32)
gmask = np.equal(g, maxc).astype(np.float32)
rgmask = np.logical_or(rmask, gmask).astype(np.float32)
h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)
h = np.remainder(h / 6.0, 1.0)
return h, l, s
def tf_rgb2hls(img):
""" note: elements in img all in [0,1]
:param img: a tensor with shape NHWC
:return:
"""
assert img.get_shape()[-1] == 3
r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]
maxc = tf.reduce_max(img, -1)
minc = tf.reduce_min(img, -1)
l = (minc + maxc) / 2.0
# if tf.reduce_all(tf.equal(minc, maxc)):
# return tf.zeros_like(l), l, tf.zeros_like(l)
smask = tf.cast(tf.greater(l, 0.5), tf.float32)
s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))
rc = (maxc - r) / (maxc - minc + 0.001)
gc = (maxc - g) / (maxc - minc + 0.001)
bc = (maxc - b) / (maxc - minc + 0.001)
rmask = tf.equal(r, maxc)
gmask = tf.equal(g, maxc)
rgmask = tf.cast(tf.logical_or(rmask, gmask), tf.float32)
rmask = tf.cast(rmask, tf.float32)
gmask = tf.cast(gmask, tf.float32)
h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)
h = tf.mod(h / 6.0, 1.0)
h = tf.expand_dims(h, -1)
l = tf.expand_dims(l, -1)
s = tf.expand_dims(s, -1)
x = tf.concat([tf.zeros_like(l), l, tf.zeros_like(l)], -1)
y = tf.concat([h, l, s], -1)
return tf.where(condition=tf.reduce_all(tf.equal(minc, maxc)), x=x, y=y)
if __name__ == '__main__':
"""
HLS: Hue, Luminance, Saturation
H: position in the spectrum
L: color lightness
S: color saturation
"""
avatar = Image.open("hue.jpg")
width, height = avatar.size
print("width: {}, height: {}".format(width, height))
img = np.array(avatar)
img = img / 255.0
print(img.shape)
# # hue, luminance, saturation = rgb2hls(img)
# hue, luminance, saturation = np_rgb2hls(img)
img_tensor = tf.convert_to_tensor(img, tf.float32)
hls = tf_rgb2hls(img_tensor)
h, l, s = hls[:, :, 0], hls[:, :, 1], hls[:, :, 2]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
hue, luminance, saturation = sess.run([h, l, s])
scipy.misc.imsave("hls_h_.jpg", hue)
scipy.misc.imsave("hls_l_.jpg", luminance)
scipy.misc.imsave("hls_s_.jpg", saturation)
A:
In case someone is looking for a self-contained solution (I really didn't want to add OpenCV as a dependency), I rewrote the official python colorsys rgb_to_hls() and hls_to_rgb() functions to be usable for numpy:
import numpy as np
def rgb_to_hls(rgb_array: np.ndarray) -> np.ndarray:
"""
Expects an array of shape (X, 3), each row being RGB colours.
Returns an array of same size, each row being HLS colours.
Like `colorsys` python module, all values are between 0 and 1.
NOTE: like `colorsys`, this uses HLS rather than the more usual HSL
"""
assert rgb_array.ndim == 2
assert rgb_array.shape[1] == 3
assert np.max(rgb_array) <= 1
assert np.min(rgb_array) >= 0
r, g, b = rgb_array.T.reshape((3, -1, 1))
maxc = np.max(rgb_array, axis=1).reshape((-1, 1))
minc = np.min(rgb_array, axis=1).reshape((-1, 1))
sumc = (maxc+minc)
rangec = (maxc-minc)
with np.errstate(divide='ignore', invalid='ignore'):
rgb_c = (maxc - rgb_array) / rangec
rc, gc, bc = rgb_c.T.reshape((3, -1, 1))
h = (np.where(minc == maxc, 0, np.where(r == maxc, bc - gc, np.where(g == maxc, 2.0+rc-bc, 4.0+gc-rc)))
/ 6) % 1
l = sumc/2.0
with np.errstate(divide='ignore', invalid='ignore'):
s = np.where(minc == maxc, 0,
np.where(l < 0.5, rangec / sumc, rangec / (2.0-sumc)))
return np.concatenate((h, l, s), axis=1)
def hls_to_rgb(hls_array: np.ndarray) -> np.ndarray:
"""
Expects an array of shape (X, 3), each row being HLS colours.
Returns an array of same size, each row being RGB colours.
Like `colorsys` python module, all values are between 0 and 1.
NOTE: like `colorsys`, this uses HLS rather than the more usual HSL
"""
ONE_THIRD = 1 / 3
TWO_THIRD = 2 / 3
ONE_SIXTH = 1 / 6
def _v(m1, m2, h):
h = h % 1.0
return np.where(h < ONE_SIXTH, m1 + (m2 - m1) * h * 6,
np.where(h < .5, m2,
np.where(h < TWO_THIRD, m1 + (m2 - m1) * (TWO_THIRD - h) * 6,
m1)))
assert hls_array.ndim == 2
assert hls_array.shape[1] == 3
assert np.max(hls_array) <= 1
assert np.min(hls_array) >= 0
h, l, s = hls_array.T.reshape((3, -1, 1))
m2 = np.where(l < 0.5, l * (1 + s), l + s - (l * s))
m1 = 2 * l - m2
r = np.where(s == 0, l, _v(m1, m2, h + ONE_THIRD))
g = np.where(s == 0, l, _v(m1, m2, h))
b = np.where(s == 0, l, _v(m1, m2, h - ONE_THIRD))
return np.concatenate((r, g, b), axis=1)
def _test1():
import colorsys
rgb_array = np.array([[.5, .5, .8], [.3, .7, 1], [0, 0, 0], [1, 1, 1], [.5, .5, .5]])
hls_array = rgb_to_hls(rgb_array)
for rgb, hls in zip(rgb_array, hls_array):
assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))
new_rgb_array = hls_to_rgb(hls_array)
for hls, rgb in zip(hls_array, new_rgb_array):
assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))
assert np.all(abs(rgb_array - new_rgb_array) < 0.001)
print("tests part 1 done")
def _test2():
import colorsys
hls_array = np.array([[0.6456692913385826, 0.14960629921259844, 0.7480314960629921], [.3, .7, 1], [0, 0, 0], [0, 1, 0], [.5, .5, .5]])
rgb_array = hls_to_rgb(hls_array)
for hls, rgb in zip(hls_array, rgb_array):
assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))
new_hls_array = rgb_to_hls(rgb_array)
for rgb, hls in zip(rgb_array, new_hls_array):
assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))
assert np.all(abs(hls_array - new_hls_array) < 0.001)
print("All tests done")
def _test():
_test1()
_test2()
if __name__ == "__main__":
_test()
(see gist)
(off topic: converting the other functions in the same way is actually a great training for someone wanting to get their hands dirty with numpy (or other SIMD / GPU) programming). Let me know if you do so :)
edit: rgb_to_hsv and hsv_to_rgb now also in the gist.
| Convert RGB array to HSL | A disclaimer first, I'm not very skilled in Python, you guys have my admiration.
My problem:
I need to generate 10k+ images from templates (128px by 128px) with various hues and luminances.
I load the images and turn them into arrays
image = Image.open(dir + "/" + file).convert('RGBA')
arr=np.array(np.asarray(image).astype('float'))
From what I can understand, handling numpy arrays in this fashion is much faster than looping over every pixels and using colorsys.
Now, I've stumbled upon a couple functions to convert rgb to hsv.
This helped me generate my images with different hues, but I also need to play with the brightness so that some can be black, and others white.
def rgb_to_hsv(rgb):
# Translated from source of colorsys.rgb_to_hsv
hsv=np.empty_like(rgb)
hsv[...,3:]=rgb[...,3:]
r,g,b=rgb[...,0],rgb[...,1],rgb[...,2]
maxc = np.max(rgb[...,:2],axis=-1)
minc = np.min(rgb[...,:2],axis=-1)
hsv[...,2] = maxc
hsv[...,1] = (maxc-minc) / maxc
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
hsv[...,0] = np.select([r==maxc,g==maxc],[bc-gc,2.0+rc-bc],default=4.0+gc-rc)
hsv[...,0] = (hsv[...,0]/6.0) % 1.0
idx=(minc == maxc)
hsv[...,0][idx]=0.0
hsv[...,1][idx]=0.0
return hsv
def hsv_to_rgb(hsv):
# Translated from source of colorsys.hsv_to_rgb
rgb=np.empty_like(hsv)
rgb[...,3:]=hsv[...,3:]
h,s,v=hsv[...,0],hsv[...,1],hsv[...,2]
i = (h*6.0).astype('uint8')
f = (h*6.0) - i
p = v*(1.0 - s)
q = v*(1.0 - s*f)
t = v*(1.0 - s*(1.0-f))
i = i%6
conditions=[s==0.0,i==1,i==2,i==3,i==4,i==5]
rgb[...,0]=np.select(conditions,[v,q,p,p,t,v],default=v)
rgb[...,1]=np.select(conditions,[v,v,v,q,p,p],default=t)
rgb[...,2]=np.select(conditions,[v,p,t,v,v,q],default=p)
return rgb
How easy is it to modify these functions to convert to and from HSL?
Any trick to convert HSV to HSL?
Any info you can give me is greatly appreciated, thanks!
| [
"Yes, numpy, namely the vectorised code, can speed-up color conversions.\nThe more, for massive production of 10k+ bitmaps, you may want to re-use a ready made professional conversion, or sub-class it, if it is not exactly matching your preferred Luminance model.\na Computer Vision library OpenCV, currently available for python as a cv2 module, can take care of the colorsystem conversion without any additional coding just with:\na ready-made conversion one-liner\nout = cv2.cvtColor( anInputFRAME, cv2.COLOR_YUV2BGR ) # a bitmap conversion\n\nA list of some color-systems available in cv2 ( you may notice RGB to be referred to as BRG due to OpenCV convention of a different ordering of an image's Blue-Red-Green color-planes ), \n( symmetry applies COLOR_YCR_CB2BGR <-|-> COLOR_BGR2YCR_CB not all pairs shown )\n>>> import cv2\n>>> for key in dir( cv2 ): # show all ready conversions\n... if key[:7] == 'COLOR_Y':\n... print key\n\nCOLOR_YCR_CB2BGR\nCOLOR_YCR_CB2RGB\nCOLOR_YUV2BGR\nCOLOR_YUV2BGRA_I420\nCOLOR_YUV2BGRA_IYUV\nCOLOR_YUV2BGRA_NV12\nCOLOR_YUV2BGRA_NV21\nCOLOR_YUV2BGRA_UYNV\nCOLOR_YUV2BGRA_UYVY\nCOLOR_YUV2BGRA_Y422\nCOLOR_YUV2BGRA_YUNV\nCOLOR_YUV2BGRA_YUY2\nCOLOR_YUV2BGRA_YUYV\nCOLOR_YUV2BGRA_YV12\nCOLOR_YUV2BGRA_YVYU\nCOLOR_YUV2BGR_I420\nCOLOR_YUV2BGR_IYUV\nCOLOR_YUV2BGR_NV12\nCOLOR_YUV2BGR_NV21\nCOLOR_YUV2BGR_UYNV\nCOLOR_YUV2BGR_UYVY\nCOLOR_YUV2BGR_Y422\nCOLOR_YUV2BGR_YUNV\nCOLOR_YUV2BGR_YUY2\nCOLOR_YUV2BGR_YUYV\nCOLOR_YUV2BGR_YV12\nCOLOR_YUV2BGR_YVYU\nCOLOR_YUV2GRAY_420\nCOLOR_YUV2GRAY_I420\nCOLOR_YUV2GRAY_IYUV\nCOLOR_YUV2GRAY_NV12\nCOLOR_YUV2GRAY_NV21\nCOLOR_YUV2GRAY_UYNV\nCOLOR_YUV2GRAY_UYVY\nCOLOR_YUV2GRAY_Y422\nCOLOR_YUV2GRAY_YUNV\nCOLOR_YUV2GRAY_YUY2\nCOLOR_YUV2GRAY_YUYV\nCOLOR_YUV2GRAY_YV12\nCOLOR_YUV2GRAY_YVYU\nCOLOR_YUV2RGB\nCOLOR_YUV2RGBA_I420\nCOLOR_YUV2RGBA_IYUV\nCOLOR_YUV2RGBA_NV12\nCOLOR_YUV2RGBA_NV21\nCOLOR_YUV2RGBA_UYNV\nCOLOR_YUV2RGBA_UYVY\nCOLOR_YUV2RGBA_Y422\nCOLOR_YUV2RGBA_YUNV\nCOLOR_YUV2RGBA_YUY2\nCOLOR_YUV2RGBA_YUYV\nCOLOR_YUV2RGBA_YV12\nCOLOR_YUV2RGBA_YVYU\nCOLOR_YUV2RGB_I420\nCOLOR_YUV2RGB_IYUV\nCOLOR_YUV2RGB_NV12\nCOLOR_YUV2RGB_NV21\nCOLOR_YUV2RGB_UYNV\nCOLOR_YUV2RGB_UYVY\nCOLOR_YUV2RGB_Y422\nCOLOR_YUV2RGB_YUNV\nCOLOR_YUV2RGB_YUY2\nCOLOR_YUV2RGB_YUYV\nCOLOR_YUV2RGB_YV12\nCOLOR_YUV2RGB_YVYU\nCOLOR_YUV420P2BGR\nCOLOR_YUV420P2BGRA\nCOLOR_YUV420P2GRAY\nCOLOR_YUV420P2RGB\nCOLOR_YUV420P2RGBA\nCOLOR_YUV420SP2BGR\nCOLOR_YUV420SP2BGRA\nCOLOR_YUV420SP2GRAY\nCOLOR_YUV420SP2RGB\nCOLOR_YUV420SP2RGBA\n\nI did some prototyping for Luminance conversions ( based on >>> http://en.wikipedia.org/wiki/HSL_and_HSV )\nBut not tested for release.\ndef get_YUV_V_Cr_Rec601_BRG_frame( brgFRAME ): # For the Rec. 601 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE\n out = numpy.zeros( brgFRAME.shape[0:2] )\n out += 0.615 / 255 * brgFRAME[:,:,1] # // Red # normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ... on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]\n out -= 0.515 / 255 * brgFRAME[:,:,2] # // Green\n out -= 0.100 / 255 * brgFRAME[:,:,0] # // Blue # normalise to <0.0 - 1.0> before vectorised MUL/ADD\n return out\n\n",
"# -*- coding: utf-8 -*-\n# @File : rgb2hls.py\n# @Info : @ TSMC\n# @Desc :\n\n\nimport colorsys\n\nimport numpy as np\nimport scipy.misc\nimport tensorflow as tf\nfrom PIL import Image\n\n\ndef rgb2hls(img):\n \"\"\" note: elements in img is a float number less than 1.0 and greater than 0.\n :param img: an numpy ndarray with shape NHWC\n :return:\n \"\"\"\n assert len(img.shape) == 3\n hue = np.zeros_like(img[:, :, 0])\n luminance = np.zeros_like(img[:, :, 0])\n saturation = np.zeros_like(img[:, :, 0])\n for x in range(height):\n for y in range(width):\n r, g, b = img[x, y]\n h, l, s = colorsys.rgb_to_hls(r, g, b)\n hue[x, y] = h\n luminance[x, y] = l\n saturation[x, y] = s\n return hue, luminance, saturation\n\n\ndef np_rgb2hls(img):\n r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]\n\n maxc = np.max(img, -1)\n minc = np.min(img, -1)\n l = (minc + maxc) / 2.0\n if np.array_equal(minc, maxc):\n return np.zeros_like(l), l, np.zeros_like(l)\n smask = np.greater(l, 0.5).astype(np.float32)\n\n s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))\n rc = (maxc - r) / (maxc - minc + 0.001)\n gc = (maxc - g) / (maxc - minc + 0.001)\n bc = (maxc - b) / (maxc - minc + 0.001)\n\n rmask = np.equal(r, maxc).astype(np.float32)\n gmask = np.equal(g, maxc).astype(np.float32)\n rgmask = np.logical_or(rmask, gmask).astype(np.float32)\n\n h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)\n h = np.remainder(h / 6.0, 1.0)\n return h, l, s\n\n\ndef tf_rgb2hls(img):\n \"\"\" note: elements in img all in [0,1]\n :param img: a tensor with shape NHWC\n :return:\n \"\"\"\n assert img.get_shape()[-1] == 3\n r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]\n maxc = tf.reduce_max(img, -1)\n minc = tf.reduce_min(img, -1)\n\n l = (minc + maxc) / 2.0\n\n # if tf.reduce_all(tf.equal(minc, maxc)):\n # return tf.zeros_like(l), l, tf.zeros_like(l)\n smask = tf.cast(tf.greater(l, 0.5), tf.float32)\n\n s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))\n rc = (maxc - r) / (maxc - minc + 0.001)\n gc = (maxc - g) / (maxc - minc + 0.001)\n bc = (maxc - b) / (maxc - minc + 0.001)\n\n rmask = tf.equal(r, maxc)\n gmask = tf.equal(g, maxc)\n rgmask = tf.cast(tf.logical_or(rmask, gmask), tf.float32)\n rmask = tf.cast(rmask, tf.float32)\n gmask = tf.cast(gmask, tf.float32)\n\n h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)\n h = tf.mod(h / 6.0, 1.0)\n\n h = tf.expand_dims(h, -1)\n l = tf.expand_dims(l, -1)\n s = tf.expand_dims(s, -1)\n\n x = tf.concat([tf.zeros_like(l), l, tf.zeros_like(l)], -1)\n y = tf.concat([h, l, s], -1)\n\n return tf.where(condition=tf.reduce_all(tf.equal(minc, maxc)), x=x, y=y)\n\n\nif __name__ == '__main__':\n \"\"\"\n HLS: Hue, Luminance, Saturation\n H: position in the spectrum\n L: color lightness\n S: color saturation\n \"\"\"\n avatar = Image.open(\"hue.jpg\")\n width, height = avatar.size\n print(\"width: {}, height: {}\".format(width, height))\n img = np.array(avatar)\n img = img / 255.0\n print(img.shape)\n\n # # hue, luminance, saturation = rgb2hls(img)\n # hue, luminance, saturation = np_rgb2hls(img)\n\n img_tensor = tf.convert_to_tensor(img, tf.float32)\n hls = tf_rgb2hls(img_tensor)\n h, l, s = hls[:, :, 0], hls[:, :, 1], hls[:, :, 2]\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n hue, luminance, saturation = sess.run([h, l, s])\n scipy.misc.imsave(\"hls_h_.jpg\", hue)\n 
scipy.misc.imsave(\"hls_l_.jpg\", luminance)\n scipy.misc.imsave(\"hls_s_.jpg\", saturation)\n\n",
"In case someone is looking for a self-contained solution (I really didn't want to add OpenCV as a dependency), I rewrote the official python colorsys rgb_to_hls() and hls_to_rgb() functions to be usable for numpy:\nimport numpy as np\n\ndef rgb_to_hls(rgb_array: np.ndarray) -> np.ndarray:\n \"\"\"\n Expects an array of shape (X, 3), each row being RGB colours.\n Returns an array of same size, each row being HLS colours.\n Like `colorsys` python module, all values are between 0 and 1.\n\n NOTE: like `colorsys`, this uses HLS rather than the more usual HSL\n \"\"\"\n assert rgb_array.ndim == 2\n assert rgb_array.shape[1] == 3\n assert np.max(rgb_array) <= 1\n assert np.min(rgb_array) >= 0\n\n r, g, b = rgb_array.T.reshape((3, -1, 1))\n maxc = np.max(rgb_array, axis=1).reshape((-1, 1))\n minc = np.min(rgb_array, axis=1).reshape((-1, 1))\n\n sumc = (maxc+minc)\n rangec = (maxc-minc)\n\n with np.errstate(divide='ignore', invalid='ignore'):\n rgb_c = (maxc - rgb_array) / rangec\n rc, gc, bc = rgb_c.T.reshape((3, -1, 1))\n\n h = (np.where(minc == maxc, 0, np.where(r == maxc, bc - gc, np.where(g == maxc, 2.0+rc-bc, 4.0+gc-rc)))\n / 6) % 1\n l = sumc/2.0\n with np.errstate(divide='ignore', invalid='ignore'):\n s = np.where(minc == maxc, 0,\n np.where(l < 0.5, rangec / sumc, rangec / (2.0-sumc)))\n\n return np.concatenate((h, l, s), axis=1)\n\n\ndef hls_to_rgb(hls_array: np.ndarray) -> np.ndarray:\n \"\"\"\n Expects an array of shape (X, 3), each row being HLS colours.\n Returns an array of same size, each row being RGB colours.\n Like `colorsys` python module, all values are between 0 and 1.\n\n NOTE: like `colorsys`, this uses HLS rather than the more usual HSL\n \"\"\"\n ONE_THIRD = 1 / 3\n TWO_THIRD = 2 / 3\n ONE_SIXTH = 1 / 6\n\n def _v(m1, m2, h):\n h = h % 1.0\n return np.where(h < ONE_SIXTH, m1 + (m2 - m1) * h * 6,\n np.where(h < .5, m2,\n np.where(h < TWO_THIRD, m1 + (m2 - m1) * (TWO_THIRD - h) * 6,\n m1)))\n\n\n assert hls_array.ndim == 2\n assert hls_array.shape[1] == 3\n assert np.max(hls_array) <= 1\n assert np.min(hls_array) >= 0\n\n h, l, s = hls_array.T.reshape((3, -1, 1))\n m2 = np.where(l < 0.5, l * (1 + s), l + s - (l * s))\n m1 = 2 * l - m2\n\n r = np.where(s == 0, l, _v(m1, m2, h + ONE_THIRD))\n g = np.where(s == 0, l, _v(m1, m2, h))\n b = np.where(s == 0, l, _v(m1, m2, h - ONE_THIRD))\n\n return np.concatenate((r, g, b), axis=1)\n\n\ndef _test1():\n import colorsys\n rgb_array = np.array([[.5, .5, .8], [.3, .7, 1], [0, 0, 0], [1, 1, 1], [.5, .5, .5]])\n hls_array = rgb_to_hls(rgb_array)\n for rgb, hls in zip(rgb_array, hls_array):\n assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))\n new_rgb_array = hls_to_rgb(hls_array)\n for hls, rgb in zip(hls_array, new_rgb_array):\n assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))\n assert np.all(abs(rgb_array - new_rgb_array) < 0.001)\n print(\"tests part 1 done\")\n\ndef _test2():\n import colorsys\n hls_array = np.array([[0.6456692913385826, 0.14960629921259844, 0.7480314960629921], [.3, .7, 1], [0, 0, 0], [0, 1, 0], [.5, .5, .5]])\n rgb_array = hls_to_rgb(hls_array)\n for hls, rgb in zip(hls_array, rgb_array):\n assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))\n new_hls_array = rgb_to_hls(rgb_array)\n for rgb, hls in zip(rgb_array, new_hls_array):\n assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))\n assert np.all(abs(hls_array - new_hls_array) < 0.001)\n print(\"All tests done\")\n\ndef _test():\n _test1()\n _test2()\n\nif __name__ == \"__main__\":\n 
_test()\n\n(see gist)\n(off topic: converting the other functions in the same way is actually a great training for someone wanting to get their hands dirty with numpy (or other SIMD / GPU) programming). Let me know if you do so :)\n\nedit: rgb_to_hsv and hsv_to_rgb now also in the gist.\n"
] | [
1,
0,
0
] | [] | [] | [
"hsl",
"numpy",
"python",
"rgb"
] | stackoverflow_0026292114_hsl_numpy_python_rgb.txt |
Q:
How to make early stopping in image classification pytorch
I'm new to PyTorch and machine learning. I'm following this tutorial: https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/ and using my own custom dataset. I run into the same problem as in the tutorial, but I don't know how to implement early stopping in PyTorch; if you know a better approach that avoids writing the early-stopping logic by hand, please tell me.
A:
This is what I did in each epoch
val_loss += loss
val_loss = val_loss / len(trainloader)
if val_loss < min_val_loss:
#Saving the model
if min_loss > loss.item():
min_loss = loss.item()
best_model = copy.deepcopy(loaded_model.state_dict())
print('Min loss %0.2f' % min_loss)
epochs_no_improve = 0
min_val_loss = val_loss
else:
epochs_no_improve += 1
# Check early stopping condition
if epochs_no_improve == n_epochs_stop:
print('Early stopping!' )
loaded_model.load_state_dict(best_model)
Donno how correct it is (I took most parts of this code from a post on another website, but forgot where, so I can't put the reference link. I have just modified it a bit), hope you find it useful, in case I'm wrong, kindly point out the mistake. Thank you
A:
Try with below code.
# Check early stopping condition
if epochs_no_improve == n_epochs_stop:
print('Early stopping!' )
early_stop = True
break
else:
continue
break
if early_stop:
print("Stopped")
break
A:
The idea of early stopping is to avoid overfitting by stopping the training process if there is no sign of improvement upon a monitored quantity, e.g. validation loss stops decreasing after a few iterations. A minimal implementation of early stopping needs 3 components:
best_score variable to store the best value of validation loss
counter variable to keep track of the number of iteration running
patience variable defines the number of epochs allows to continue training without improvement upon the validation loss. If the counter exceeds this, we stop the training process.
A pseudocode looks like this
# Define best_score, counter, and patience for early stopping:
best_score = None
counter = 0
patience = 10
path = ./checkpoints # user_defined path to save model
# Training loop:
for epoch in range(num_epochs):
# Compute training loss
loss = model(features,labels,train_mask)
# Compute validation loss
val_loss = evaluate(model, features, labels, val_mask)
if best_score is None:
best_score = val_loss
else:
# Check if val_loss improves or not.
if val_loss < best_score:
# val_loss improves, we update the latest best_score,
# and save the current model
best_score = val_loss
torch.save({'state_dict':model.state_dict()}, path)
else:
# val_loss does not improve, we increase the counter,
# stop training if it exceeds the amount of patience
counter += 1
if counter >= patience:
break
# Load best model
print('loading model before testing.')
model_checkpoint = torch.load(path)
model.load_state_dict(model_checkpoint['state_dict'])
acc = evaluate_test(model, features, labels, test_mask)
I've implemented a generic early stopping class for PyTorch to use with some of my projects. It allows you to select any validation quantity of interest (loss, accuracy, etc.). If you prefer a fancier early stopping then feel free to check it out in the repo early-stopping. There's an example notebook for reference too.
A:
One way to implement early stopping in PyTorch is to use a callback function that is called at the end of each epoch. This function can check the validation loss and stop training if the loss has not improved for a certain number of epochs.
Here is an example of how this could be implemented:
# Define a function to check if the validation loss has improved
def check_validation_loss(model, best_loss, current_epoch):
    # Calculate the validation loss
    val_loss = calculate_validation_loss(model)

    # If the validation loss has not improved for 3 epochs, stop training
    if current_epoch - best_loss['epoch'] >= 3:
        print('Stopping training, validation loss has not improved for 3 epochs')
        return True

    # If the validation loss is better than the best loss, update the best loss
    if val_loss < best_loss['loss']:
        best_loss['loss'] = val_loss
        best_loss['epoch'] = current_epoch

    return False

# Define a function to calculate the validation loss
def calculate_validation_loss(model):
    # TODO: Calculate the validation loss
    ...

# Define the training loop
best_loss = {'loss': float('inf'), 'epoch': 0}

for epoch in range(1, num_epochs + 1):
    # Train the model for one epoch
    train_model(model, epoch)

    # Check if we should stop training
    if check_validation_loss(model, best_loss, epoch):
        break
This code uses a dictionary to track the best validation loss and the epoch when it occurred. The check_validation_loss function calculates the validation loss, compares it to the best loss, and returns True if the training should be stopped.
Note that the calculate_validation_loss function is not implemented in this code, so you would need to add your own implementation for this. The train_model function is also not implemented, but this could be replaced with your own training code.
Alternatively, instead of implementing early stopping yourself, you could lean on existing tools: torch.optim.lr_scheduler.ReduceLROnPlateau reacts to a plateauing validation metric by lowering the learning rate (it does not stop training), and higher-level libraries such as PyTorch Lightning ship a ready-made EarlyStopping callback (pytorch_lightning.callbacks.EarlyStopping). These give you more flexibility and options for controlling the stopping behaviour.
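If you would rather not write the loop yourself at all, a minimal sketch using PyTorch Lightning's built-in callback (this assumes the model has already been wrapped in a LightningModule that logs a "val_loss" metric):
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

# stop when val_loss has not improved for 3 consecutive validation runs
early_stop = EarlyStopping(monitor="val_loss", patience=3, mode="min")
trainer = pl.Trainer(max_epochs=100, callbacks=[early_stop])
# trainer.fit(lightning_model, train_dataloaders, val_dataloaders)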
| How to make early stopping in image classification pytorch | I'm new with Pytorch and machine learning I'm follow this tutorial in this tutorial https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/ and use my custom dataset. Then I have same problem in this tutorial but I dont know how to make early stopping in pytorch and if do you have better without create early stopping process please tell me.
| [
"This is what I did in each epoch\nval_loss += loss\nval_loss = val_loss / len(trainloader)\nif val_loss < min_val_loss:\n #Saving the model\n if min_loss > loss.item():\n min_loss = loss.item()\n best_model = copy.deepcopy(loaded_model.state_dict())\n print('Min loss %0.2f' % min_loss)\n epochs_no_improve = 0\n min_val_loss = val_loss\n\nelse:\n epochs_no_improve += 1\n # Check early stopping condition\n if epochs_no_improve == n_epochs_stop:\n print('Early stopping!' )\n loaded_model.load_state_dict(best_model)\n\nDonno how correct it is (I took most parts of this code from a post on another website, but forgot where, so I can't put the reference link. I have just modified it a bit), hope you find it useful, in case I'm wrong, kindly point out the mistake. Thank you\n",
"Try with below code.\n # Check early stopping condition\n if epochs_no_improve == n_epochs_stop:\n print('Early stopping!' )\n early_stop = True\n break\n else:\n continue\n break\nif early_stop:\n print(\"Stopped\")\n break\n\n",
"The idea of early stopping is to avoid overfitting by stopping the training process if there is no sign of improvement upon a monitored quantity, e.g. validation loss stops decreasing after a few iterations. A minimal implementation of early stopping needs 3 components:\n\nbest_score variable to store the best value of validation loss\ncounter variable to keep track of the number of iteration running\npatience variable defines the number of epochs allows to continue training without improvement upon the validation loss. If the counter exceeds this, we stop the training process.\n\nA pseudocode looks like this\n# Define best_score, counter, and patience for early stopping:\nbest_score = None\ncounter = 0\npatience = 10\npath = ./checkpoints # user_defined path to save model\n\n# Training loop:\nfor epoch in range(num_epochs):\n # Compute training loss\n loss = model(features,labels,train_mask)\n \n # Compute validation loss\n val_loss = evaluate(model, features, labels, val_mask)\n \n if best_score is None:\n best_score = val_loss\n else:\n # Check if val_loss improves or not.\n if val_loss < best_score:\n # val_loss improves, we update the latest best_score, \n # and save the current model\n best_score = val_loss\n torch.save({'state_dict':model.state_dict()}, path)\n else:\n # val_loss does not improve, we increase the counter, \n # stop training if it exceeds the amount of patience\n counter += 1\n if counter >= patience:\n break\n\n# Load best model \nprint('loading model before testing.')\nmodel_checkpoint = torch.load(path)\n\nmodel.load_state_dict(model_checkpoint['state_dict'])\n\nacc = evaluate_test(model, features, labels, test_mask) \n\nI've implemented an generic early stopping class for Pytorch to use with my some of projects. It allows you to select any validation quantity of interest (loss, accuracy, etc.). If you prefer a fancier early stopping then feel free to check it out in the repo early-stopping. There's an example notebook for reference too\n",
"One way to implement early stopping in PyTorch is to use a callback function that is called at the end of each epoch. This function can check the validation loss and stop training if the loss has not improved for a certain number of epochs.\nHere is an example of how this could be implemented:\nDefine a function to check if the validation loss has improved\ndef check_validation_loss(model, best_loss, current_epoch):\nCalculate the validation loss\nval_loss = calculate_validation_loss(model)\n# If the validation loss has not improved for 3 epochs, stop training\nif current_epoch - best_loss['epoch'] >= 3:\n print('Stopping training, validation loss has not improved for 3 epochs')\n return True\n\n# If the validation loss is better than the best loss, update the best loss\nif val_loss < best_loss['loss']:\n best_loss['loss'] = val_loss\n best_loss['epoch'] = current_epoch\n\nreturn False\n\n\nDefine a function to calculate the validation loss\ndef calculate_validation_loss(model):\nTODO: Calculate the validation loss\nDefine the training loop\nbest_loss = {'loss': float('inf'), 'epoch': 0}\n\nfor epoch in range(1, num_epochs + 1):\n\nTrain the model for one epoch\ntrain_model(model, epoch)\n# Check if we should stop training\nif check_validation_loss(model, best_loss, epoch):\n break\n\n\nThis code uses a dictionary to track the best validation loss and the epoch when it occurred. The check_validation_loss function calculates the validation loss, compares it to the best loss, and returns True if the training should be stopped.\nNote that the calculate_validation_loss function is not implemented in this code, so you would need to add your own implementation for this. The train_model function is also not implemented, but this could be replaced with your own training code.\nAlternatively, instead of implementing your own early stopping, you could use one of the existing early stopping implementations in PyTorch, such as torch.optim.lr_scheduler.ReduceLROnPlateau or torch.utils.callbacks.EarlyStopping. These can be used in a similar way to the above code, but provide more flexibility and options for controlling the early stopping behavior.\n"
] | [
3,
0,
0,
0
] | [] | [] | [
"early_stopping",
"python",
"pytorch"
] | stackoverflow_0060200088_early_stopping_python_pytorch.txt |
Q:
How to change matplotlib marker into a football icon?
I have a visualization like this:
I want to change the marker icon into a football icon with the same color as the line
My code looks like this :
fig, ax = plt.subplots(figsize=(12,6))
ax.step(x = a_df['minute'], y = a_df['a_cum'], where = 'post', label= ateam, linewidth=2)
ax.step(x = h_df['minute'], y = h_df['h_cum'], where = 'post', color ='red', label= hteam,linewidth=2)
plt.scatter(x= a_goal['minute'], y = a_goal['a_cum'] , marker = 'o')
plt.scatter(x= h_goal['minute'], y = h_goal['h_cum'] , marker = 'o',color = 'red')
plt.xticks([0,15,30,45,60,75,90])
plt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])
plt.grid()
ax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')
plt.ylabel("Expected Goals (xG)")
plt.xlabel("Minutes")
ax.legend()
plt.show()
I don't have any clue how to do it.
A:
You can draw your own shapes by creating matplotlib Path objects.
You need 2 lists to create one:
1) the shape's vertices (coordinates)
2) the codes: each code describes the path from one vertex to the next (MOVETO, LINETO, CURVE3, CURVE4, CLOSEPOLY, ...)
for example
import matplotlib.pyplot as plt
from matplotlib.path import Path
vertices=[[ 1.86622681e+00, -9.69864442e+01], [-5.36324682e+01, -9.69864442e+01],
[-9.86337733e+01, -5.19851396e+01], [-9.86337733e+01, 3.51356038e+00],
[-9.86337733e+01, 5.90122504e+01], [-5.36324682e+01, 1.04013560e+02],
[ 1.86622681e+00, 1.04013560e+02], [ 5.73649168e+01, 1.04013560e+02],
[ 1.02366227e+02, 5.90122504e+01], [ 1.02366227e+02, 3.51356038e+00],
[ 1.02366227e+02, -5.19851396e+01], [ 5.73649168e+01, -9.69864442e+01],
[ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.69864442e+01],
[ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.59864442e+01],
[ 1.49396568e+01, -9.59864442e+01], [ 2.74005268e+01, -9.34457032e+01],
[ 3.88349768e+01, -8.88614442e+01], [ 3.93477668e+01, -8.39473616e+01],
[ 3.91766768e+01, -7.84211406e+01], [ 3.83349768e+01, -7.24551946e+01],
[ 2.54705168e+01, -7.17582316e+01], [ 1.38598668e+01, -6.91771276e+01],
[ 3.49122681e+00, -6.47364446e+01], [-5.88483119e+00, -7.07454276e+01],
[-1.85084882e+01, -7.43878696e+01], [-3.31337732e+01, -7.44239446e+01],
[-3.31639232e+01, -8.07006846e+01], [-3.34889082e+01, -8.56747886e+01],
[-3.41025232e+01, -8.92676942e+01], [-2.29485092e+01, -9.35925582e+01],
[-1.08166852e+01, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],
[ 1.86622681e+00, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],
[ 3.98974768e+01, -8.84239444e+01], [ 6.30273268e+01, -7.88377716e+01],
[ 8.17782368e+01, -6.07995616e+01], [ 9.22412268e+01, -3.81426946e+01],
[ 8.94287268e+01, -3.42676946e+01], [ 8.27048568e+01, -3.89413496e+01],
[ 7.41977468e+01, -4.19580876e+01], [ 6.55537268e+01, -4.39551946e+01],
[ 6.55507268e+01, -4.39600946e+01], [ 6.55258268e+01, -4.39502946e+01],
[ 6.55225268e+01, -4.39551946e+01], [ 5.64622368e+01, -5.74584576e+01],
[ 4.77347768e+01, -6.68825886e+01], [ 3.93037768e+01, -7.22051946e+01],
[ 4.01409768e+01, -7.80795846e+01], [ 4.03596968e+01, -8.35092576e+01],
[ 3.98975268e+01, -8.84239444e+01], [ 3.98974768e+01, -8.84239444e+01],
[ 3.98974768e+01, -8.84239444e+01], [-3.33525232e+01, -7.34239446e+01],
[-3.33343532e+01, -7.34304446e+01], [-3.33081932e+01, -7.34174446e+01],
[-3.32900232e+01, -7.34239446e+01], [-1.87512102e+01, -7.34136546e+01],
[-6.26111319e+00, -6.98403626e+01], [ 2.95997681e+00, -6.39239446e+01],
[ 4.88356681e+00, -5.29429786e+01], [ 6.50358681e+00, -4.13393356e+01],
[ 7.80372681e+00, -2.91114446e+01], [-8.09469019e+00, -1.58596306e+01],
[-1.93481942e+01, -5.40333762e+00], [-2.47587732e+01, 1.32605538e+00],
[-3.69631432e+01, -2.50275662e+00], [-4.85465082e+01, -5.39578762e+00],
[-5.95087732e+01, -7.36144462e+00], [-6.28171902e+01, -1.66250136e+01],
[-6.52187002e+01, -2.98372096e+01], [-6.58837732e+01, -4.57989446e+01],
[-5.53582062e+01, -6.01863506e+01], [-4.45266302e+01, -6.94131916e+01],
[-3.33525232e+01, -7.34239446e+01], [-3.33525232e+01, -7.34239446e+01],
[-3.33525232e+01, -7.34239446e+01], [-7.57587732e+01, -4.67676946e+01],
[-7.29041812e+01, -4.67440446e+01], [-6.99334012e+01, -4.63526666e+01],
[-6.68837732e+01, -4.56426946e+01], [-6.62087282e+01, -2.96768106e+01],
[-6.37905682e+01, -1.64255576e+01], [-6.04462732e+01, -7.04894462e+00],
[-6.81326882e+01, 3.32535038e+00], [-7.26804032e+01, 1.40097104e+01],
[-7.40712732e+01, 2.50135604e+01], [-7.99916232e+01, 2.63222104e+01],
[-8.66133452e+01, 2.67559804e+01], [-9.31650233e+01, 2.54510604e+01],
[-9.31681733e+01, 2.54460604e+01], [-9.31931223e+01, 2.54560604e+01],
[-9.31962733e+01, 2.54510604e+01], [-9.44043873e+01, 2.37123804e+01],
[-9.54279373e+01, 2.17334704e+01], [-9.63212733e+01, 1.95448104e+01],
[-9.71662733e+01, 1.43262704e+01], [-9.76337733e+01, 8.97093038e+00],
[-9.76337733e+01, 3.51356038e+00], [-9.76337733e+01, -1.43647536e+01],
[-9.29174773e+01, -3.11438126e+01], [-8.46650232e+01, -4.56426946e+01],
[-8.18063532e+01, -4.64180796e+01], [-7.88476312e+01, -4.67932816e+01],
[-7.57587732e+01, -4.67676946e+01], [-7.57587732e+01, -4.67676946e+01],
[-7.57587732e+01, -4.67676946e+01], [ 6.55224768e+01, -4.28926946e+01],
[ 7.40107668e+01, -4.09146326e+01], [ 8.23640768e+01, -3.79999686e+01],
[ 8.88662268e+01, -3.34864446e+01], [ 9.61553068e+01, -1.55950616e+01],
[ 9.94808868e+01, -1.66158462e+00], [ 9.88662268e+01, 8.32606038e+00],
[ 9.42289868e+01, 2.15752904e+01], [ 8.77410868e+01, 3.15965604e+01],
[ 8.11474768e+01, 3.82010604e+01], [ 7.17659368e+01, 3.38334104e+01],
[ 6.38899668e+01, 3.03415204e+01], [ 5.74912268e+01, 2.77635604e+01],
[ 5.68036568e+01, 1.50717604e+01], [ 5.35581368e+01, -9.16606169e-02],
[ 4.82412268e+01, -1.60489446e+01], [ 5.52234668e+01, -2.62259056e+01],
[ 6.09897268e+01, -3.51652306e+01], [ 6.55224768e+01, -4.28926946e+01],
[ 6.55224768e+01, -4.28926946e+01], [ 6.55224768e+01, -4.28926946e+01],
[ 8.42872681e+00, -2.83614446e+01], [ 2.13772368e+01, -2.57261866e+01],
[ 3.43239568e+01, -2.15154036e+01], [ 4.72724768e+01, -1.57364446e+01],
[ 5.25849968e+01, 2.07647383e-01], [ 5.58247068e+01, 1.53619304e+01],
[ 5.64912268e+01, 2.79510604e+01], [ 5.64917568e+01, 2.79612604e+01],
[ 5.64906868e+01, 2.79721604e+01], [ 5.64912268e+01, 2.79822604e+01],
[ 4.74302668e+01, 3.88992704e+01], [ 3.74260968e+01, 4.79380604e+01],
[ 2.64912268e+01, 5.51072604e+01], [ 1.05529568e+01, 5.24508804e+01],
[-4.02431919e+00, 4.78459804e+01], [-1.52900232e+01, 4.18885104e+01],
[-1.91554652e+01, 2.63828404e+01], [-2.20678242e+01, 1.30703504e+01],
[-2.40400232e+01, 1.98226038e+00], [-1.87588732e+01, -4.60782062e+00],
[-7.49875919e+00, -1.50853886e+01], [ 8.42872681e+00, -2.83614946e+01],
[ 8.42872681e+00, -2.83614446e+01], [ 8.42872681e+00, -2.83614446e+01],
[ 9.97724768e+01, 8.82606038e+00], [ 1.01209977e+02, 9.29481038e+00],
[ 9.97891268e+01, 3.41125404e+01], [ 8.92576668e+01, 5.64775904e+01],
[ 7.29287268e+01, 7.31385604e+01], [ 7.01162268e+01, 7.01073104e+01],
[ 7.65398468e+01, 5.90945204e+01], [ 8.04306168e+01, 4.87012104e+01],
[ 8.18037268e+01, 3.89510604e+01], [ 8.85060268e+01, 3.22487504e+01],
[ 9.50869868e+01, 2.21436404e+01], [ 9.97724768e+01, 8.82606038e+00],
[ 9.97724768e+01, 8.82606038e+00], [ 9.97724768e+01, 8.82606038e+00],
[-7.39150232e+01, 2.60448104e+01], [-6.92374072e+01, 3.77382804e+01],
[-6.07391432e+01, 4.81501604e+01], [-4.84150232e+01, 5.72948104e+01],
[-4.77543102e+01, 6.78197404e+01], [-4.56607662e+01, 7.76814004e+01],
[-4.11025232e+01, 8.57010604e+01], [-4.52341512e+01, 8.65620704e+01],
[-4.97579362e+01, 8.64646604e+01], [-5.46650232e+01, 8.53885604e+01],
[-7.24317802e+01, 7.30970204e+01], [-8.60276902e+01, 5.51787904e+01],
[-9.28212733e+01, 3.42010604e+01], [-9.28243733e+01, 3.41920604e+01],
[-9.28181733e+01, 3.41792604e+01], [-9.28212733e+01, 3.41698604e+01],
[-9.30130013e+01, 3.14875704e+01], [-9.31144113e+01, 2.89274504e+01],
[-9.31337733e+01, 2.64511104e+01], [-8.65119202e+01, 2.77331304e+01],
[-7.98647022e+01, 2.73522904e+01], [-7.39150232e+01, 2.60448604e+01],
[-7.39150232e+01, 2.60448104e+01], [-7.39150232e+01, 2.60448104e+01],
[-1.56650232e+01, 4.27948104e+01], [-4.35766519e+00, 4.87636404e+01],
[ 1.01466668e+01, 5.33700304e+01], [ 2.60224768e+01, 5.60448104e+01],
[ 2.85590568e+01, 6.43435004e+01], [ 3.07827468e+01, 7.29492504e+01],
[ 3.27099768e+01, 8.18573104e+01], [ 2.55039768e+01, 9.03537704e+01],
[ 1.39714968e+01, 9.64983204e+01], [-1.13376819e+00, 9.85135604e+01],
[-1.57753392e+01, 9.71825004e+01], [-2.87516412e+01, 9.28553404e+01],
[-4.00712732e+01, 8.55448104e+01], [-4.46513912e+01, 7.76614604e+01],
[-4.67507882e+01, 6.78133804e+01], [-4.74150232e+01, 5.72323104e+01],
[-3.59060892e+01, 5.27285604e+01], [-2.53218622e+01, 4.79159104e+01],
[-1.56650232e+01, 4.27948104e+01], [-1.56650232e+01, 4.27948104e+01],
[ 6.94599768e+01, 7.08573104e+01], [ 7.22412268e+01, 7.38573104e+01],
[ 5.42332468e+01, 9.18657304e+01], [ 2.93485768e+01, 1.03013560e+02],
[ 1.86622681e+00, 1.03013560e+02], [ 1.03891181e+00, 1.03013560e+02],
[ 2.19951808e-01, 1.03002360e+02], [-6.02518192e-01, 1.02982360e+02],
[-1.00876819e+00, 9.94823604e+01], [ 1.43154268e+01, 9.74387404e+01],
[ 2.60994568e+01, 9.12180804e+01], [ 3.34912268e+01, 8.24823604e+01],
[ 4.89375568e+01, 8.17496704e+01], [ 6.09313968e+01, 7.78789204e+01],
[ 6.94599768e+01, 7.08573604e+01], [ 6.94599768e+01, 7.08573104e+01],
[ 6.94599768e+01, 7.08573104e+01]]
codes=[1,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,2,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,2,4,4,4,2,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,79,
1,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2, 79]
print(Path.MOVETO,Path.LINETO,Path.CURVE3,Path.CURVE4,Path.CLOSEPOLY)
ball=Path(vertices,codes)
fig, ax = plt.subplots(figsize=(12,6))
plt.plot(15,1,color='b',marker=ball,markersize=30)
plt.xticks([0,15,30,45,60,75,90])
plt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])
plt.grid()
ax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')
plt.ylabel("Expected Goals (xG)")
plt.xlabel("Minutes")
ax.legend()
plt.show()
output
A:
Matplotlib has no built-in football marker, so another approach is to use a football image as the marker at the given coordinates.
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
def getImage(path):
return OffsetImage(plt.imread(path), zoom=.02)
x_coords = [8.2, 4.5, 3.3, 6.9]
y_coords = [5.4, 3.5, 4.7, 7.1]
fig, ax = plt.subplots()
for x0, y0 in zip(x_coords, y_coords):
ab = AnnotationBbox(getImage('football_icon.png'), (x0, y0), frameon=False)
ax.add_artist(ab)
plt.xticks(range(10))
plt.yticks(range(10))
plt.show()
Output
| How to change matplotlib marker into a football icon? | I have visualization like this:
I want to change the marker icon into a football icon with the same color as the line
My code looks like this :
fig, ax = plt.subplots(figsize=(12,6))
ax.step(x = a_df['minute'], y = a_df['a_cum'], where = 'post', label= ateam, linewidth=2)
ax.step(x = h_df['minute'], y = h_df['h_cum'], where = 'post', color ='red', label= hteam,linewidth=2)
plt.scatter(x= a_goal['minute'], y = a_goal['a_cum'] , marker = 'o')
plt.scatter(x= h_goal['minute'], y = h_goal['h_cum'] , marker = 'o',color = 'red')
plt.xticks([0,15,30,45,60,75,90])
plt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])
plt.grid()
ax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')
plt.ylabel("Expected Goals (xG)")
plt.xlabel("Minutes")
ax.legend()
plt.show()
I don't have any clue to do it.
| [
"you can draw your own shapes by creating matplotlib Path objects.\nYou need 2 lists to create it.\n1)shape's vertices(coordinates)\n2)codes:describes the path from a vertice to the next (MOVETO,LINETO,CURVE3,CURVE4,CLOSEPOLY,...)\nfor example\nimport matplotlib.pyplot as plt\nfrom matplotlib.path import Path\n\nvertices=[[ 1.86622681e+00, -9.69864442e+01], [-5.36324682e+01, -9.69864442e+01],\n [-9.86337733e+01, -5.19851396e+01], [-9.86337733e+01, 3.51356038e+00],\n [-9.86337733e+01, 5.90122504e+01], [-5.36324682e+01, 1.04013560e+02],\n [ 1.86622681e+00, 1.04013560e+02], [ 5.73649168e+01, 1.04013560e+02],\n [ 1.02366227e+02, 5.90122504e+01], [ 1.02366227e+02, 3.51356038e+00],\n [ 1.02366227e+02, -5.19851396e+01], [ 5.73649168e+01, -9.69864442e+01],\n [ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.69864442e+01],\n [ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.59864442e+01], \n [ 1.49396568e+01, -9.59864442e+01], [ 2.74005268e+01, -9.34457032e+01],\n [ 3.88349768e+01, -8.88614442e+01], [ 3.93477668e+01, -8.39473616e+01],\n [ 3.91766768e+01, -7.84211406e+01], [ 3.83349768e+01, -7.24551946e+01],\n [ 2.54705168e+01, -7.17582316e+01], [ 1.38598668e+01, -6.91771276e+01],\n [ 3.49122681e+00, -6.47364446e+01], [-5.88483119e+00, -7.07454276e+01],\n [-1.85084882e+01, -7.43878696e+01], [-3.31337732e+01, -7.44239446e+01],\n [-3.31639232e+01, -8.07006846e+01], [-3.34889082e+01, -8.56747886e+01],\n [-3.41025232e+01, -8.92676942e+01], [-2.29485092e+01, -9.35925582e+01],\n [-1.08166852e+01, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],\n [ 1.86622681e+00, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],\n [ 3.98974768e+01, -8.84239444e+01], [ 6.30273268e+01, -7.88377716e+01],\n [ 8.17782368e+01, -6.07995616e+01], [ 9.22412268e+01, -3.81426946e+01],\n [ 8.94287268e+01, -3.42676946e+01], [ 8.27048568e+01, -3.89413496e+01],\n [ 7.41977468e+01, -4.19580876e+01], [ 6.55537268e+01, -4.39551946e+01],\n [ 6.55507268e+01, -4.39600946e+01], [ 6.55258268e+01, -4.39502946e+01],\n [ 6.55225268e+01, -4.39551946e+01], [ 5.64622368e+01, -5.74584576e+01],\n [ 4.77347768e+01, -6.68825886e+01], [ 3.93037768e+01, -7.22051946e+01],\n [ 4.01409768e+01, -7.80795846e+01], [ 4.03596968e+01, -8.35092576e+01],\n [ 3.98975268e+01, -8.84239444e+01], [ 3.98974768e+01, -8.84239444e+01],\n [ 3.98974768e+01, -8.84239444e+01], [-3.33525232e+01, -7.34239446e+01],\n [-3.33343532e+01, -7.34304446e+01], [-3.33081932e+01, -7.34174446e+01],\n [-3.32900232e+01, -7.34239446e+01], [-1.87512102e+01, -7.34136546e+01],\n [-6.26111319e+00, -6.98403626e+01], [ 2.95997681e+00, -6.39239446e+01],\n [ 4.88356681e+00, -5.29429786e+01], [ 6.50358681e+00, -4.13393356e+01],\n [ 7.80372681e+00, -2.91114446e+01], [-8.09469019e+00, -1.58596306e+01],\n [-1.93481942e+01, -5.40333762e+00], [-2.47587732e+01, 1.32605538e+00],\n [-3.69631432e+01, -2.50275662e+00], [-4.85465082e+01, -5.39578762e+00],\n [-5.95087732e+01, -7.36144462e+00], [-6.28171902e+01, -1.66250136e+01],\n [-6.52187002e+01, -2.98372096e+01], [-6.58837732e+01, -4.57989446e+01],\n [-5.53582062e+01, -6.01863506e+01], [-4.45266302e+01, -6.94131916e+01],\n [-3.33525232e+01, -7.34239446e+01], [-3.33525232e+01, -7.34239446e+01],\n [-3.33525232e+01, -7.34239446e+01], [-7.57587732e+01, -4.67676946e+01],\n [-7.29041812e+01, -4.67440446e+01], [-6.99334012e+01, -4.63526666e+01],\n [-6.68837732e+01, -4.56426946e+01], [-6.62087282e+01, -2.96768106e+01],\n [-6.37905682e+01, -1.64255576e+01], [-6.04462732e+01, -7.04894462e+00],\n [-6.81326882e+01, 3.32535038e+00], [-7.26804032e+01, 
1.40097104e+01],\n [-7.40712732e+01, 2.50135604e+01], [-7.99916232e+01, 2.63222104e+01],\n [-8.66133452e+01, 2.67559804e+01], [-9.31650233e+01, 2.54510604e+01],\n [-9.31681733e+01, 2.54460604e+01], [-9.31931223e+01, 2.54560604e+01],\n [-9.31962733e+01, 2.54510604e+01], [-9.44043873e+01, 2.37123804e+01],\n [-9.54279373e+01, 2.17334704e+01], [-9.63212733e+01, 1.95448104e+01],\n [-9.71662733e+01, 1.43262704e+01], [-9.76337733e+01, 8.97093038e+00],\n [-9.76337733e+01, 3.51356038e+00], [-9.76337733e+01, -1.43647536e+01],\n [-9.29174773e+01, -3.11438126e+01], [-8.46650232e+01, -4.56426946e+01],\n [-8.18063532e+01, -4.64180796e+01], [-7.88476312e+01, -4.67932816e+01],\n [-7.57587732e+01, -4.67676946e+01], [-7.57587732e+01, -4.67676946e+01],\n [-7.57587732e+01, -4.67676946e+01], [ 6.55224768e+01, -4.28926946e+01],\n [ 7.40107668e+01, -4.09146326e+01], [ 8.23640768e+01, -3.79999686e+01],\n [ 8.88662268e+01, -3.34864446e+01], [ 9.61553068e+01, -1.55950616e+01],\n [ 9.94808868e+01, -1.66158462e+00], [ 9.88662268e+01, 8.32606038e+00],\n [ 9.42289868e+01, 2.15752904e+01], [ 8.77410868e+01, 3.15965604e+01],\n [ 8.11474768e+01, 3.82010604e+01], [ 7.17659368e+01, 3.38334104e+01],\n [ 6.38899668e+01, 3.03415204e+01], [ 5.74912268e+01, 2.77635604e+01],\n [ 5.68036568e+01, 1.50717604e+01], [ 5.35581368e+01, -9.16606169e-02],\n [ 4.82412268e+01, -1.60489446e+01], [ 5.52234668e+01, -2.62259056e+01],\n [ 6.09897268e+01, -3.51652306e+01], [ 6.55224768e+01, -4.28926946e+01],\n [ 6.55224768e+01, -4.28926946e+01], [ 6.55224768e+01, -4.28926946e+01],\n [ 8.42872681e+00, -2.83614446e+01], [ 2.13772368e+01, -2.57261866e+01],\n [ 3.43239568e+01, -2.15154036e+01], [ 4.72724768e+01, -1.57364446e+01],\n [ 5.25849968e+01, 2.07647383e-01], [ 5.58247068e+01, 1.53619304e+01],\n [ 5.64912268e+01, 2.79510604e+01], [ 5.64917568e+01, 2.79612604e+01],\n [ 5.64906868e+01, 2.79721604e+01], [ 5.64912268e+01, 2.79822604e+01],\n [ 4.74302668e+01, 3.88992704e+01], [ 3.74260968e+01, 4.79380604e+01],\n [ 2.64912268e+01, 5.51072604e+01], [ 1.05529568e+01, 5.24508804e+01],\n [-4.02431919e+00, 4.78459804e+01], [-1.52900232e+01, 4.18885104e+01],\n [-1.91554652e+01, 2.63828404e+01], [-2.20678242e+01, 1.30703504e+01],\n [-2.40400232e+01, 1.98226038e+00], [-1.87588732e+01, -4.60782062e+00],\n [-7.49875919e+00, -1.50853886e+01], [ 8.42872681e+00, -2.83614946e+01],\n [ 8.42872681e+00, -2.83614446e+01], [ 8.42872681e+00, -2.83614446e+01],\n [ 9.97724768e+01, 8.82606038e+00], [ 1.01209977e+02, 9.29481038e+00],\n [ 9.97891268e+01, 3.41125404e+01], [ 8.92576668e+01, 5.64775904e+01],\n [ 7.29287268e+01, 7.31385604e+01], [ 7.01162268e+01, 7.01073104e+01],\n [ 7.65398468e+01, 5.90945204e+01], [ 8.04306168e+01, 4.87012104e+01],\n [ 8.18037268e+01, 3.89510604e+01], [ 8.85060268e+01, 3.22487504e+01],\n [ 9.50869868e+01, 2.21436404e+01], [ 9.97724768e+01, 8.82606038e+00],\n [ 9.97724768e+01, 8.82606038e+00], [ 9.97724768e+01, 8.82606038e+00],\n [-7.39150232e+01, 2.60448104e+01], [-6.92374072e+01, 3.77382804e+01],\n [-6.07391432e+01, 4.81501604e+01], [-4.84150232e+01, 5.72948104e+01],\n [-4.77543102e+01, 6.78197404e+01], [-4.56607662e+01, 7.76814004e+01],\n [-4.11025232e+01, 8.57010604e+01], [-4.52341512e+01, 8.65620704e+01],\n [-4.97579362e+01, 8.64646604e+01], [-5.46650232e+01, 8.53885604e+01],\n [-7.24317802e+01, 7.30970204e+01], [-8.60276902e+01, 5.51787904e+01],\n [-9.28212733e+01, 3.42010604e+01], [-9.28243733e+01, 3.41920604e+01],\n [-9.28181733e+01, 3.41792604e+01], [-9.28212733e+01, 3.41698604e+01],\n [-9.30130013e+01, 3.14875704e+01], 
[-9.31144113e+01, 2.89274504e+01],\n [-9.31337733e+01, 2.64511104e+01], [-8.65119202e+01, 2.77331304e+01],\n [-7.98647022e+01, 2.73522904e+01], [-7.39150232e+01, 2.60448604e+01],\n [-7.39150232e+01, 2.60448104e+01], [-7.39150232e+01, 2.60448104e+01],\n [-1.56650232e+01, 4.27948104e+01], [-4.35766519e+00, 4.87636404e+01],\n [ 1.01466668e+01, 5.33700304e+01], [ 2.60224768e+01, 5.60448104e+01],\n [ 2.85590568e+01, 6.43435004e+01], [ 3.07827468e+01, 7.29492504e+01],\n [ 3.27099768e+01, 8.18573104e+01], [ 2.55039768e+01, 9.03537704e+01],\n [ 1.39714968e+01, 9.64983204e+01], [-1.13376819e+00, 9.85135604e+01],\n [-1.57753392e+01, 9.71825004e+01], [-2.87516412e+01, 9.28553404e+01],\n [-4.00712732e+01, 8.55448104e+01], [-4.46513912e+01, 7.76614604e+01],\n [-4.67507882e+01, 6.78133804e+01], [-4.74150232e+01, 5.72323104e+01],\n [-3.59060892e+01, 5.27285604e+01], [-2.53218622e+01, 4.79159104e+01],\n [-1.56650232e+01, 4.27948104e+01], [-1.56650232e+01, 4.27948104e+01],\n [ 6.94599768e+01, 7.08573104e+01], [ 7.22412268e+01, 7.38573104e+01],\n [ 5.42332468e+01, 9.18657304e+01], [ 2.93485768e+01, 1.03013560e+02],\n [ 1.86622681e+00, 1.03013560e+02], [ 1.03891181e+00, 1.03013560e+02],\n [ 2.19951808e-01, 1.03002360e+02], [-6.02518192e-01, 1.02982360e+02],\n [-1.00876819e+00, 9.94823604e+01], [ 1.43154268e+01, 9.74387404e+01],\n [ 2.60994568e+01, 9.12180804e+01], [ 3.34912268e+01, 8.24823604e+01],\n [ 4.89375568e+01, 8.17496704e+01], [ 6.09313968e+01, 7.78789204e+01],\n [ 6.94599768e+01, 7.08573604e+01], [ 6.94599768e+01, 7.08573104e+01],\n [ 6.94599768e+01, 7.08573104e+01]]\ncodes=[1,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,2,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,2,4,4,4,2,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,79,\n1,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2, 79]\nprint(Path.MOVETO,Path.LINETO,Path.CURVE3,Path.CURVE4,Path.CLOSEPOLY)\nball=Path(vertices,codes)\nfig, ax = plt.subplots(figsize=(12,6))\nplt.plot(15,1,color='b',marker=ball,markersize=30)\nplt.xticks([0,15,30,45,60,75,90])\nplt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])\nplt.grid()\nax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')\nplt.ylabel(\"Expected Goals (xG)\")\nplt.xlabel(\"Minutes\")\nax.legend()\nplt.show()\n\noutput\n\n",
"I don't think matplotlib can draw custom markers. Therefore, I suggest the way to draw is to use the football image as a marker with the given coordinates.\nimport matplotlib.pyplot as plt\nfrom matplotlib.offsetbox import OffsetImage, AnnotationBbox\n\ndef getImage(path):\n return OffsetImage(plt.imread(path), zoom=.02)\nx_coords = [8.2, 4.5, 3.3, 6.9]\ny_coords = [5.4, 3.5, 4.7, 7.1]\nfig, ax = plt.subplots()\nfor x0, y0 in zip(x_coords, y_coords):\n ab = AnnotationBbox(getImage('football_icon.png'), (x0, y0), frameon=False)\n ax.add_artist(ab)\n \nplt.xticks(range(10))\nplt.yticks(range(10))\nplt.show()\n\nOutput\n\n"
] | [
1,
0
] | [] | [] | [
"google_maps_markers",
"matplotlib",
"python",
"seaborn",
"visualization"
] | stackoverflow_0074664926_google_maps_markers_matplotlib_python_seaborn_visualization.txt |
Q:
Helix Convolution in Pytorch (Machine Learning)
I am currently investigating the development of a convolutional neural network that handles up to 5- or 6-dimensional arrays efficiently.
I was aware that many of the tools used for convolutional neural networks do not really deal with ND convolutions, so I decided to try and write an implementation of Helix Convolution, whereby the convolution can be treated as a large, 1D convolution (see Reference 1. http://sepwww.stanford.edu/public/docs/sep95/jon1/paper_html/node2.html , Reference 2 https://sites.ualberta.ca/~mostafan/Files/Papers/md_convolution_TLE2009.pdf for more details of the concept).
I did this under the (possibly incorrect) assumption that a large, single dimensional convolution was likely to be easier on a GPU than a multidimensional one, as well as that the method is trivially scalable to N dimensions.
Particularly, a quote from Reference 2. states:
We have not found important gains in computational efficiency between N-D standard convolution versus using the
algorithm described in the text. We have, however, found that
writing codes for seismic data regularization with the described
trick leads to algorithms that can easily handle regularization
problems with any number of spatial dimensions (Naghizadeh
and Sacchi, 2009).
I have written an implementation of the function below, which compares to signal.fftconvolve. It is slower on the CPU compared to this function, but I would nonetheless like to see how it performs on the GPU in PyTorch as a forward convolutional layer.
Can someone kindly help me port this code to PyTorch so I can verify how it behaves?
"""
HELIX CONVOLUTION FUNCTION
Shrink:
CROPS THE SIZE OF THE CONVOLVED SIGNAL DOWN TO THE ORIGINAL SIZE OF THE ORIGINAL.
Pad:
PADS THE DIFFERENCE BETWEEN THE ORIGINAL SHAPE AND THE DESIRED, CONVOLVED SHAPE FOR KERNEL AND SIGNAL.
GetLength:
EXTRACTS THE LENGTH OF THE UNWOUND STRIP OF THE SIGNAL AND KERNEL THAT IS TO BE CONVOLVED.
FFTConvolve:
USES THE NUMPY FFT PACKAGE TO PERFORM FAST FOURIER CONVOLUTION ON THE SIGNALS
Convolve:
USES HELIX CONVOLUTION ON AN INPUT ARRAY AND KERNEL.
"""
import numpy as np
from numpy import *
from scipy import signal
import operator
import time
class HelixCPU:
@classmethod
def Shrink(cls,array, bounding):
start = tuple(map(lambda a, da: (a-da)//2, array.shape, bounding))
end = tuple(map(operator.add, start, bounding))
slices = tuple(map(slice, start, end))
return array[slices]
@classmethod
def Pad(cls,array, target_shape):
diff = target_shape-array.shape
padder=[(0,val) for val in diff]
padded = np.pad(array, padder, 'constant')
return padded
@classmethod
def GetLength(cls,array_shape, padded_shape):
temp=1
steps=np.zeros_like(array_shape)
for i, entry in enumerate(padded_shape[::-1]):
if(i==len(padded_shape)-1):
steps[i]=1
else:
temp=entry*temp
steps[i]=temp
steps=np.roll(steps, 1)
steps=steps[::-1]
ones=np.ones_like(array_shape)
ones[-1]=0
out=np.multiply(steps,array_shape - ones)
length = np.sum(out)
return length
@classmethod
def FFTConvolve(cls, in1, in2, len1, len2):
s1 = len1
s2 = len2
shape = s1 + s2 - 1
        fsize = 2 ** np.ceil(np.log2(shape)).astype(int)  # np.log2, not cp.log2 (only numpy is imported)
fslice = slice(0, shape)
conv = np.fft.ifft(np.fft.fft(in1, int(fsize)) * np.fft.fft(in2, int(fsize)))[fslice].copy()
return conv
@classmethod
def Convolve(cls,array, kernel):
m = array.shape
n = kernel.shape
mn = np.add(m, n)
mn = mn-np.ones_like(mn)
k_pad=cls.Pad(kernel, mn)
a_pad=cls.Pad(array, mn)
length_k = cls.GetLength(kernel.shape, k_pad.shape);
length_a = cls.GetLength(array.shape, a_pad.shape);
k_flat = k_pad.flatten()[0:length_k]
a_flat = a_pad.flatten()[0:length_a]
        conv = cls.FFTConvolve(a_flat, k_flat, length_a, length_k)  # pass the lengths FFTConvolve expects
conv = np.resize(conv,mn)
conv = cls.Shrink(conv, m)
return conv
def main():
array=np.random.rand(25,25,41,51)
kernel=np.random.rand(10, 10, 10, 10)
start2 =time.process_time()
test2 = HelixCPU.Convolve(array, kernel)
end2=time.process_time()
start1= time.process_time()
test1 = signal.fftconvolve(array, kernel, "same")
end1= time.process_time()
print ("")
print ("========================")
print ("SOME LARGE CONVOLVED RANDOM ARRAYS. ")
print ("========================")
print("")
print ("Random Calorimeter Image of Size {0} Created".format(array.shape))
print ("Random Kernel of Size {0} Created".format(kernel.shape))
print("")
print ("Value\tOriginal\tHelix")
print ("Time Taken [s]\t{0}\t{1}\t{2}".format( (end1-start1), (end2-start2), (end2-start2)/(end1-start1) ))
print ("Maximum Value\t{:03.2f}\t{:13.2f}".format( np.max(test1), np.max(test2) ))
print ("Matrix Norm \t{:03.2f}\t{:13.2f}".format( np.linalg.norm(test1), np.linalg.norm(test2) ))
print ("All Close?\t{0}".format(np.allclose(test1, test2)))
A:
Sorry, I cannot add a comment due to low rep, so I ask my question as an answer and hopefully can answer your question.
By helix convolution, do you mean defining a convolution operation as a single matrix multiplication? If so, I did try this in the past but it is really memory inefficient for it to be practical.
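For reference, a minimal sketch of what that single-matrix-multiplication view looks like for an ordinary 2-D cross-correlation (the "im2col" trick), which also shows where the memory goes:

import numpy as np

def conv_as_matmul(image, kernel):
    # Unroll every kH x kW patch of the image into one row ("im2col"); the whole
    # convolution then becomes a single matrix-vector product.
    H, W = image.shape
    kH, kW = kernel.shape
    oH, oW = H - kH + 1, W - kW + 1
    patches = np.empty((oH * oW, kH * kW))   # the memory-hungry part
    for i in range(oH):
        for j in range(oW):
            patches[i * oW + j] = image[i:i + kH, j:j + kW].ravel()
    return (patches @ kernel.ravel()).reshape(oH, oW)

The patch matrix has prod(output_shape) * prod(kernel_shape) entries, which is why this route usually becomes impractical for large N-D arrays.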
A:
Here is an implementation of the HelixCPU class in PyTorch:
import torch
class HelixCPU:
@classmethod
def Shrink(cls, array, bounding):
start = (array.shape - bounding) // 2
end = start + bounding
return array[start:end]
@classmethod
def Pad(cls, array, target_shape):
diff = target_shape - array.shape
padder = [(0, val) for val in diff]
padded = torch.nn.functional.pad(array, padder, 'constant')
return padded
@classmethod
def GetLength(cls, array_shape, padded_shape):
temp = 1
steps = torch.zeros_like(array_shape)
for i, entry in enumerate(padded_shape[::-1]):
if(i == len(padded_shape) - 1):
steps[i] = 1
else:
temp = entry * temp
steps[i] = temp
steps = torch.roll(steps, 1)
steps = steps[::-1]
ones = torch.ones_like(array_shape)
ones[-1] = 0
out = steps * (array_shape - ones)
length = torch.sum(out)
return length
@classmethod
def FFTConvolve(cls, in1, in2, len1, len2):
s1 = len1
s2 = len2
shape = s1 + s2 - 1
fsize = 2 ** torch.ceil(torch.log2(shape)).type(torch.int64)
fslice = slice(0, shape)
conv = torch.ifft(torch.fft(in1, fsize) * torch.fft(in2, f
| Helix Convolution in Pytorch (Machine Learning) | I currently investigate the development of a convolutional neural network involving up to 5 or 6 dimensional arrays efficiently.
I was aware that many of the tools used for convolutional neural networks do not really deal with ND convolutions, so I decided to try and write an implementation of Helix Convolution, whereby the convolution can be treated as a large, 1D convolution (see Reference 1. http://sepwww.stanford.edu/public/docs/sep95/jon1/paper_html/node2.html , Reference 2 https://sites.ualberta.ca/~mostafan/Files/Papers/md_convolution_TLE2009.pdf for more details of the concept).
I did this under the (possibly incorrect) assumption that a large, single dimensional convolution was likely to be easier on a GPU than a multidimensional one, as well as that the method is trivially scalable to N dimensions.
Particularly, a quote from Reference 2. states:
We have not found important gains in computational efficiency between N-D standard convolution versus using the
algorithm described in the text. We have, however, found that
writing codes for seismic data regularization with the described
trick leads to algorithms that can easily handle regularization
problems with any number of spatial dimensions (Naghizadeh
and Sacchi, 2009).
I have written an implementation of the function below, which compares to signal.fftconvolve. It is slower on the CPU compared to this function, but I would nonetheless like to see how it performs on the GPU in PyTorch as a forward convolutional layer.
Can someone kindly help me port this code to PyTorch so I can verify how it behaves?
"""
HELIX CONVOLUTION FUNCTION
Shrink:
CROPS THE SIZE OF THE CONVOLVED SIGNAL DOWN TO THE ORIGINAL SIZE OF THE ORIGINAL.
Pad:
PADS THE DIFFERENCE BETWEEN THE ORIGINAL SHAPE AND THE DESIRED, CONVOLVED SHAPE FOR KERNEL AND SIGNAL.
GetLength:
EXTRACTS THE LENGTH OF THE UNWOUND STRIP OF THE SIGNAL AND KERNEL THAT IS TO BE CONVOLVED.
FFTConvolve:
USES THE NUMPY FFT PACKAGE TO PERFORM FAST FOURIER CONVOLUTION ON THE SIGNALS
Convolve:
USES HELIX CONVOLUTION ON AN INPUT ARRAY AND KERNEL.
"""
import numpy as np
from numpy import *
from scipy import signal
import operator
import time
class HelixCPU:
@classmethod
def Shrink(cls,array, bounding):
start = tuple(map(lambda a, da: (a-da)//2, array.shape, bounding))
end = tuple(map(operator.add, start, bounding))
slices = tuple(map(slice, start, end))
return array[slices]
@classmethod
def Pad(cls,array, target_shape):
diff = target_shape-array.shape
padder=[(0,val) for val in diff]
padded = np.pad(array, padder, 'constant')
return padded
@classmethod
def GetLength(cls,array_shape, padded_shape):
temp=1
steps=np.zeros_like(array_shape)
for i, entry in enumerate(padded_shape[::-1]):
if(i==len(padded_shape)-1):
steps[i]=1
else:
temp=entry*temp
steps[i]=temp
steps=np.roll(steps, 1)
steps=steps[::-1]
ones=np.ones_like(array_shape)
ones[-1]=0
out=np.multiply(steps,array_shape - ones)
length = np.sum(out)
return length
@classmethod
def FFTConvolve(cls, in1, in2, len1, len2):
s1 = len1
s2 = len2
shape = s1 + s2 - 1
        fsize = 2 ** np.ceil(np.log2(shape)).astype(int)  # np.log2, not cp.log2 (only numpy is imported)
fslice = slice(0, shape)
conv = np.fft.ifft(np.fft.fft(in1, int(fsize)) * np.fft.fft(in2, int(fsize)))[fslice].copy()
return conv
@classmethod
def Convolve(cls,array, kernel):
m = array.shape
n = kernel.shape
mn = np.add(m, n)
mn = mn-np.ones_like(mn)
k_pad=cls.Pad(kernel, mn)
a_pad=cls.Pad(array, mn)
length_k = cls.GetLength(kernel.shape, k_pad.shape);
length_a = cls.GetLength(array.shape, a_pad.shape);
k_flat = k_pad.flatten()[0:length_k]
a_flat = a_pad.flatten()[0:length_a]
        conv = cls.FFTConvolve(a_flat, k_flat, length_a, length_k)  # pass the lengths FFTConvolve expects
conv = np.resize(conv,mn)
conv = cls.Shrink(conv, m)
return conv
def main():
array=np.random.rand(25,25,41,51)
kernel=np.random.rand(10, 10, 10, 10)
start2 =time.process_time()
test2 = HelixCPU.Convolve(array, kernel)
end2=time.process_time()
start1= time.process_time()
test1 = signal.fftconvolve(array, kernel, "same")
end1= time.process_time()
print ("")
print ("========================")
print ("SOME LARGE CONVOLVED RANDOM ARRAYS. ")
print ("========================")
print("")
print ("Random Calorimeter Image of Size {0} Created".format(array.shape))
print ("Random Kernel of Size {0} Created".format(kernel.shape))
print("")
print ("Value\tOriginal\tHelix")
print ("Time Taken [s]\t{0}\t{1}\t{2}".format( (end1-start1), (end2-start2), (end2-start2)/(end1-start1) ))
print ("Maximum Value\t{:03.2f}\t{:13.2f}".format( np.max(test1), np.max(test2) ))
print ("Matrix Norm \t{:03.2f}\t{:13.2f}".format( np.linalg.norm(test1), np.linalg.norm(test2) ))
print ("All Close?\t{0}".format(np.allclose(test1, test2)))
| [
"Sorry, I cannot add a comment due to low rep, so I ask my question as an answer and hopefully can answer your question.\nBy helix convolution, do you mean defining a convolution operation as a single matrix multiplcation? If so, I did try this in the past but it is really memory inefficient for it to be practical.\n",
"Here is an implementation of the HelixCPU class in PyTorch:\nimport torch\n\nclass HelixCPU:\n @classmethod\n def Shrink(cls, array, bounding):\n start = (array.shape - bounding) // 2\n end = start + bounding\n return array[start:end]\n\n @classmethod\n def Pad(cls, array, target_shape):\n diff = target_shape - array.shape\n padder = [(0, val) for val in diff]\n padded = torch.nn.functional.pad(array, padder, 'constant')\n return padded\n\n @classmethod\n def GetLength(cls, array_shape, padded_shape):\n temp = 1\n steps = torch.zeros_like(array_shape)\n\n for i, entry in enumerate(padded_shape[::-1]):\n if(i == len(padded_shape) - 1):\n steps[i] = 1\n else:\n temp = entry * temp\n steps[i] = temp\n\n steps = torch.roll(steps, 1)\n steps = steps[::-1]\n ones = torch.ones_like(array_shape)\n ones[-1] = 0\n out = steps * (array_shape - ones)\n length = torch.sum(out)\n return length\n\n @classmethod\n def FFTConvolve(cls, in1, in2, len1, len2):\n s1 = len1\n s2 = len2\n shape = s1 + s2 - 1\n fsize = 2 ** torch.ceil(torch.log2(shape)).type(torch.int64)\n fslice = slice(0, shape)\n conv = torch.ifft(torch.fft(in1, fsize) * torch.fft(in2, f\n\n\n"
] | [
0,
0
] | [] | [] | [
"conv_neural_network",
"convolution",
"helix",
"python",
"pytorch"
] | stackoverflow_0060103887_conv_neural_network_convolution_helix_python_pytorch.txt |
Q:
Reduce Heroku Slug Size for Machine Learning (Python, PyTorch, Fastai)
I am attempting to deploy a simple machine learning app to Heroku, but I keep exceeding the slug size limit of 500 MB; in the end I come to about 1 GB. Most of this appears to come from PyTorch, at about 700 MB.
Collecting torch>=1.0.0
Downloading torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (748.8 MB)
My requirements.txt file looks like
tensorboardX==1.6
opencv-python>=3.3.0.10
pillow>=6.2.1
flask
scikit-image
gunicorn
pandas
And the error message I get states I am over the slug size limit.
How can I only install the CPU version of PyTorch to get the slug size down?
A:
Try adding the following lines to requirements.txt
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.8.1+cpu
torchvision==0.9.1+cpu
fastai
voila
ipywidgets
A:
(Aug, 2, 2022) the only solution I found was leaving the requirements.txt like this:
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
--find-links https://download.pytorch.org/whl/torch_stable.html
torchvision==0.12.0+cpu
A:
To install the CPU version of PyTorch, you can pin the +cpu build in your requirements.txt together with a line that points pip at PyTorch's own wheel index (the +cpu wheels are not hosted on PyPI), like this:
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
This will install the CPU version of PyTorch, which should be significantly smaller in size than the GPU version. You can also specify the specific version of PyTorch that you want to install, in this case 1.6.0, in the requirements.txt file.
Once you have updated your requirements.txt file, you can run pip install -r requirements.txt to install the required packages. This should install the CPU version of PyTorch and reduce the overall size of your app.
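A quick way to confirm that the CPU-only build is the one that ended up in the slug (for example from a one-off dyno or locally):

import torch

print(torch.__version__)           # should end in "+cpu"
print(torch.cuda.is_available())   # False for the CPU-only wheel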
| Reduce Heroku Slug Size for Machine Learning (Python, PyTorch, Fastai) | I am attempting to deploy a simple maching learning app to heroku but I keep exceeding the slug size requirement of 500MB, it looks like in the end I come up to about 1GB. Most of this appears to come from PyTorch for about 700MB.
Collecting torch>=1.0.0
Downloading torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (748.8 MB)
My requirements.txt file looks like
tensorboardX==1.6
opencv-python>=3.3.0.10
pillow>=6.2.1
flask
scikit-image
gunicorn
pandas
And the error message I get states I am over the slug size limit.
How can I only install the CPU version of PyTorch to get the slug size down?
| [
"Try adding the following lines to requirements.txt\n-f https://download.pytorch.org/whl/torch_stable.html\ntorch==1.8.1+cpu\ntorchvision==0.9.1+cpu\nfastai\nvoila\nipywidgets\n\n",
"(Aug, 2, 2022) the only solution I found was leaving the requirements.txt like this:\n--find-links https://download.pytorch.org/whl/torch_stable.html\ntorch==1.11.0+cpu\n--find-links https://download.pytorch.org/whl/torch_stable.html\ntorchvision==0.12.0+cpu\n",
"To install the CPU version of PyTorch, you can specify the cpuonly version in your requirements.txt file like this:\ntorch==1.6.0+cpu \nThis will install the CPU version of PyTorch, which should be significantly smaller in size than the GPU version. You can also specify the specific version of PyTorch that you want to install, in this case 1.6.0, in the requirements.txt file.\nOnce you have updated your requirements.txt file, you can run pip install -r requirements.txt to install the required packages. This should install the CPU version of PyTorch and reduce the overall size of your app.\n"
] | [
1,
0,
0
] | [] | [] | [
"heroku",
"pip",
"python"
] | stackoverflow_0063552330_heroku_pip_python.txt |
Q:
I'm not sure how to use RTK without a desktop app
I'm using a ZED-F9P.
Below is the Python script I've made for printing the Latitude and Longitude without correction data, but now I'd like to try and get more accurate with RTK.
I've become familiar with desktop applications for applying RTCM corrections, like PyGPSClient and u-center, but I'd like to be able to achieve an RTK fix within a Python script.
I say this because my goal is to achieve RTK on an Arduino or similar device, then send that to the cloud where I can compare it to an identical device in another location (i.e. get the distance between the two).
I thought perhaps I could use parts of the source code for PyGPSClient? I'm not sure where to start. Any advice would be appreciated. Thanks!
import serial
gps = serial.Serial('com5', baudrate=9600)
while True:
ser_bytes = gps.readline()
decoded_bytes = ser_bytes.decode("utf-8")
data = decoded_bytes.split(",")
if data[0] == '$GNRMC':
lat_nmea = (data[3],data[4])
lat_degrees = float(lat_nmea[0][0:2])
lat_minutes = float(lat_nmea[0][2:])
lat = lat_degrees + (lat_minutes/60)
lon_nmea = (data[5],data[6])
lon_degrees = float(lon_nmea[0][:3])
lon_minutes = float(lon_nmea[0][3:])
lon = lon_degrees + (lon_minutes/60)
if lat_nmea[1] == 'S':
lat = -lat
if lon_nmea[1] == 'W':
lon = -lon
print("%0.8f" %lat,',' "%0.8f" %lon)
A:
Check out the rtk_example.py script here:
https://github.com/semuconsulting/pygnssutils/blob/main/examples/rtk_example.py
(pygnssutils is the core package used by PyGPSClient)
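Independently of pygnssutils, the core of RTK in a script is simply feeding RTCM3 correction bytes into the receiver's serial port. A minimal sketch, assuming rtcm_stream is an already-connected socket (for example from an NTRIP caster) delivering raw RTCM3 data to the ZED-F9P on com5:

import serial

gps = serial.Serial('com5', baudrate=9600, timeout=1)

def forward_corrections(rtcm_stream, gps_port):
    # The ZED-F9P applies RTCM3 frames written to its port; once enough corrections
    # arrive it reports an RTK float/fixed solution in its normal NMEA output.
    while True:
        data = rtcm_stream.recv(1024)
        if not data:
            break
        gps_port.write(data)

You still need something that speaks the NTRIP protocol to obtain rtcm_stream, which is what the linked rtk_example.py demonstrates.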
| I'm not sure how to use RTK without a desktop app | I'm using a ZED-F9P.
Below is the Python script I've made for printing the Latitude and Longitude without correction data, but now I'd like to try and get more accurate with RTK.
I've got familiar with desktop applications for applying RTCM like PyGPSClient and u-center but I'd like to be able to achieve RTK fix within a python script.
I say this because my goal is to achieve RTK on an Arduino or similar device, then send that to the cloud where I can compare it to an identical device in another location (i.e. get the distance between the two).
I thought perhaps I could use parts of the source code for PyGPSClient? I'm not sure where to start. Any advice would be appreciated. Thanks!
import serial
gps = serial.Serial('com5', baudrate=9600)
while True:
ser_bytes = gps.readline()
decoded_bytes = ser_bytes.decode("utf-8")
data = decoded_bytes.split(",")
if data[0] == '$GNRMC':
lat_nmea = (data[3],data[4])
lat_degrees = float(lat_nmea[0][0:2])
lat_minutes = float(lat_nmea[0][2:])
lat = lat_degrees + (lat_minutes/60)
lon_nmea = (data[5],data[6])
lon_degrees = float(lon_nmea[0][:3])
lon_minutes = float(lon_nmea[0][3:])
lon = lon_degrees + (lon_minutes/60)
if lat_nmea[1] == 'S':
lat = -lat
if lon_nmea[1] == 'W':
lon = -lon
print("%0.8f" %lat,',' "%0.8f" %lon)
| [
"Check out the rtk_example.py script here:\nhttps://github.com/semuconsulting/pygnssutils/blob/main/examples/rtk_example.py\n(pygnssutils is the core package used by PyGPSClient)\n"
] | [
0
] | [] | [] | [
"gps",
"ntrip",
"python",
"rtk"
] | stackoverflow_0074470405_gps_ntrip_python_rtk.txt |
Q:
Steps for Machine Learning in Pytorch
When we define our model in PyTorch, we train it over a number of epochs. I want to understand what happens within each epoch iteration.
What is the difference between the two following snippets of code, in which only the order of the calls differs? The two versions are:
the version I found in tutorials
the code provided by my supervisor for the project
Tutorial Version
for i in range(epochs):
logits = model(x)
loss = loss_fcn(logits,lables)
loss.backward()
optimizer.step()
optimizer.zero_grad()
Supervisor Version
for i in range(epochs):
logits = model(x)
loss = loss_fcn(logits,lables)
optimizer.zero_grad()
loss.backward()
optimizer.step()
A:
The only difference is when the gradients are cleared (when you call optimizer.zero_grad()). The first version zeroes out the gradients after updating the weights (after optimizer.step(), at the end of the iteration); the second one zeroes them out before computing new gradients (before loss.backward(), at the start of the iteration). Both versions should run fine. The only difference would be the first iteration, where the second snippet is better as it makes sure any residual gradients are zero before calculating the gradients. Check this link that explains why you would zero the gradients
A:
In PyTorch, we typically want to explicitly set the gradients to zero for every mini-batch during the training phase before starting backpropagation (i.e., before computing the gradients used to update the weights and biases), because PyTorch accumulates the gradients on subsequent backward passes.
Regarding your question, both snippets do the same thing; what matters is that the gradients are zeroed at some point before the next loss.backward() call, which both versions ensure.
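A minimal sketch of that accumulation behaviour — calling backward() twice without zeroing in between doubles the stored gradient:

import torch

w = torch.tensor([1.0], requires_grad=True)
loss = (2 * w).sum()

loss.backward(retain_graph=True)
print(w.grad)        # tensor([2.])
loss.backward()      # no zero_grad() in between -> gradients accumulate
print(w.grad)        # tensor([4.])

w.grad.zero_()       # this is what optimizer.zero_grad() does for each parameter
print(w.grad)        # tensor([0.])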
A:
Here is a pseudo code for the iteration:
run model
compute loss
<-- zero grads here...
go backward (compute grads if no grads otherwise accumulate)
update weights
<-- ...or here
Basically you zero grads before or after going backward and updating the weights. Both code snippets are OK.
A:
The main difference between the two snippets is the order in which the optimizer's zero_grad() and step() methods are called.
In the tutorial version, zero_grad() is called after optimizer.step(), at the end of each iteration, while in the supervisor version, zero_grad() is called before loss.backward(), at the start of each iteration.
In both cases the gradients end up being cleared before the next backward pass, which prevents them from accumulating across iterations, so the two loops are effectively equivalent. The supervisor version simply makes the guarantee explicit: the gradients are cleared immediately before they are recomputed.
It is generally recommended to call zero_grad() before the backward pass (as in the supervisor version), since it makes the intent clear and guards against any stale gradients. However, the exact point at which it is called may depend on the details of the training loop — for example, gradient accumulation over several mini-batches, sketched below.
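For that gradient-accumulation case, a minimal sketch (assuming batches yields (x, labels) mini-batches for the same model and loss_fcn as above):

accum_steps = 4
optimizer.zero_grad()
for i, (x, labels) in enumerate(batches):
    loss = loss_fcn(model(x), labels) / accum_steps
    loss.backward()                      # gradients accumulate across mini-batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()            # cleared only after every accum_steps batches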
| Steps for Machine Learning in Pytorch | When we define our model in PyTorch. We run through different #epochs. I want to know that in the iteration of epochs.
What is the difference between the two following snippets of code in which the order is different? These two snippet versions are:
I found over tutorials
The code provided by my supervisor for the project.
Tutorial Version
for i in range(epochs):
logits = model(x)
loss = loss_fcn(logits,lables)
loss.backward()
optimizer.step()
optimizer.zero_grad()
Supervisor Version
for i in range(epochs):
logits = model(x)
loss = loss_fcn(logits,lables)
optimizer.zero_grad()
loss.backward()
optimizer.step()
| [
"The only difference is when the gradients are cleared. (when you call optimizer.zero_grad()) the first version zeros out the gradients after updating the weights (optimizer.step()), the second one zeroes out the gradient after updating the weights. both versions should run fine. The only difference would be the first iteration, where the second snippet is better as it makes sure the residue gradients are zero before calculating the gradients. Check this link that explains why you would zero the gradients\n",
"In PyTorch, we typically want to explicitly set the gradients to zero for every mini-batch during the training phase before starting backpropagation (i.e., updating the Weights and biases) because PyTorch accumulates the gradients on subsequent backward passes.\nRegarding your question, both snippets do the same, the important detail is calling optimizer.zero_grad() before loss.backward().\n",
"Here is a pseudo code for the iteration:\n\nrun model\ncompute loss\n\n<-- zero grads here...\n\ngo backward (compute grads if no grads otherwise accumulate)\nupdate weights\n\n<-- ...or here\nBasically you zero grads before or after going backward and updating the weights. Both code snippets are OK.\n",
"The main difference between the two snippets is the order in which the optimizer's zero_grad() and step() methods are called.\nIn the tutorial version, the optimizer's zero_grad() method is called before the loss.backward() method, while in the supervisor version, the optimizer's zero_grad() method is called after the loss.backward() method.\nThis difference in the order of the zero_grad() and step() calls can affect the performance of the model. In the tutorial version, the optimizer's gradients will be reset to zero before the backward pass, which can prevent the gradients from accumulating and potentially causing numerical instability. In the supervisor version, the optimizer's gradients will not be reset to zero until after the backward pass, which can allow the gradients to accumulate and potentially lead to numerical instability.\nIt is generally recommended to call the optimizer's zero_grad() method before the backward pass, as this can help prevent numerical instability and improve the model's performance. However, the exact order in which these methods are called may depend on the specific details of the model and the optimization algorithm being used.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"machine_learning",
"python",
"pytorch"
] | stackoverflow_0072262608_machine_learning_python_pytorch.txt |
Q:
Pytorch: How to format data before execution of machine learning
I'm learning how to use pytorch and I was able to get a grasp on the overall process of construction and execution of ML models. However, what I am not able to grasp is how to "format" or "reshape" the data before executing the model. I keep getting errors like:
RuntimeError: size mismatch, m1: [1 x 700], m2: [1 x 1] at c:\programdata\miniconda3\conda-bld\pytorch_1524543037166\work\aten\src\th\generic/THTensorMath.c:2033
Or,
Expected object of type Variable[torch.DoubleTensor] but found type Variable[torch.FloatTensor] for argument #1 ‘mat2’
So, I have a CSV file named "train.csv" with attributes called 'x' and 'y' and 700 samples in it. I want to perform a simple linear regression on the data, and I parse it using pandas. How do I format or reshape the data so that the model runs smoothly? And how does PyTorch iterate through the input data?
The most recent code I executed is:
import torch
import torch.nn as nn
from torch.autograd import Variable
import pandas as pd
class Linear_Reg(nn.Module):
def __init__(self, inp_sz, out_sz):
super(Linear_Reg, self).__init__()
self.linear = nn.Linear(inp_sz, out_sz)
def forward(self, x):
out = self.linear(x)
return out
train = pd.read_csv('C:\\Users\\hgstr\\Jupyter_Files\\Data_Sets\\linear_regression\\train.csv')
test = pd.read_csv('C:\\Users\\hgstr\\Jupyter_Files\\Data_Sets\\linear_regression\\test.csv')
x_train = torch.Tensor(train['x'])
y_train = torch.Tensor(train['y'])
x_test = torch.Tensor(test['x'])
y_test = torch.Tensor(test['y'])
x_train = torch.Tensor(x_train)
x_train = x_train.view(1,-1)
#================================
input_sz = 1;
output_sz = 1
epochs = 60
learning_rate = 0.001
#================================
model = Linear_Reg(input_sz, output_sz)
crit = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), learning_rate)
for e in range(epochs):
opt.zero_grad()
out = model(x_train)
loss = crit(out, y_train)
loss.backward()
opt.step()
print('epoch {}, loss {}'.format(e,loss.data[0]))
And it gave out the following:
RuntimeError: size mismatch, m1: [1 x 700], m2: [1 x 1] at c:\programdata\miniconda3\conda-bld\pytorch_1524543037166\work\aten\src\th\generic/THTensorMath.c:2033
Solutions?
A:
According to the error, I believe that your data is not correctly formatted. The tensor should be in the form [700, 1] (batch x features) and yours is [1, 700]. This makes the model 'think' that you are adding only one entry as training with 700 features instead of 700 entries with only 1 feature.
Reshaping the x_train variable should make the code work. Just remove the line x_train = x_train.view(1,-1).
Regarding the second error, it can be that after reading the .csv into a variable its type is Double (due to pd.read_csv) while in pytorch by default Tensors are created as floats. I think that casting your input data before feeding it to the model should be enough: model(x_train.float()) or specifying it in the Tensor creation part x_train = torch.FloatTensor(train['x']). Note that you should cast all the Tensors that are not Floats.
edit: This piece of code works for me
import torch
import torch.nn as nn
import pandas as pd
class Linear_Reg(nn.Module):
def __init__(self, inp_sz, out_sz):
super(Linear_Reg, self).__init__()
self.linear = nn.Linear(inp_sz, out_sz)
def forward(self, x):
out = self.linear(x)
return out
train = pd.read_csv('yourpath')
test = pd.read_csv('yourpath')
x_train = torch.Tensor(train['x']).to(torch.float).view(700, 1)
y_train = torch.Tensor(train['y']).to(torch.float).view(700, 1)
x_test = torch.Tensor(test['x']).to(torch.float).view(300, 1)
y_test = torch.Tensor(test['y']).to(torch.float).view(300, 1)
# ================================
input_sz = 1;
output_sz = 1
epochs = 60
learning_rate = 0.001
# ================================
model = Linear_Reg(input_sz, output_sz)
crit = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), learning_rate)
for e in range(epochs):
    opt.zero_grad()
    out = model(x_train)

    loss = crit(out, y_train)
    loss.backward()
    opt.step()

    print('epoch {}, loss {}'.format(e, loss.data[0]))
A:
To solve this issue, you need to reshape the tensor containing your training data so that its dimensions match what the model expects. nn.Linear(1, 1) expects an input of shape [batch_size x 1], i.e. [700 x 1] here, but your training data currently has the shape [1 x 700].
To reshape the tensor, you can use the .view() method. For example, to give the x_train tensor the correct dimensions, you can do the following:
x_train = x_train.view(-1, 1)
This reshapes the tensor to a size of [700 x 1] by fixing the second dimension to 1 and allowing the first dimension to be inferred from the size of the original tensor (y_train should be reshaped the same way).
Additionally, since this is a regression task, you may want to experiment with a different loss function such as the L1 loss (note that nn.MSELoss already is the squared-error / L2 loss). You can do that by replacing the line:
crit = nn.MSELoss()
with the following:
crit = nn.L1Loss() # use L1 (mean absolute error) loss
After making these changes, the model should be able to run without errors.
| Pytorch: How to format data before execution of machine learning | I'm learning how to use pytorch and I was able to get a grasp on the overall process of construction and execution of ML models. However, what I am not able to grasp is how to "format" or "reshape" the data before executing the model. I keep getting errors like:
RuntimeError: size mismatch, m1: [1 x 700], m2: [1 x 1] at c:\programdata\miniconda3\conda-bld\pytorch_1524543037166\work\aten\src\th\generic/THTensorMath.c:2033
Or,
Expected object of type Variable[torch.DoubleTensor] but found type Variable[torch.FloatTensor] for argument #1 ‘mat2’
So, I have a csv file named "train.csv" with attributes called 'x' and 'y' and there are 700 samples in it, I want to perform a simple linear regression on the data, and I parse data from it using pandas, how do I format or reshape the data such that it will execute smoothly? How does pytorch iterate through input data?
The recent code i executed is:
import torch
import torch.nn as nn
from torch.autograd import Variable
import pandas as pd
class Linear_Reg(nn.Module):
def __init__(self, inp_sz, out_sz):
super(Linear_Reg, self).__init__()
self.linear = nn.Linear(inp_sz, out_sz)
def forward(self, x):
out = self.linear(x)
return out
train = pd.read_csv('C:\\Users\\hgstr\\Jupyter_Files\\Data_Sets\\linear_regression\\train.csv')
test = pd.read_csv('C:\\Users\\hgstr\\Jupyter_Files\\Data_Sets\\linear_regression\\test.csv')
x_train = torch.Tensor(train['x'])
y_train = torch.Tensor(train['y'])
x_test = torch.Tensor(test['x'])
y_test = torch.Tensor(test['y'])
x_train = torch.Tensor(x_train)
x_train = x_train.view(1,-1)
#================================
input_sz = 1;
output_sz = 1
epochs = 60
learning_rate = 0.001
#================================
model = Linear_Reg(input_sz, output_sz)
crit = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), learning_rate)
for e in range(epochs):
opt.zero_grad()
out = model(x_train)
loss = crit(out, y_train)
loss.backward()
opt.step()
print('epoch {}, loss {}'.format(e,loss.data[0]))
And it gave out the following:
RuntimeError: size mismatch, m1: [1 x 700], m2: [1 x 1] at c:\programdata\miniconda3\conda-bld\pytorch_1524543037166\work\aten\src\th\generic/THTensorMath.c:2033
Solutions?
| [
"According to the error, I believe that your data is not correctly formatted. The tensor should be in the form [700, 2] (batch x data) and yours is [1, 700] (data x batch). This makes the model 'think' that you are adding only one entry as training with 700 features instead of 700 entries with only 1 feature. \nReshaping the x_train variable should make the code work. Just remove the line x_train = x_train.view(1,-1).\nRegarding the second error, it can be that after reading the .csv into a variable its type is Double (due to pd.read_csv) while in pytorch by default Tensors are created as floats. I think that casting your input data before feeding it to the model should be enough: model(x_train.float()) or specifying it in the Tensor creation part x_train = torch.FloatTensor(train['x']). Note that you should cast all the Tensors that are not Floats. \nedit: This piece of code works for me\nimport torch\nimport torch.nn as nn\nimport pandas as pd\n\nclass Linear_Reg(nn.Module):\n def __init__(self, inp_sz, out_sz):\n super(Linear_Reg, self).__init__()\n self.linear = nn.Linear(inp_sz, out_sz)\n\n def forward(self, x):\n out = self.linear(x)\n return out\n\n\ntrain = pd.read_csv('yourpath')\ntest = pd.read_csv('yourpath')\n\nx_train = torch.Tensor(train['x']).to(torch.float).view(700, 1)\ny_train = torch.Tensor(train['y']).to(torch.float).view(700, 1)\n\nx_test = torch.Tensor(test['x']).to(torch.float).view(300, 1)\ny_test = torch.Tensor(test['y']).to(torch.float).view(300, 1)\n\n# ================================\ninput_sz = 1;\noutput_sz = 1\nepochs = 60\nlearning_rate = 0.001\n# ================================\n\nmodel = Linear_Reg(input_sz, output_sz)\ncrit = nn.MSELoss()\nopt = torch.optim.SGD(model.parameters(), learning_rate)\n\nfor e in range(epochs):\n opt.zero_grad()\n out = model(x_train)\n\n loss = crit(out, y_train)\n loss.backward()\n opt.step()\n\n print('epoch {}, loss {}'.format(e, loss.data[0]))\n\n",
"To solve this issue, you need to reshape the tensor containing your training data so that it has the correct dimensions for your model. In this case, the model expects a tensor of size [1 x 1], but your training data has the size [1 x 700].\nTo reshape the tensor, you can use the .view() method. For example, to reshape the tensor containing the x_train data to have the correct dimensions, you can do the following:\nx_train = x_train.view(1, -1)\n\nThis reshapes the tensor to have a size of [1 x 1] by setting the first dimension to 1 and allowing the second dimension to be inferred from the size of the original tensor.\nAdditionally, it looks like you are trying to perform a regression task with only a single input and output dimension. In this case, you may want to consider using a different loss function, such as the L1 loss or the L2 loss, which are more commonly used for regression tasks. You can use these loss functions by replacing the line:\ncrit = nn.MSELoss()\n\nwith the following:\n\ncrit = nn.L1Loss() # use L1 loss\n\ncrit = nn.L2Loss() # use L2 loss\n\nAfter making these changes, the model should be able to run without errors.\n"
] | [
0,
0
] | [] | [] | [
"linear_regression",
"machine_learning",
"python",
"pytorch"
] | stackoverflow_0050432506_linear_regression_machine_learning_python_pytorch.txt |
Q:
Screen Recorded Through Python Script is Too fast
I could record the screen, but whenever I play the video it is very fast. How can I solve this issue?
import time
import pyautogui
import cv2
import numpy as np

resolution = (1920, 1080)
codec = cv2.VideoWriter_fourcc(*"XVID")
filename = "Recording.avi"
fps = 60.0

out = cv2.VideoWriter(filename, codec, fps, resolution)
cv2.namedWindow("Live", cv2.WINDOW_NORMAL)
cv2.resizeWindow("Live", 480, 270)

while True:
    img = pyautogui.screenshot()
    frame = np.array(img)
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    out.write(frame)
    cv2.imshow('Live', frame)
    if cv2.waitKey(1) == ord('q'):
        break
    time.sleep(1/30)

out.release()
cv2.destroyAllWindows()
A:
There are a few things you can try to make the recorded video play at a normal speed. One possible solution is to reduce the number of frames per second (fps) declared for the output file. In your code you are setting the fps value to 60.0, which is likely higher than the rate at which the loop can actually capture screenshots, so the video plays back too quickly. Try setting fps to 25 or 30. You can also increase the amount of time that the sleep() call pauses between frames, so the real capture rate matches the fps you declare.
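A minimal sketch of that idea (the 20.0 fps value is an assumption; the point is that the fps given to VideoWriter should match how fast frames are really captured):
import time
import cv2
import numpy as np
import pyautogui

fps = 20.0  # assumed capture rate; must match what the loop can really achieve
out = cv2.VideoWriter("Recording.avi", cv2.VideoWriter_fourcc(*"XVID"), fps, (1920, 1080))

while True:
    start = time.time()
    frame = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_BGR2RGB)
    out.write(frame)
    cv2.imshow("Live", frame)
    if cv2.waitKey(1) == ord("q"):
        break
    # sleep away whatever is left of this frame's 1/fps time slot
    time.sleep(max(0.0, 1 / fps - (time.time() - start)))

out.release()
cv2.destroyAllWindows()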
| Screen Recorded Through Python Script is Too fast | I could record the screen, but whenever I play the video it is very fast. How can I solve this issue?
import pyautogui
import cv2
import numpy as np
resolution = (1920, 1080)
codec = cv2.VideoWriter_fourcc(*"XVID")
filename = "Recording.avi"
fps = 60.0
out = cv2.VideoWriter(filename, codec, fps, resolution)
cv2.namedWindow("Live", cv2.WINDOW_NORMAL)
cv2.resizeWindow("Live", 480, 270)
while True:
img = pyautogui.screenshot()
frame = np.array(img)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
out.write(frame)
cv2.imshow('Live', frame)
if cv2.waitKey(1) == ord('q'):
break
time.sleep(1/30)
out.release()
cv2.destroyAllWindows()
| [
"There are a few things you can try to make the recorded video play at a normal speed. One possible solution is to reduce the number of frames per second (fps) that are being recorded. In your code, you are setting the fps value to 60.0, which is a very high value and may be causing the recorded video to play back too quickly. Try set fps to 25 or 30. Also you can try increasing the amount of time that the sleep() function is called, which will cause the loop to pause for a longer period of time between frames.\n"
] | [
0
] | [] | [] | [
"numpy",
"pyautogui",
"python",
"screen_recording"
] | stackoverflow_0074666388_numpy_pyautogui_python_screen_recording.txt |
Q:
Convert long series keys to hex, then Choose desired values from a list of long separated keys
I have code to generate series of keys as in below:
def Keygen (x,r,size):
    key=[]
    for i in range(size):
        x= r*x*(1-x)
        key.append(int((x*pow(10,16))%256))
    return key

if __name__=="__main__":
    key=Keygen(0.45,0.685,92)  # Initial Parameters
    print('nx key:', key, "\n")
The output keys are:
nx key: [0, 11, 53, 42, 111, 38, 55, 102, 252, 155, 54, 219, 149, 220, 235, 177, 140, 46, 209, 249, 46, 241, 218, 243, 6, 166, 247, 106, 33, 24, 220, 185, 129, 182, 214, 210, 180, 28, 84, 117, 228, 213, 205, 240, 125, 37, 181, 234, 246, 54, 22, 195, 38, 174, 212, 166, 9, 237, 25, 225, 81, 23, 244, 235, 171, 196, 111, 182, 227, 26, 22, 246, 35, 52, 225, 249, 90, 237, 162, 111, 76, 52, 35, 24, 16, 11, 7, 5, 3, 2, 1, 1]
I tried to convert all key values to hex by using the following code:
K=hex(key)
print('nx key:', key, "\n")
But when run I got the error "TypeError: 'list' object cannot be interpreted as an integer"
Then try to use "K= hex(ord(key))" but also got another error "TypeError: ord() expected string of length 1, but list found"
What I need is to convert all keys to hex, then select just 4 keys to be like this
K = (0x3412, 0x7856, 0xBC9A, 0xF0DE)
A:
In order to get hex values for your list of keys, you have to iterate over the list and turn each element seperately into a hex value:
K = tuple(hex(x) for x in key)
Then you can select 4 random keys (no repeat) from this list by:
import random
selectedKeys = random.sample(K, 4)
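If the goal really is four 16-bit values like 0x3412 (which looks like two consecutive byte values packed low-byte-first — an assumption based on the example output in the question), a sketch of that packing on the first eight key values:
key = [0, 11, 53, 42, 111, 38, 55, 102]  # first eight values from the question's output
# pack each pair of bytes into one 16-bit word, low byte first
words = [key[i] | (key[i + 1] << 8) for i in range(0, len(key), 2)]
K = tuple(hex(w) for w in words)
print(K)  # ('0xb00', '0x2a35', '0x266f', '0x6637')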
A:
Maybe a better name for key is keys, because it is a list of keys. That said,
[hex(key) for key in keys]
should do the trick.
This is a usage of a list comprehension.
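For example, on the first few values from the output above this gives:
keys = [0, 11, 53, 42]
print([hex(key) for key in keys])  # ['0x0', '0xb', '0x35', '0x2a']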
A:
I might be able to help you with your error.
Based on your output with your values wrapped in [], you have a list for key. What you then want to do is iterate through each element in that list to apply your hex.
hexed_keys = [hex(i) for i in key]
Good luck and happy coding! Please up vote my answer if useful so I can contribute more on Stack Overflow:)
| Convert long series keys to hex, then Choose desired values from a list of long separated keys | I have code to generate series of keys as in below:
def Keygen (x,r,size):
key=[]
for i in range(size):
x= r*x*(1-x)
key.append(int((x*pow(10,16))%256))
return key
if __name__=="__main__":
key=Keygen(0.45,0.685,92)#Intial Parameters
print('nx key:', key, "\n")
The output keys are:
nx key: [0, 11, 53, 42, 111, 38, 55, 102, 252, 155, 54, 219, 149, 220, 235, 177, 140, 46, 209, 249, 46, 241, 218, 243, 6, 166, 247, 106, 33, 24, 220, 185, 129, 182, 214, 210, 180, 28, 84, 117, 228, 213, 205, 240, 125, 37, 181, 234, 246, 54, 22, 195, 38, 174, 212, 166, 9, 237, 25, 225, 81, 23, 244, 235, 171, 196, 111, 182, 227, 26, 22, 246, 35, 52, 225, 249, 90, 237, 162, 111, 76, 52, 35, 24, 16, 11, 7, 5, 3, 2, 1, 1]
I try to convert all key values to hex by used the following code:
K=hex(key)
print('nx key:', key, "\n")
But when run I got the error "TypeError: 'list' object cannot be interpreted as an integer"
Then try to use "K= hex(ord(key))" but also got another error "TypeError: ord() expected string of length 1, but list found"
What I need is to convert all keys to hex, then select just 4 keys to be like this
K = (0x3412, 0x7856, 0xBC9A, 0xF0DE)
| [
"In order to get hex values for your list of keys, you have to iterate over the list and turn each element seperately into a hex value:\nK = tuple(hex(x) for x in key)\n\nThen you can select 4 random keys (no repeat) from this list by:\nimport random\nselectedKeys = random.sample(K, 4)\n\n",
"Maybe a better name for key is keys, cause is a list of keys. That said\n[hex(key) for key in keys]\nshould do the trick.\nThis a is a usage of list comprehension\n",
"I might be able to help you with your error.\nBased on your output with your values wrapped in [], you have a list for key. What you then want to do is iterate through each element in that list to apply your hex.\nhexed_keys = [hex(i) for i in key]\n\nGood luck and happy coding! Please up vote my answer if useful so I can contribute more on Stack Overflow:)\n"
] | [
0,
0,
0
] | [] | [] | [
"hex",
"python"
] | stackoverflow_0074666330_hex_python.txt |
Q:
Python read in file: ERROR: line contains NULL byte
I would like to parse an .ubx file (= my input file). This file contains many different NMEA sentences as well as raw receiver data. The output file should contain only information from GGA sentences. This works fine as long as the .ubx file does not contain any raw messages. However, if it contains raw data
I get the following error:
Traceback (most recent call last):
File "C:...myParser.py", line 25, in
for row in reader:
Error: line contains NULL byte
My code looks like this:
import csv
from datetime import datetime
import math
# adapt this to your file
INPUT_FILENAME = 'Rover.ubx'
OUTPUT_FILENAME = 'out2.csv'
# open the input file in read mode
with open(INPUT_FILENAME, 'r') as input_file:
    # open the output file in write mode
    with open(OUTPUT_FILENAME, 'wt') as output_file:
        # create a csv reader object from the input file (nmea files are basically csv)
        reader = csv.reader(input_file)
        # create a csv writer object for the output file
        writer = csv.writer(output_file, delimiter=',', lineterminator='\n')
        # write the header line to the csv file
        writer.writerow(['Time','Longitude','Latitude','Altitude','Quality','Number of Sat.','HDOP','Geoid seperation','diffAge'])
        # iterate over all the rows in the nmea file
        for row in reader:
            if row[0].startswith('$GNGGA'):
                time = row[1]
                # merge the time and date columns into one Python datetime object (usually more convenient than having both separately)
                date_and_time = datetime.strptime(time, '%H%M%S.%f')
                date_and_time = date_and_time.strftime('%H:%M:%S.%f')[:-6] #
                writer.writerow([date_and_time])
My .ubx file looks like this:
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.30,0.70,1.10*10
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.30,0.70,1.10*16
$GPGSV,4,1,13,02,08,040,17,04,,,47,05,18,071,44,09,02,348,24*49
$GPGSV,4,2,13,12,03,118,24,16,12,298,36,20,15,118,30,21,44,179,51*74
$GPGSV,4,3,13,23,06,324,35,25,37,121,47,26,40,299,48,29,60,061,49*73
$GPGSV,4,4,13,31,52,239,51*42
$GLGSV,3,1,10,65,07,076,24,70,01,085,,71,04,342,34,72,13,029,35*64
$GLGSV,3,2,10,78,35,164,41,79,75,214,48,80,34,322,46,81,79,269,49*64
$GLGSV,3,3,10,82,28,235,52,88,39,043,43*6D
$GNGLL,4951.69412,N,00839.03672,E,124610.00,A,D*71
$GNGST,124610.00,12,,,,0.010,0.010,0.010*4B
$GNZDA,124610.00,03,07,2016,00,00*79
µb< ¸½¸Abð½ . SB éF é v.¥ # 1 f =•Iè ,
Ïÿÿ£Ëÿÿd¡ ¬M 0+ùÿÿ³øÿÿµj #ª ² -K*
,¨ , éºJU /) ++ f 5 .lG NL C8G /{; „> é óK 3 — Bòl . "¿ 2 bm¡
4âH ÐM X cRˆ 35 »7 Óo‡ž "*ßÿÿØÜÿÿUhQ`
3ŒðÿÿÂïÿÿþþûù ÂÈÿÿñÅÿÿJX ES
$²I uM N:w (YÃÿÿV¿ÿÿ> =ìî 1¥éÿÿèÿÿmk³m /?ÔÿÿÒÿÿšz+Ú Ïÿÿ6ÍÿÿêwÇ\ ? ]? ˜B Aÿƒ y µbÐD‹lçtæ@p3,}ßœŒ-vAh
¿M"A‚UE ôû JQý
'wA´üát¸jžAÀ‚"Å
)DÂï–ŽtAöÙüñÅ›A|$Å ôû/ Ìcd§ÇørA†áãì˜AØY–Ä ôû1 /Áƒ´zsAc5+_’ô™AìéNÅ ôû( ¶y(,wvAFøÈV§ƒA˜ÝwE ôû$ _S R‰wAhÙ]‘ÑëžAÇ9Å vwAòܧsAŒöƒd§Ò™AÜOÄ ôû3 kœÕ}vA;D.ž‡žAÒûàÄ @ˆ" ϬŸ ntAfˆÞ3ךA~Y2E ôû3 :GVtAæ93l)ÆšAß yE ôû4 Uþy.TwA<âƒ' ¦žAhmëC ôû" ¯4Çï ›wAþ‰Ì½6ŸAŠû¶D ~~xI]tA<ÞÿrÁšAmHE ôû/ ÖÆ@ÈgŸsAXnþ‚†4šA'0tE ôû. ·ÈO:’
sA¢B†i™Aë%
E ôû/ >Þ,À8vA°‚9êœA>ÇD ôû, ø(¼+çŠuAÆOÁ לAÈΆD
ôû# ¨Ä-_c¯qAuÓ?]> —AÐкà ôû0 ÆUV¨ØZsA]ðÛñß™AÛ'Å ôû, ™mv7žqAYÐ:›Ä‘—AdWxD ôû1 ûö>%vA}„
ëV˜A.êbE
AÝ$GNRMC,124611.00,A,4951.69413,N,00839.03672,E,0.009,,030716,,,D*62
$GNVTG,,T,,M,0.009,N,0.016,K,D*36
$GNGNS,124611.00,4951.69413,N,00839.03672,E,RR,15,0.70,162.5,47.6,1.0,0000*42
$GNGGA,124611.00,4951.69413,N,00839.03672,E,4,12,0.70,162.5,M,47.6,M,1.0,0000*6A
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.31,0.70,1.10*11
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.31,0.70,1.10*17
$GPGSV,4,1,13,02,08,040,18,04,,,47,05,18,071,44,09,02,348,21*43
$GPGSV,4,2,13,12,03,118,24,16,
I already searched for similar problems. However I was not able to find a solution which workes for me.
I ended up with code like that:
import csv
def unfussy_reader(csv_reader):
    while True:
        try:
            yield next(csv_reader)
        except csv.Error:
            # log the problem or whatever
            print("Problem with some row")
            continue

if __name__ == '__main__':
    #
    # Generate malformed csv file for
    # demonstration purposes
    #
    with open("temp.csv", "w") as fout:
        fout.write("abc,def\nghi\x00,klm\n123,456")

    #
    # Open the malformed file for reading, fire up a
    # conventional CSV reader over it, wrap that reader
    # in our "unfussy" generator and enumerate over that
    # generator.
    #
    with open("Rover.ubx") as fin:
        reader = unfussy_reader(csv.reader(fin))
        for n, row in enumerate(reader):
            fout.write(row[0])
However, I was not able to simply write a file containing all the rows read in with the unfussy_reader wrapper using the above code.
Would be glad if you could help me.
Here is an image of how the .ubx file looks in Notepad++: image
Thanks!
A:
I am not quite sure but your file looks pretty binary. You should try to open it as such
with open(INPUT_FILENAME, 'rb') as input_file:
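A minimal sketch of that idea (file names and the GGA field index are taken from the question; everything else is an assumption): read the file in binary, keep only lines that decode cleanly and start with $GNGGA, and write the parsed time to the CSV.
import csv
from datetime import datetime

with open('Rover.ubx', 'rb') as input_file, open('out2.csv', 'w', newline='') as output_file:
    writer = csv.writer(output_file)
    writer.writerow(['Time'])
    for raw_line in input_file:
        try:
            # raw receiver packets contain non-ASCII bytes and are skipped here
            line = raw_line.decode('ascii').strip()
        except UnicodeDecodeError:
            continue
        if not line.startswith('$GNGGA'):
            continue
        fields = line.split(',')
        time_str = datetime.strptime(fields[1], '%H%M%S.%f').strftime('%H:%M:%S')
        writer.writerow([time_str])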
A:
It seems like you did not open the file with the correct encoding,
so the raw message cannot be read correctly.
If it is encoded as UTF-8, you need to open the file with the encoding option:
with open(INPUT_FILENAME, 'r', newline='', encoding='utf8') as input_file
A:
Hey, if anyone else has this problem of reading NMEA sentences out of uBlox .ubx files,
this Python code worked for me:
def read_in():
    with open('GNGGA.txt', 'w') as GNGGA:
        with open('GNRMC.txt','w') as GNRMC:
            with open('rover.ubx', 'rb') as f:
                for line in f:
                    #print line
                    if line.startswith('$GNGGA'):
                        #print line
                        GNGGA.write(line)
                    if line.startswith('$GNRMC'):
                        GNRMC.write(line)

read_in()
A:
You could also use the gnssdump command line utility which is installed with the PyGPSClient and pygnssutils Python packages.
e.g.
gnssdump filename=Rover.ubx msgfilter=GNGGA
See gnssdump -h for help.
Alternatively if you want a simple Python script you could use the pyubx2 Python package, e.g.
from pyubx2 import UBXReader

with open("Rover.ubx", "rb") as stream:
    ubr = UBXReader(stream)
    for (_, parsed_data) in ubr.iterate():
        if parsed_data.identity in ("GNGGA", "GNRMC"):
            print(parsed_data)
| Python read in file: ERROR: line contains NULL byte | I would like to parse an .ubx File(=my input file). This file contains many different NMEA sentences as well as raw receiver data. The output file should just contain informations out of GGA sentences. This works fine as far as the .ubx File does not contain any raw messages. However if it contains raw data
I get the following error:
Traceback (most recent call last):
File "C:...myParser.py", line 25, in
for row in reader:
Error: line contains NULL byte
My code looks like this:
import csv
from datetime import datetime
import math
# adapt this to your file
INPUT_FILENAME = 'Rover.ubx'
OUTPUT_FILENAME = 'out2.csv'
# open the input file in read mode
with open(INPUT_FILENAME, 'r') as input_file:
# open the output file in write mode
with open(OUTPUT_FILENAME, 'wt') as output_file:
# create a csv reader object from the input file (nmea files are basically csv)
reader = csv.reader(input_file)
# create a csv writer object for the output file
writer = csv.writer(output_file, delimiter=',', lineterminator='\n')
# write the header line to the csv file
writer.writerow(['Time','Longitude','Latitude','Altitude','Quality','Number of Sat.','HDOP','Geoid seperation','diffAge'])
# iterate over all the rows in the nmea file
for row in reader:
if row[0].startswith('$GNGGA'):
time = row[1]
# merge the time and date columns into one Python datetime object (usually more convenient than having both separately)
date_and_time = datetime.strptime(time, '%H%M%S.%f')
date_and_time = date_and_time.strftime('%H:%M:%S.%f')[:-6] #
writer.writerow([date_and_time])
My .ubx file looks like this:
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.30,0.70,1.10*10
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.30,0.70,1.10*16
$GPGSV,4,1,13,02,08,040,17,04,,,47,05,18,071,44,09,02,348,24*49
$GPGSV,4,2,13,12,03,118,24,16,12,298,36,20,15,118,30,21,44,179,51*74
$GPGSV,4,3,13,23,06,324,35,25,37,121,47,26,40,299,48,29,60,061,49*73
$GPGSV,4,4,13,31,52,239,51*42
$GLGSV,3,1,10,65,07,076,24,70,01,085,,71,04,342,34,72,13,029,35*64
$GLGSV,3,2,10,78,35,164,41,79,75,214,48,80,34,322,46,81,79,269,49*64
$GLGSV,3,3,10,82,28,235,52,88,39,043,43*6D
$GNGLL,4951.69412,N,00839.03672,E,124610.00,A,D*71
$GNGST,124610.00,12,,,,0.010,0.010,0.010*4B
$GNZDA,124610.00,03,07,2016,00,00*79
µb< ¸½¸Abð½ . SB éF é v.¥ # 1 f =•Iè ,
Ïÿÿ£Ëÿÿd¡ ¬M 0+ùÿÿ³øÿÿµj #ª ² -K*
,¨ , éºJU /) ++ f 5 .lG NL C8G /{; „> é óK 3 — Bòl . "¿ 2 bm¡
4âH ÐM X cRˆ 35 »7 Óo‡ž "*ßÿÿØÜÿÿUhQ`
3ŒðÿÿÂïÿÿþþûù ÂÈÿÿñÅÿÿJX ES
$²I uM N:w (YÃÿÿV¿ÿÿ> =ìî 1¥éÿÿèÿÿmk³m /?ÔÿÿÒÿÿšz+Ú Ïÿÿ6ÍÿÿêwÇ\ ? ]? ˜B Aÿƒ y µbÐD‹lçtæ@p3,}ßœŒ-vAh
¿M"A‚UE ôû JQý
'wA´üát¸jžAÀ‚"Å
)DÂï–ŽtAöÙüñÅ›A|$Å ôû/ Ìcd§ÇørA†áãì˜AØY–Ä ôû1 /Áƒ´zsAc5+_’ô™AìéNÅ ôû( ¶y(,wvAFøÈV§ƒA˜ÝwE ôû$ _S R‰wAhÙ]‘ÑëžAÇ9Å vwAòܧsAŒöƒd§Ò™AÜOÄ ôû3 kœÕ}vA;D.ž‡žAÒûàÄ @ˆ" ϬŸ ntAfˆÞ3ךA~Y2E ôû3 :GVtAæ93l)ÆšAß yE ôû4 Uþy.TwA<âƒ' ¦žAhmëC ôû" ¯4Çï ›wAþ‰Ì½6ŸAŠû¶D ~~xI]tA<ÞÿrÁšAmHE ôû/ ÖÆ@ÈgŸsAXnþ‚†4šA'0tE ôû. ·ÈO:’
sA¢B†i™Aë%
E ôû/ >Þ,À8vA°‚9êœA>ÇD ôû, ø(¼+çŠuAÆOÁ לAÈΆD
ôû# ¨Ä-_c¯qAuÓ?]> —AÐкà ôû0 ÆUV¨ØZsA]ðÛñß™AÛ'Å ôû, ™mv7žqAYÐ:›Ä‘—AdWxD ôû1 ûö>%vA}„
ëV˜A.êbE
AÝ$GNRMC,124611.00,A,4951.69413,N,00839.03672,E,0.009,,030716,,,D*62
$GNVTG,,T,,M,0.009,N,0.016,K,D*36
$GNGNS,124611.00,4951.69413,N,00839.03672,E,RR,15,0.70,162.5,47.6,1.0,0000*42
$GNGGA,124611.00,4951.69413,N,00839.03672,E,4,12,0.70,162.5,M,47.6,M,1.0,0000*6A
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.31,0.70,1.10*11
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.31,0.70,1.10*17
$GPGSV,4,1,13,02,08,040,18,04,,,47,05,18,071,44,09,02,348,21*43
$GPGSV,4,2,13,12,03,118,24,16,
I already searched for similar problems. However I was not able to find a solution which workes for me.
I ended up with code like that:
import csv
def unfussy_reader(csv_reader):
while True:
try:
yield next(csv_reader)
except csv.Error:
# log the problem or whatever
print("Problem with some row")
continue
if __name__ == '__main__':
#
# Generate malformed csv file for
# demonstration purposes
#
with open("temp.csv", "w") as fout:
fout.write("abc,def\nghi\x00,klm\n123,456")
#
# Open the malformed file for reading, fire up a
# conventional CSV reader over it, wrap that reader
# in our "unfussy" generator and enumerate over that
# generator.
#
with open("Rover.ubx") as fin:
reader = unfussy_reader(csv.reader(fin))
for n, row in enumerate(reader):
fout.write(row[0])
However I was not able to simply write a file containing just all the rows read in with the unfuss_reader wrapper using the above code.
Would be glad if you could help me.
Here is an Image of how the .ubx file looks in notepad++image
Thanks!
| [
"I am not quite sure but your file looks pretty binary. You should try to open it as such\nwith open(INPUT_FILENAME, 'rb') as input_file:\n\n",
"It seems like you did not open the file with correct coding format.\nSo the raw message cannot be read correctly.\nIf it is encoded as UTF8, you need to open the file with coding option:\nwith open(INPUT_FILENAME, 'r', newline='', encoding='utf8') as input_file\n\n",
"Hey if anyone else has this proglem to read in NMEA sentences of uBlox .ubx files\nthis pyhton code worked for me:\ndef read_in():\nwith open('GNGGA.txt', 'w') as GNGGA:\n with open('GNRMC.txt','w') as GNRMC:\n with open('rover.ubx', 'rb') as f:\n for line in f:\n #print line\n if line.startswith('$GNGGA'):\n #print line\n GNGGA.write(line)\n if line.startswith('$GNRMC'):\n GNRMC.write(line)\n\nread_in()\n",
"You could also use the gnssdump command line utility which is installed with the PyGPSClient and pygnssutils Python packages.\ne.g.\ngnssdump filename=Rover.ubx msgfilter=GNGGA\n\nSee gnssdump -h for help.\nAlternatively if you want a simple Python script you could use the pyubx2 Python package, e.g.\nfrom pyubx2 import UBXReader\n\nwith open(\"Rover.ubx\", \"rb\") as stream:\n\n ubr = UBXReader(stream)\n for (_, parsed_data) in ubr.iterate():\n if parsed_data.identity in (\"GNGGA\", \"GNRMC\"):\n print(parsed_data)\n\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"nmea",
"parsing",
"python"
] | stackoverflow_0038179492_nmea_parsing_python.txt |
Q:
Save & load best model in AutoTS python
After fitting an AutoTS model over some time series data, how can I save & load the best trained model? The AutoTS object has export_template() & import_template() functions to save the best model, but when loading the best model from this template, it requires re-fitting. How can such a solution be used in production? My code:
from autots import AutoTS
model = AutoTS(
frequency='infer',
prediction_interval=0.9,
ensemble=None,
model_list="fast", # "superfast", "default", "fast_parallel"
transformer_list="fast", # "superfast",
drop_most_recent=1,
max_generations=4,
num_validations=2,
validation_method="backwards")
model.fit(df_day,date_col='xyz',value_col='abc')
model.export_template("unique_user_1", models='best', n=1, max_per_model_class=3)
Now, in some new instance, when I do
model = model.import_template('unique_user_1.csv',method='only')
The model required retraining.
A:
The major issue with your code is that you have named your export template 'unique_user_1' without an extension. Try saving it as a csv file, 'unique_user_1.csv'.
Once you feel you are done with training your model, write the following lines:
model.export_template(
    "unique_user_1.csv",
    models="best",
    max_per_model_class=1,
    include_results=True,
)
To load the template & reuse it:
model = model.import_template(
    "unique_user_1.csv",
    method="only",
    enforce_model_list=True,
)
model.fit(data)
prediction = model.predict(forecast_length=15)
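If the requirement is to reuse the fitted model in production without any re-fit, a common (though version-sensitive) fallback is to persist the fitted AutoTS object itself with pickle — this is a generic Python sketch, not an AutoTS-specific API:
import pickle

# after model.fit(...) has completed
with open("autots_model.pkl", "wb") as f:
    pickle.dump(model, f)

# later, e.g. in the serving process (same autots/pandas versions assumed)
with open("autots_model.pkl", "rb") as f:
    model = pickle.load(f)
prediction = model.predict(forecast_length=15)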
| Save & load best model in AutoTS python | After fitting AutoTS model over some time series data, how can I save & load the best model trained? Though, the AutoTS object has export_template() & import_template() functions to save best model, but while loading best model from this template, it requires re-fitting. How can such a solution be used in production? My code:
from autots import AutoTS
model = AutoTS(
frequency='infer',
prediction_interval=0.9,
ensemble=None,
model_list="fast", # "superfast", "default", "fast_parallel"
transformer_list="fast", # "superfast",
drop_most_recent=1,
max_generations=4,
num_validations=2,
validation_method="backwards")
model.fit(df_day,date_col='xyz',value_col='abc')
model.export_template("unique_user_1", models='best', n=1, max_per_model_class=3)
Now, in some new instance, when I do
model = model.import_template('unique_user_1.csv',method='only')
The model required retraining.
| [
"The major issue with your code is that you have named your export_template as 'unique_user_1' without an extension. Try saving it as csv file with 'unique_user_1.csv'\nOnce you feel you are done with training your model. Write the following lines\nmodel.export_template(\n\"unique_user_1.csv\",\nmodels=\"best\",\nmax_per_model_class=1,\ninclude_results=True,\n\n)\nTo load the template & reuse it\nmodel = model.import_template(\n\"unique_user_1.csv\",\nmethod=\"only\",\nenforce_model_list=True,)\nmodel.fit(data)\nprediction = model.predict(forecast_length=15)\n\n"
] | [
0
] | [] | [] | [
"data_science",
"forecasting",
"machine_learning",
"python",
"time_series"
] | stackoverflow_0072123229_data_science_forecasting_machine_learning_python_time_series.txt |
Q:
pandas apply subtractions on columns function when indexes are not equal, based on alignment in another columns
I have two dataframes:
df1 =
C0 C1. C2.
4 AB. 1. 2
5 AC. 7 8
6 AD. 9. 9
7 AE. 2. 6
8 AG 8. 9
df2 =
C0 C1. C2
8 AB 0. 1
9 AE. 6. 3
10 AD. 1. 2
I want to apply a subtraction between these two dataframes, such that when the value of the column C0 is the same I will get the subtraction, and when it is not, a bool column will have the value False. Notice that the current indices are not aligned.
So new df1 should be:
df1 =
C0 C1. C2. diff_C1 match
4 AB. 1. 2. 1. True
5 AC. 7 8. 0. False
6 AD. 9. 9. 8. True
7 AE. 2. 6. -4. True
8 AG 8. 9. 0 False
What is the best way to do it?
A:
A possible solution, based on pandas.DataFrame.merge:
(df1.merge(df2.iloc[:,:-1], on='C0', suffixes=['', 'y'], how='left')
.rename({'C1.y': 'diff_C1'}, axis=1)
.assign(diff_C1 = lambda x: x['C1.'].sub(x['diff_C1']))
.assign(match = lambda x: x['diff_C1'].notna())
.fillna(0))
Output:
C0 C1. C2. diff_C1 match
0 AB. 1.0 2 1.0 True
1 AC. 7.0 8 0.0 False
2 AD. 9.0 9 8.0 True
3 AE. 2.0 6 -4.0 True
4 AG. 8.0 9 0.0 False
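An equivalent variant (a sketch, reusing df1 and df2 exactly as defined in the question) that derives the match flag from pandas' merge indicator instead of the NaN check:
out = df1.merge(df2[['C0', 'C1.']], on='C0', how='left', suffixes=('', '_2'), indicator=True)
out['match'] = out['_merge'].eq('both')               # True only when C0 exists in df2
out['diff_C1'] = out['C1.'].sub(out['C1._2']).fillna(0)
out = out.drop(columns=['C1._2', '_merge'])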
A:
You can try merging the dataframes using pandas.DataFrame.merge on column C0 with how='left', as shown below:
df = (df1.merge(df2, how='left', on='C0')
      .assign(match=lambda x: x['C1_y'].notna())
      .fillna(0))
Output:
then subtract the C1 columns, i.e. C1_x and C1_y:
df['C1_diff'] = df['C1_x'] - df['C1_y']
| pandas apply subtractions on columns function when indexes are not equal, based on alignment in another columns | I have two dataframes:
df1 =
C0 C1. C2.
4 AB. 1. 2
5 AC. 7 8
6 AD. 9. 9
7 AE. 2. 6
8 AG 8. 9
df2 =
C0 C1. C2
8 AB 0. 1
9 AE. 6. 3
10 AD. 1. 2
I want to apply a subtraction between these two dataframes, such that when the value of the columns C0 is the same - I will get the subsraction, and when is not - a bool column will have the value False. notice that current indeics are not aligned.
So new df1 should be:
df1 =
C0 C1. C2. diff_C1 match
4 AB. 1. 2. 1. True
5 AC. 7 8. 0. False
6 AD. 9. 9. 8. True
7 AE. 2. 6. -4. True
8 AG 8. 9. 0 False
What is the best way to do it?
| [
"A possible solution, based on pandas.DataFrame.merge:\n(df1.merge(df2.iloc[:,:-1], on='C0', suffixes=['', 'y'], how='left')\n .rename({'C1.y': 'diff_C1'}, axis=1)\n .assign(diff_C1 = lambda x: x['C1.'].sub(x['diff_C1']))\n .assign(match = lambda x: x['diff_C1'].notna())\n .fillna(0))\n\nOutput:\n C0 C1. C2. diff_C1 match\n0 AB. 1.0 2 1.0 True\n1 AC. 7.0 8 0.0 False\n2 AD. 9.0 9 8.0 True\n3 AE. 2.0 6 -4.0 True\n4 AG. 8.0 9 0.0 False\n\n",
"You can try merging the columns using pandas.DataFrame.merge on column C0 and how as left as shown below\ndf1.merge(df2, how='left', on='C0')\n .assign(match=lambda x: x['C1_y'].notna())\n .fillna(0)\n\nOutput:\n\nthen subtract the C1 columns i.e. C1_x and C1_y\ndf['C1_diff'] = df['C1_x'] - df['C1_y']\n\n\n"
] | [
1,
0
] | [] | [] | [
"data_munging",
"data_science",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074666280_data_munging_data_science_dataframe_pandas_python.txt |
Q:
ImproperlyConfigured AUTH_USER_MODEL refers to model 'core.User' that has not been installed
I am calling this method in my core app - models.py,
from django.contrib.auth import get_user_model
User = get_user_model()
I am getting error,
Exception has occurred: ImproperlyConfigured (note: full exception trace is shown but execution is paused at: <module>)
AUTH_USER_MODEL refers to model 'core.User' that has not been installed
debugger points to this line
A:
I found the problem:
User = get_user_model()
I had pasted the above code at the top of the core app's models.py, so it ran at import time, before the custom User model was installed.
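In other words, get_user_model() must not run at import time in the app that defines the custom user model. A sketch of the two usual safe patterns (the Profile model and make_profile function are just illustrative names):
from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import models

class Profile(models.Model):
    # reference the user model lazily through the settings string
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)

def make_profile(username):
    User = get_user_model()  # resolved at call time, after the app registry is ready
    return Profile.objects.create(owner=User.objects.get(username=username))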
| ImproperlyConfigured AUTH_USER_MODEL refers to model 'core.User' that has not been installed | I am calling this method in my core app - models.py,
from django.contrib.auth import get_user_model
User = get_user_model()
I am getting error,
Exception has occurred: ImproperlyConfigured (note: full exception trace is shown but execution is paused at: <module>)
AUTH_USER_MODEL refers to model 'core.User' that has not been installed
debugger points to this line
| [
"I found the problem,\nUser = get_user_model()\n\nI had pasted follwing code inside the core app models.py\n"
] | [
0
] | [] | [] | [
"django",
"django_models",
"python",
"python_3.x"
] | stackoverflow_0074666310_django_django_models_python_python_3.x.txt |
Q:
Match with Django import_export with multiple fields
I would like to import a CSV in Django. The issue occurs when trying to import based on the attributes. Here is my code:
class Event(models.Model):
    id = models.BigAutoField(primary_key=True)
    amount = models.ForeignKey(Amount, on_delete=models.CASCADE)
    value = models.FloatField()
    space = models.ForeignKey(Space, on_delete=models.RESTRICT)
    time = models.ForeignKey(Time, on_delete=models.RESTRICT)

    class Meta:
        managed = True
        db_table = "event"


class Space(models.Model):
    objects = SpaceManager()
    id = models.BigAutoField(primary_key=True)
    code = models.CharField(max_length=100)
    type = models.ForeignKey(SpaceType, on_delete=models.RESTRICT)
    space_date = models.DateField(blank=True, null=True)

    def natural_key(self):
        return self.code  # + self.type + self.source_date

    def __str__(self):
        return f"{self.name}"

    class Meta:
        managed = True
        db_table = "space"


class Time(models.Model):
    objects = TimeManager()
    id = models.BigAutoField(primary_key=True)
    type = models.ForeignKey(TimeType, on_delete=models.RESTRICT)
    startdate = models.DateTimeField()
    enddate = models.DateTimeField()

    def natural_key(self):
        return self.name

    def __str__(self):
        return f"{self.name}"

    class Meta:
        managed = True
        db_table = "time"
Now, I create the resource that should find the right objects, but it seems it does not enter into ForeignKeyWidget(s) at all:
class AmountForeignKeyWidget(ForeignKeyWidget):
    def clean(self, value, row=None, **kwargs):
        logger.critical("<<<<< {AmountForeignKeyWidget} <<<<<<<")
        name_upper = value.upper()
        amount = Amount.objects.get_by_natural_key(name=name_upper)
        return amount


class SpaceForeignKeyWidget(ForeignKeyWidget):
    def clean(self, value, row, **kwargs):
        logger.critical("<<<<< {SpaceForeignKeyWidget} <<<<<<<")
        space_code = row["space_code"]
        space_type = SpatialDimensionType.objects.get_by_natural_key(row["space_type"])
        try:
            space_date = datetime.strptime(row["space_date"], "%Y%m%d")
        except ValueError:
            space_date = None

        space = Space.objects.get(
            code=space_code, type=space_type, source_date=space_date
        )
        return space


class TimeForeignKeyWidget(ForeignKeyWidget):
    def clean(self, value, row, **kwargs):
        logger.critical("<<<<< {TimeForeignKeyWidget} <<<<<<<")
        time_type = TimeType.objects.get_by_natural_key(row["time_type"])
        time_date = parse_datetime(row["time_date"])
        time, created = Time.objects.get_or_create(
            type=time_type, startdate=time_date, defaults={...}
        )
        return time
class EventResource(ModelResource):
    amount = Field(
        column_name="amount",
        attribute="amount",
        widget=AmountForeignKeyWidget(Amount),
    )
    space = Field(
        # column_name="space_code",
        attribute="space",
        widget=SpaceForeignKeyWidget(Space),
    )
    time = Field(
        attribute="time",
        widget=TimeForeignKeyWidget(Time),
    )

    def before_import_row(self, row, row_number=None, **kwargs):
        logger.error(f">>>> before_import_row() >>>>>>")
        time_date = datetime.strptime(row["time_date"], "%Y%m%d").date()
        time_type = TimeType.objects.get_by_natural_key(row["time_type"])
        Time.objects.get_or_create(
            type=time_type, startdate=time_date,
            defaults={
                "name": str(time_type) + str(time_date),
                "type": time_type,
                "startdate": time_date,
                "enddate": time_date + timedelta(days=1),
            },
        )

    class Meta:
        model = Event
I added some loggers, but I only print out the log at AmountForeignKeyWidget. The main question is: How to search for objects in Space by attributes (space_code,space_type,space_date) and in Time search and create by (time_date,time_type)
A lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used?
A:
The main question is: How to search for objects in Space by attributes (space_code,space_type,space_date) and in Time search and create by (time_date,time_type)
It looks like you are searching for these objects correctly, but it might not be being called. Often with import-export you will save yourself a lot of time if you setup your debugger and step through the code.
It could be that there isn't a 'space' or a 'time' column in your source csv. If there are no such fields, then the import process will silently skip this declaration. If you need to create objects if they don't exist, it's probably best to use before_import_row() for this, as you do in your example. Ensure that you use get_or_create() so that re-runs of the import are handled correctly.
Update
I believe the use case you have is that you need to link relations (Time, Space) to an Event instance during import, but there is no single field which identifies the relations. Instead, they are defined by a combination of fields.
This use case can be handled by import-export but it requires overriding the correct functions. We need to create relations if they don't exist, and then link the created relation instances to the model instance. Therefore we need to find a method in the code base which takes both the instance and the row as params. Unfortunately this is not as well defined as it could be in the code base (before_save_instance() would be a good candidate), but there is an method called import_obj() which we can use.
def import_obj(self, obj, data, dry_run, **kwargs):
    # 'obj' is the object instance
    # 'data' is the row data
    # go ahead and create the relation objects
    time_type = TimeType.objects.get_by_natural_key(data["time_type"])
    time_date = parse_datetime(data["time_date"])
    obj.time, _ = Time.objects.get_or_create(
        type=time_type, startdate=time_date, defaults={...}
    )
    # other relation creations omitted...
    super().import_obj(obj, data, dry_run, **kwargs)
A lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used?
As above, if there is no 'space' or 'time' column in the source data, then they will never be called.
It shouldn't make a difference but your clean() method declaration does not define row as a kwarg in SpaceForeignKeyWidget and TimeForeignKeyWidget. Change the clean() definition to:
def clean(self, value, row=None, **kwargs):
    # your implementation here
I can't see that this will fix it but maybe when running in your context it is an issue.
Note that there are some changes you can make to improve your code.
For AmountForeignKeyWidget, if you only need to look up by one value, you can change your resource declaration to this:
class EventResource(ModelResource):
    amount = Field(
        column_name="amount",
        attribute="amount",
        widget=ForeignKeyWidget(Amount, field="name__iexact"),
    )
You don't need any extra logic, and the lookup will be case-insensitive.
A:
I managed to solve all the issues and make proper imports. Following is the code I used:
class EventResource(ModelResource):
    amount = Field(
        column_name="amount",
        attribute="amount",
        widget=ForeignKeyWidget(Amount, field="name__iexact"),
    )
    space_code = Field(
        attribute="space",
        widget=SpaceForeignKeyWidget(Space),
    )
    time_date = Field(
        attribute="time",
        widget=TimeForeignKeyWidget(Time),
    )

    class Meta:
        model = Event
For the amount field I don't need to make a derivative Widget, since it is using only one variable in CSV. For the two others, implementation follows. I noticed that the widgets for the two other variables were not called and the reason is the variable names were non-existent in my CSV file. When I renamed them to the column names existing in the CSV they have been called.
class SpaceForeignKeyWidget(ForeignKeyWidget):
    def clean(self, value, row, **kwargs):
        space_code = row["spacial_code"]
        space_type = SpaceDimensionType.objects.get(type=row["space_type"])
        try:
            space_date = datetime.strptime(row["space_date"], "%Y%m%d")
        except ValueError:
            space_date = None

        space = SpaceDimension.objects.get(
            code=space_code, type=space_type, source_date=space_date
        )
        return space


class TimeForeignKeyWidget(ForeignKeyWidget):
    def clean(self, value, row, **kwargs):
        time_type = TimeDimensionType.objects.get(type=row["time_type"])
        delta = T_TYPES[time_type]

        start_date = datetime.strptime(row["time_date"], "%Y%m%d").date()
        end_date = start_date + timedelta(days=delta)
        time, created = TimeDimension.objects.get_or_create(
            type=time_type,
            startdate=start_date,
            enddate=start_date + timedelta(days=delta),
            defaults={
                "name": f"{time_type}: {start_date}-{end_date}",
                "type": time_type,
                "startdate": start_date,
                "enddate": end_date,
            },
        )
        return time
SpaceForeignKeyWidget only looks up an existing record and returns the object, while TimeForeignKeyWidget creates the record if it does not exist and returns it. This way there is no need to use before_import_row() and all the logic is localized to these two widgets.
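For completeness, a sketch of how such a resource is typically driven outside the admin (the CSV file name is an assumption):
import tablib

resource = EventResource()
with open("events.csv") as f:
    dataset = tablib.Dataset().load(f.read(), format="csv")

# dry_run=True validates the whole file without writing to the database
result = resource.import_data(dataset, dry_run=True)
if not result.has_errors():
    resource.import_data(dataset, dry_run=False)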
| Match with Django import_export with multiple fields | I would like to import a CSV in Django. The issue occurs when trying to import based on the attributes. Here is my code:
class Event(models.Model):
id = models.BigAutoField(primary_key=True)
amount = models.ForeignKey(Amount, on_delete=models.CASCADE)
value = models.FloatField()
space = models.ForeignKey(Space, on_delete=models.RESTRICT)
time = models.ForeignKey(Time, on_delete=models.RESTRICT)
class Meta:
managed = True
db_table = "event"
class Space(models.Model):
objects = SpaceManager()
id = models.BigAutoField(primary_key=True)
code = models.CharField(max_length=100)
type = models.ForeignKey(SpaceType, on_delete=models.RESTRICT)
space_date = models.DateField(blank=True, null=True)
def natural_key(self):
return self.code # + self.type + self.source_date
def __str__(self):
return f"{self.name}"
class Meta:
managed = True
db_table = "space"
class Time(models.Model):
objects = TimeManager()
id = models.BigAutoField(primary_key=True)
type = models.ForeignKey(TimeType, on_delete=models.RESTRICT)
startdate = models.DateTimeField()
enddate = models.DateTimeField()
def natural_key(self):
return self.name
def __str__(self):
return f"{self.name}"
class Meta:
managed = True
db_table = "time"
Now, I create the resource that should find the right objects, but it seems it does not enter into ForeignKeyWidget(s) at all:
class AmountForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row=None, **kwargs):
logger.critical("<<<<< {AmountForeignKeyWidget} <<<<<<<")
name_upper = value.upper()
amount = Amount.objects.get_by_natural_key(name=name_upper)
return amount
class SpaceForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
logger.critical("<<<<< {SpaceForeignKeyWidget} <<<<<<<")
space_code = row["space_code"]
space_type = SpatialDimensionType.objects.get_by_natural_key(row["space_type"])
try:
space_date = datetime.strptime(row["space_date"], "%Y%m%d")
except ValueError:
space_date = None
space = Space.objects.get(
code=space_code, type=space_type, source_date=space_date
)
return space
class TimeForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
logger.critical("<<<<< {TimeForeignKeyWidget} <<<<<<<")
time_type = TimeType.objects.get_by_natural_key(row["time_type"])
time_date = parse_datetime(row["time_date"])
time = Time.objects.get_or_create(
type=time_type, startdate=time_date), defaults={...}
)
return time
class EventResource(ModelResource):
amount = Field(
column_name="amount",
attribute="amount",
widget=AmountForeignKeyWidget(Amount),
)
space = Field(
# column_name="space_code",
attribute="space",
widget=SpaceForeignKeyWidget(Space),
)
time = Field(
attribute="time",
widget=TimeForeignKeyWidget(Time),
)
def before_import_row(self, row, row_number=None, **kwargs):
logger.error(f">>>> before_import_row() >>>>>>")
time_date = datetime.strptime(row["time_date"], "%Y%m%d").date()
time_type = TimeType.objects.get_by_natural_key(row["time_type"])
Time.objects.get_or_create(
type=time_type, startdate=time_date,
defaults={
"name": str(time_type) + str(time_date),
"type": time_type,
"startdate": time_date,
"enddate": time_date + timedelta(days=1),
},
)
class Meta:
model = Event
I added some loggers, but I only print out the log at AmountForeignKeyWidget. The main question is: How to search for objects in Space by attributes (space_code,space_type,space_date) and in Time search and create by (time_date,time_type)
A lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used?
| [
"\nThe main question is: How to search for objects in Space by attributes (space_code,space_type,space_date) and in Time search and create by (time_date,time_type)\n\nIt looks like you are searching for these objects correctly, but it might not be being called. Often with import-export you will save yourself a lot of time if you setup your debugger and step through the code.\nIt could be that there isn't a 'space' or a 'time' column in your source csv. If there are no such fields, then the import process will silently skip this declaration. If you need to create objects if they don't exist, it's probably best to use before_import_row() for this, as you do in your example. Ensure that you use get_or_create() so that re-runs of the import are handled correctly.\nUpdate\nI believe the use case you have is that you need to link relations (Time, Space) to an Event instance during import, but there is no single field which identifies the relations. Instead, they are defined by a combination of fields.\nThis use case can be handled by import-export but it requires overriding the correct functions. We need to create relations if they don't exist, and then link the created relation instances to the model instance. Therefore we need to find a method in the code base which takes both the instance and the row as params. Unfortunately this is not as well defined as it could be in the code base (before_save_instance() would be a good candidate), but there is an method called import_obj() which we can use.\ndef import_obj(self, obj, data, dry_run, **kwargs):\n # 'obj' is the object instance\n # 'data' is the row data\n # go ahead and create the relation objects\n time_type = TimeType.objects.get_by_natural_key(row[\"time_type\"])\n time_date = parse_datetime(row[\"time_date\"])\n obj.time = Time.objects.get_or_create(\n type=time_type, startdate=time_date), defaults={...}\n )\n # other relation creations omitted...\n super().import_obj(obj, data, dry_run, **kwargs)\n\n\nA lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used?\n\nAs above, if there is no 'space' or 'time' column in the source data, then they will never be called.\nIt shouldn't make a difference but your clean() method declaration does not define row as a kwarg in SpaceForeignKeyWidget and TimeForeignKeyWidget. Change the clean() definition to:\ndef clean(self, value, row=None, **kwargs):\n # your implementation here\n\nI can't see that this will fix it but maybe when running in your context it is an issue.\nNote that there are some changes you can make to improve your code.\nFor AmountForeignKeyWidget, if you only need to look up by one value, you can change your resource declaration to this:\nclass EventResource(ModelResource):\n amount = Field(\n column_name=\"amount\",\n attribute=\"amount\",\n widget=ForeignKeyWidget(Amount, field=\"name__iexact\"),\n )\n\nYou don't need any extra logic, and the lookup will be case-insensitive.\n",
"I managed to solve all the issues and make proper imports. Following is the code I used:\nclass EventResource(ModelResource):\n amount = Field(\n column_name=\"amount\",\n attribute=\"amount\",\n widget=ForeignKeyWidget(Amount, field=\"name__iexact\"),\n )\n space_code = Field(\n attribute=\"space\",\n widget=SpaceForeignKeyWidget(Space),\n )\n time_date = Field(\n attribute=\"time\",\n widget=TimeForeignKeyWidget(Time),\n )\n\n class Meta:\n model = Event\n\nFor the amount field I don't need to make a derivative Widget, since it is using only one variable in CSV. For the two others, implementation follows. I noticed that the widgets for the two other variables were not called and the reason is the variable names were non-existent in my CSV file. When I renamed them to the column names existing in the CSV they have been called.\nclass SpaceForeignKeyWidget(ForeignKeyWidget):\n def clean(self, value, row, **kwargs):\n space_code = row[\"spacial_code\"]\n space_type = SpaceDimensionType.objects.get(type=row[\"space_type\"])\n try:\n space_date = datetime.strptime(row[\"space_date\"], \"%Y%m%d\")\n except ValueError:\n space_date = None\n\n space = SpaceDimension.objects.get(\n code=space_code, type=space_type, source_date=space_date\n )\n return space\n\n\nclass TimeForeignKeyWidget(ForeignKeyWidget):\n def clean(self, value, row, **kwargs):\n time_type = TimeDimensionType.objects.get(type=row[\"time_type\"])\n delta = T_TYPES[time_type]\n\n start_date = datetime.strptime(row[\"time_date\"], \"%Y%m%d\").date()\n end_date = start_date + timedelta(days=delta)\n time, created = TimeDimension.objects.get_or_create(\n type=time_type,\n startdate=start_date,\n enddate=start_date + timedelta(days=delta),\n defaults={\n \"name\": f\"{time_type}: {start_date}-{end_date}\",\n \"type\": time_type,\n \"startdate\": start_date,\n \"enddate\": end_date,\n },\n )\n return temporal\n\n\nSpaceForeignKeyWidget only searches it the record is existing and returns the object and TimeForeignKeyWidget creates if non-existing and returns the record. This way no need to use before_import_row() and all the logic is localized to this two widgets.\n"
] | [
1,
0
] | [] | [] | [
"django",
"django_import_export",
"python"
] | stackoverflow_0074647054_django_django_import_export_python.txt |
Q:
How to determine the majority of appearances of a list in list of lists. (Python)
I am trying to determine the majority in a list of lists for a project I am working on. My problem is that the code will run in an environment that does not allow me to use packages. Can someone refer me to an algorithm that does what I am asking, or let me know about a way to do it with pre-built functions in Python that don't require outside packages? Thank you for your time.
Example:
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
Output:
["hello", 1]
A:
You can actually use a dictionary to save the lists as keys and use the values as count. Then you can take the maximum count, to get your result.
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
# Make a dictionary:
dic = {}
# Loop over every item in the data
for item in data:
    # Convert to tuple, since a list is unhashable:
    entry = tuple(item)

    # Add one to the count
    # dic.get() gets the value of the entry in the dictionary
    # if this exists. Else, it sets the value to 0.
    dic[entry] = dic.get(entry, 0) + 1
# Get the maximum argument by using a lambda function
# on the items in the dictionary. Get the key by taking index 0.
result = max(dic.items(), key = lambda x: x[1])[0]
You might want to convert the tuple back to a list by
result = list(result)
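Running this on the sample data from the question, the counts come out as {('hello', 1): 3, ('other', 32): 1}, so:
print(result)  # ['hello', 1]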
A:
Here is one possible solution using the built-in Counter class from the collections module in Python:
from collections import Counter
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
# Create a list of all the elements in the sublists
elements = [element[0] for element in data]
# Use Counter to count the occurrences of each element
c = Counter(elements)
# Get the most common element
most_common_element = c.most_common(1)[0][0]
# Get the value of the most common element from the original data
for element in data:
    if element[0] == most_common_element:
        value = element[1]
        break
# Print the result
print([most_common_element, value])
A:
Try this:
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
for i in data:
    if data.count(i) == max(data.count(i) for i in data):
        res = i
print(res)
Or this:
res = [i for i in data if data.count(i) == max(data.count(i) for i in data)][0]
print(res)
Output:
['hello', 1]
| How to determine the majority of appearances of a list in list of lists. (Python) | I am trying to determine the majority in a list of lists for a project I am working on. My problem is that the code will run in an environment that not allow me to use packages. Can someone refer me to an algorithm that does what I am asking or let me know about a way to do it with pre built functions in python that don't require outside packages?. Thank you for your time.
Example:
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
Output:
["hello", 1]
| [
"You can actually use a dictionairy to save the lists as keys and use the values as count. Then you can take the maximum count, to get your result.\ndata = [ [\"hello\", 1], [\"hello\", 1], [\"hello\", 1], [\"other\", 32] ]\n\n# Make a dictionary:\ndic = {}\n\n# Loop over every item in the data\nfor item in data:\n\n # Convert to tuple, since a list is unhashable:\n entry = tuple(item)\n\n # Add one to the count\n # dic.get() gets the value of the entry in the dictionairy\n # if this exists. Else, it sets the value to 0.\n dic[entry] = dic.get(entry, 0) + 1\n\n# Get the maximum argument by using a lambda function \n# on the items in the dictionary. Get the key by taking index 0.\nresult = max(dic.items(), key = lambda x: x[1])[0]\n \n\nYou might want to convert the tuple back to a list by\nresult = list(result)\n\n",
"Here is one possible solution using the built-in Counter class from the collections module in Python:\nfrom collections import Counter\n\ndata = [ [\"hello\", 1], [\"hello\", 1], [\"hello\", 1], [\"other\", 32] ]\n\n# Create a list of all the elements in the sublists\nelements = [element[0] for element in data]\n\n# Use Counter to count the occurrences of each element\nc = Counter(elements)\n\n# Get the most common element\nmost_common_element = c.most_common(1)[0][0]\n\n# Get the value of the most common element from the original data\nfor element in data:\n if element[0] == most_common_element:\n value = element[1]\n break\n\n# Print the result\nprint([most_common_element, value])\n\n",
"Try this:\ndata = [ [\"hello\", 1], [\"hello\", 1], [\"hello\", 1], [\"other\", 32] ]\n\nfor i in data:\n if data.count(i) == max(data.count(i) for i in data):\n res = i\n\nprint(res)\n\nOr this:\nres = [i for i in data if data.count(i) == max(data.count(i) for i in data)][0]\nprint(res)\n\nOutput:\n['hello', 1]\n\n"
] | [
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074665675_python.txt |
Q:
What is meant by ‘define model class’ in pytorch documentation?
On the pytorch documentation page about saving and loading models, it says that when loading a saved model, # Model class must be defined somewhere https://pytorch.org/tutorials/beginner/saving_loading_models.html#:~:text=%23%20Model%20class%20must%20be%20defined%20somewhere
Maybe my question is silly, but what does class in this context refer to? Thanks in advance.
Earlier on the page, the 'loading-of-a-model process' is described such as
Load:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
A:
You need to define the model class as, for example, explained here. Re-using the example from the linked website as a random example, a class for TheModelClass could be defined as follows:
class TheModelClass(torch.nn.Module):

    def __init__(self):
        super(TheModelClass, self).__init__()

        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x
A:
The class in that context refers to the class of the model you’re trying to load with torch.load. The class must be defined because that function will construct the model object using the model class name stored in PATH. Thus, the construction will fail if the class with that name is not defined somewhere before torch.load is executed. This process is similar to how pickle loads a .pkl file (in fact I think torch.load uses pickle by default).
Note that the model class definition is not needed if you save and load the model’s state dict (the recommended way) because state dicts are Python dicts with strings as keys and torch.Tensor as values. Dicts and strings are built-ins so they’re always defined, and torch.Tensor is always defined whenever you import torch to use torch.load.
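A minimal sketch of that recommended pattern (the file name is an assumption; TheModelClass, *args and **kwargs are taken from the question's snippet):
# saving: only parameter names and tensors go into the file
torch.save(model.state_dict(), "model_weights.pt")

# loading the file itself needs no class definition, because it is just a dict of tensors
state_dict = torch.load("model_weights.pt")

# attaching the weights to a model still requires constructing one first
model = TheModelClass(*args, **kwargs)
model.load_state_dict(state_dict)
model.eval()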
| What is meant by ‘define model class’ in pytorch documentation? | On the pytorch documentation page about saving and loading models, it says that when loading a saved model, # Model class must be defined somewhere https://pytorch.org/tutorials/beginner/saving_loading_models.html#:~:text=%23%20Model%20class%20must%20be%20defined%20somewhere
Maybe my question is silly, but what does class in this context refer to? Thanks in advance.
Earlier on the page, the 'loading-of-a-model process' is described such as
Load:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
| [
"You need to define the model class as, for example, explained here. Re-using the example from the linked website as a random example, a class for TheModelClass could be defined as follows:\nclass TheModelClass(torch.nn.Module):\n\n def __init__(self):\n super(TheModelClass, self).__init__()\n\n self.linear1 = torch.nn.Linear(100, 200)\n self.activation = torch.nn.ReLU()\n self.linear2 = torch.nn.Linear(200, 10)\n self.softmax = torch.nn.Softmax()\n\n def forward(self, x):\n x = self.linear1(x)\n x = self.activation(x)\n x = self.linear2(x)\n x = self.softmax(x)\n return x\n\n",
"The class in that context refers to the class of the model you’re trying to load with torch.load. The class must be defined because that function will construct the model object using the model class name stored in PATH. Thus, the construction will fail if the class with that name is not defined somewhere before torch.load is executed. This process is similar to how pickle loads a .pkl file (in fact I think torch.load uses pickle by default).\nNote that the model class definition is not needed if you save and load the model’s state dict (the recommended way) because state dicts are Python dicts with strings as keys and torch.Tensor as values. Dicts and strings are built-ins so they’re always defined, and torch.Tensor is always defined whenever you import torch to use torch.load.\n"
] | [
0,
0
] | [] | [] | [
"nlp",
"python",
"pytorch"
] | stackoverflow_0073339264_nlp_python_pytorch.txt |
Q:
Concatenate columns of Pandas dataframe into a new column of lists with only non-zero values
I have a Pandas dataframe that looks like:
mwe5a = pd.DataFrame({'a': [0.1, 0.0],
'b': [0.0, 0.2],
'c': [0.3, 0.0]
}
)
mwe5a
a b c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
My desired output is:
mwe5b
output_column
[0.1, 0.3]
[0.2]
How do I do that?
After that, I'd like to sort the order of a column in another Pandas dataframe based on those values, from largest value to least.
mwe7a = pd.DataFrame({'items': [ ['item1', 'item2'],
['item3']
]})
['item1', 'item2']
['item3']
which should then look like
mwe7b
['item2', 'item1']
['item3']
UPDATE:
I updated the MWE dataframes to be less confusing. So to review, I can get the following to work:
token_uniqueness_sparse = pd.DataFrame({'token_a': [0.1, 0.0],
'token_b': [0.0, 0.2],
'token c': [0.3, 0.0]
}
)
token_uniqueness_sparse
token_a token_b token c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
sf_fake = pd.DataFrame({'items': [ ['token_a', 'token_c'],
['token_b']],
'rcol': [1,2]
})
sf_fake
items rcol
0 [token_a, token_c] 1
1 [token_b] 2
token_uniqueness_dense = (token_uniqueness_sparse
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
token_uniqueness_dense
output_column
0 [0.1, 0.3]
1 [0.2]
(sf_fake.apply(lambda x: sorted(x['items'], key=lambda y: token_uniqueness_dense.loc[x.name,
'output_column'][x['items'].index(y)], reverse=True), axis=1))
So I know the solution works. But when I attempt to apply it to my actual dataframes and not the toy ones above, I get the following error:
Input In [76], in <lambda>(x)
----> 1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
Input In [76], in <lambda>.<locals>.<lambda>(y)
1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
----> 2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
IndexError: list index out of range
Any ideas what to check for?
A:
A possible solution:
mwe5b = (mwe5a
.apply(lambda x: list(x[x.ne(0)].sort_values(ascending=False)), axis=1)
.to_frame('output_column'))
Output:
output_column
0 [0.3, 0.1]
1 [0.2]
EDIT
To accomplish the goal the OP wants with mwe7a, I offer the following solution:
(mwe7a.apply(lambda x: sorted(x['items'], key=lambda y: mwe5b.loc[x.name,
'output_column'][x['items'].index(y)], reverse=True), axis=1))
To get mwe5b without sorting, as needed for getting mwe7a:
mwe5b = (mwe5a
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
Output:
0 [item2, item1]
1 [item3]
| Concatenate columns of Pandas dataframe into a new column of lists with only non-zero values | I have a Pandas dataframe that looks like:
mwe5a = pd.DataFrame({'a': [0.1, 0.0],
'b': [0.0, 0.2],
'c': [0.3, 0.0]
}
)
mwe5a
a b c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
My desired output is:
mwe5b
output_column
[0.1, 0.3]
[0.2]
How do I do that?
After that, I'd like to sort the order of a column in another Pandas dataframe based on those values, from largest value to least.
mwe7a = pd.DataFrame({'items': [ ['item1', 'item2'],
['item3']
]})
['item1', 'item2']
['item3']
which should then look like
mwe7b
['item2', 'item1']
['item3']
UPDATE:
I updated the MWE dataframes to be less confusing. So to review, I can get the following to work:
token_uniqueness_sparse = pd.DataFrame({'token_a': [0.1, 0.0],
'token_b': [0.0, 0.2],
'token c': [0.3, 0.0]
}
)
token_uniqueness_sparse
token_a token_b token c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
sf_fake = pd.DataFrame({'items': [ ['token_a', 'token_c'],
['token_b']],
'rcol': [1,2]
})
sf_fake
items rcol
0 [token_a, token_c] 1
1 [token_b] 2
token_uniqueness_dense = (token_uniqueness_sparse
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
token_uniqueness_dense
output_column
0 [0.1, 0.3]
1 [0.2]
(sf_fake.apply(lambda x: sorted(x['items'], key=lambda y: token_uniqueness_dense.loc[x.name,
'output_column'][x['items'].index(y)], reverse=True), axis=1))
So I know the solution works. But when I attempt to apply it to my actual dataframes and not the toy ones above, I get the following error:
Input In [76], in <lambda>(x)
----> 1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
Input In [76], in <lambda>.<locals>.<lambda>(y)
1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
----> 2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
IndexError: list index out of range
Any ideas what to check for?
| [
"A possible solution:\nmwe5b = (mwe5a\n .apply(lambda x: list(x[x.ne(0)].sort_values(ascending=False)), axis=1)\n .to_frame('output_column'))\n\nOutput:\n output_column\n0 [0.3, 0.1]\n1 [0.2]\n\nEDIT\nTo accomplish the goal the OP wants with mwe7a, I offer the following solution:\n(mwe7a.apply(lambda x: sorted(x['items'], key=lambda y: mwe5b.loc[x.name,\n 'output_column'][x['items'].index(y)], reverse=True), axis=1))\n\nTo get mwe5b without sorting, as needed for getting mwe7a:\nmwe5b = (mwe5a\n .apply(lambda x: list(x[x.ne(0)]), axis=1)\n .to_frame('output_column'))\n\nOutput:\n0 [item2, item1]\n1 [item3]\n\n"
] | [
2
] | [] | [] | [
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074666489_pandas_python_python_3.x.txt |
Q:
Alphabet Layers In Python
How do I multiply the layers without awkwardly repeating elif lines? I cannot get += 1 working. Or perhaps a different string approach? I'm fairly new to Python.
layer = int(input("Give a number between 2 and 26: "))
table_size = layer + layer - 1
ts = table_size
center = (ts // 2)
for row in range(ts):
for col in range(ts):
if row == col == (center):
print("A", end="")
elif (row > center or col > center \
or row < center or col < center) \
and row < center + 2 and row > center - 2 \
and col < center + 2 and col > center - 2 :
print("B", end="")
elif (row > center+1 or col > center+1 \
or row < center-1 or col < center-1) \
and row < center+3 and row > center-3 \
and col < center+3 and col > center-3 :
print(chr(67), end="")
else:
print(" ", end="")
print()
CCCCC
CBBBC
CBABC
CBBBC
CCCCC
A:
You can resort to numpy to prepare the indexation of the alphabet, and then use the prepared indexes to get your final string. This is how:
# Get your number of layers
N = int(input("Give a number between 2 and 26: "))
assert 2<=N<=26, 'Wrong number'
# INDEX PREPARATION WITH NP
import numpy as np
len_vec = np.arange(N)
horiz_vec = np.concatenate([np.flip(len_vec[1:]), len_vec])
rep_mat = np.tile(horiz_vec, [ 2*N-1, 1])
idx_mat = np.maximum(rep_mat, rep_mat.T)
# STRING CREATION: join elements in row with '', and rows with newline '\n'
from string import ascii_uppercase # 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
final_string = '\n'.join(''.join([ascii_uppercase[i] for i in row]) for row in idx_mat)
# PRINTING THE STRING
print(final_string)
An example with N=3:
#> len_vec
array([0, 1, 2])
#> horiz_vec
array([2, 1, 0, 1, 2])
#> rep_mat
array([[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2]])
#> idx_mat
array([[2, 2, 2, 2, 2],
[2, 1, 1, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 1, 1, 2],
[2, 2, 2, 2, 2]])
#> print(final_string)
CCCCC
CBBBC
CBABC
CBBBC
CCCCC
A:
This is an example with a regular python list:
from string import ascii_uppercase
result = []
# Get your number of layers
N = int(input("Give a number between 2 and 26: "))
assert 2<=N<=26, 'Wrong number'
for i in range(N):
# update existing rows
for j, string in enumerate(result):
result[j] = ascii_uppercase[i] + string + ascii_uppercase[i]
# add top and bottom row
result.append((2*i+1)*ascii_uppercase[i])
if i != 0:
result.insert(0, (2*i+1)*ascii_uppercase[i])
# print result
for line in result:
print(line)
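For example, with N = 3 this builds and prints:
CCCCC
CBBBC
CBABC
CBBBC
CCCCC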
A:
layer = int(input("Give a number between 2 and 26: "))
table_size = layer + layer - 1
ts = table_size
center = (ts // 2)
counter=0
print(center)
for row in range(ts):
for col in range(ts):
if row<=center and ts-counter>col:
outcome=65+center-min(row,col)
elif row <=center and col>=ts-counter :
outcome=65+col-center
elif row>center and ts-counter>col:
outcome=65+center-min(row,col)
elif row >center and col<counter :
outcome=65+row-center
elif row >center and col>=counter :
outcome=65+row-center+(col-counter)
print(chr(outcome), end="")
counter=counter+1
print()
A:
user_input = int(input("Layers: "))
center = 25
layer = user_input - 1
counter = 0
import string
string_x = ""
alphabet = 26
list_of_letters = [True]
while alphabet != (-1):
string_x = string_x + string.ascii_uppercase[alphabet-1]*alphabet
string_y = string_x[::-1]
string_y = string_y[1:len(string_y)]
alphabet = alphabet - 1
string_z = string_x + string_y
list_of_letters.append(string_z)
string_x = string_x[0:26-alphabet]
dictionary = { }
variable = 0
for number in range(1,27):
dictionary[number] = 24 - variable
variable = variable + 1
differential = user_input - dictionary[user_input]
counter = user_input - differential + 2
helper_variable = counter
while counter != 26:
print(list_of_letters[counter][center-layer:center+user_input])
counter = counter + 1
while counter != helper_variable - 1:
print(list_of_letters[counter][center-layer:center+user_input])
counter = counter - 1
You can make this box of letters by creating a list with elements from 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ' to 'ZYXWVUTSRQPONMLKJIHGFEDCBABCDEFGHIJKLMNOPQRSTUVWXYZ'. And then find a way to reference and print these strings exactly as many times as you want considering that 'A' will be 25th and you add layers with neighbouring letters. Do it in both directions using while-loop and helper variables.
| Alphabet Layers In Python | How to multiply layers without ankwardly repeating elif lines? Cannot get += 1 working. Or perhaps different string approach? I'm certainly new in Python.
layer = int(input("Give a number between 2 and 26: "))
table_size = layer + layer - 1
ts = table_size
center = (ts // 2)
for row in range(ts):
for col in range(ts):
if row == col == (center):
print("A", end="")
elif (row > center or col > center \
or row < center or col < center) \
and row < center + 2 and row > center - 2 \
and col < center + 2 and col > center - 2 :
print("B", end="")
elif (row > center+1 or col > center+1 \
or row < center-1 or col < center-1) \
and row < center+3 and row > center-3 \
and col < center+3 and col > center-3 :
print(chr(67), end="")
else:
print(" ", end="")
print()
CCCCC
CBBBC
CBABC
CBBBC
CCCCC
| [
"You can resort to numpy to prepare the indexation of the alphabet, and then use the prepared indexes to get your final string. This is how:\n# Get your number of layers\nN = int(input(\"Give a number between 2 and 26: \"))\nassert 2<=N<=26, 'Wrong number'\n\n# INDEX PREPARATION WITH NP\nimport numpy as np\nlen_vec = np.arange(N) \nhoriz_vec = np.concatenate([np.flip(len_vec[1:]), len_vec]) \nrep_mat = np.tile(horiz_vec, [ 2*N-1, 1])\nidx_mat = np.maximum(rep_mat, rep_mat.T)\n\n# STRING CREATION: join elements in row with '', and rows with newline '\\n'\nfrom string import ascii_uppercase # 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\nfinal_string = '\\n'.join(''.join([ascii_uppercase[i] for i in row]) for row in idx_mat)\n\n# PRINTING THE STRING\nprint(final_string)\n\nAn example with N=3:\n#> len_vec\narray([0, 1, 2])\n#> horiz_vec\narray([2, 1, 0, 1, 2])\n#> rep_mat\narray([[2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2]])\n#> idx_mat\narray([[2, 2, 2, 2, 2],\n [2, 1, 1, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 1, 1, 2],\n [2, 2, 2, 2, 2]])\n#> print(final_string)\nCCCCC\nCBBBC\nCBABC\nCBBBC\nCCCCC\n\n",
"This is an example with a regular python list:\nfrom string import ascii_uppercase\n\nresult = []\n\n# Get your number of layers\nN = int(input(\"Give a number between 2 and 26: \"))\nassert 2<=N<=26, 'Wrong number'\n\nfor i in range(N):\n # update existing rows\n for j, string in enumerate(result):\n result[j] = ascii_uppercase[i] + string + ascii_uppercase[i]\n\n # add top and bottom row\n result.append((2*i+1)*ascii_uppercase[i])\n if i != 0:\n result.insert(0, (2*i+1)*ascii_uppercase[i])\n \n# print result\nfor line in result:\n print(line)\n\n",
" layer = int(input(\"Give a number between 2 and 26: \"))\ntable_size = layer + layer - 1\nts = table_size\ncenter = (ts // 2)\ncounter=0\nprint(center)\nfor row in range(ts):\n for col in range(ts):\n if row<=center and ts-counter>col:\n outcome=65+center-min(row,col)\n elif row <=center and col>=ts-counter :\n outcome=65+col-center \n elif row>center and ts-counter>col:\n outcome=65+center-min(row,col) \n elif row >center and col<counter : \n outcome=65+row-center\n elif row >center and col>=counter : \n outcome=65+row-center+(col-counter) \n \n print(chr(outcome), end=\"\")\n counter=counter+1 \n \n print()\n\n",
"user_input = int(input(\"Layers: \"))\ncenter = 25\nlayer = user_input - 1\ncounter = 0\n\nimport string\nstring_x = \"\"\nalphabet = 26\nlist_of_letters = [True]\nwhile alphabet != (-1):\n string_x = string_x + string.ascii_uppercase[alphabet-1]*alphabet\n string_y = string_x[::-1]\n string_y = string_y[1:len(string_y)]\n alphabet = alphabet - 1\n string_z = string_x + string_y \n list_of_letters.append(string_z)\n string_x = string_x[0:26-alphabet]\n\ndictionary = { }\nvariable = 0\nfor number in range(1,27):\n dictionary[number] = 24 - variable\n variable = variable + 1\n\ndifferential = user_input - dictionary[user_input]\ncounter = user_input - differential + 2\nhelper_variable = counter\n\nwhile counter != 26:\n print(list_of_letters[counter][center-layer:center+user_input])\n counter = counter + 1\nwhile counter != helper_variable - 1:\n print(list_of_letters[counter][center-layer:center+user_input])\n counter = counter - 1\n\nYou can make this box of letters by creating a list with elements from 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ' to 'ZYXWVUTSRQPONMLKJIHGFEDCBABCDEFGHIJKLMNOPQRSTUVWXYZ'. And then find a way to reference and print these strings exactly as many times as you want considering that 'A' will be 25th and you add layers with neighbouring letters. Do it in both directions using while-loop and helper variables.\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"alphabet",
"design_patterns",
"layer",
"loops",
"python"
] | stackoverflow_0067938383_alphabet_design_patterns_layer_loops_python.txt |
Q:
PyDev and Django: how to restart dev server?
I'm new to Django. I think I'm making a simple mistake.
I launched the dev server with Pydev:
RClick on project >> Django >> Custom command >> runserver
The server came up, and everything was great. But now I'm trying to stop it, and can't figure out how. I stopped the process in the PyDev console, and closed Eclipse, but web pages are still being served from http://127.0.0.1:8000.
I launched and quit the server from the command line normally:
python manage.py runserver
But the server is still up. What am I doing wrong here?
A:
By default, the runserver command runs in autoreload mode, which runs in a separate process. This means that PyDev doesn't know how to stop it, and doesn't display its output in the console window.
If you run the command runserver --noreload instead, the auto-reloader will be disabled. Then you can see the console output and stop the server normally. However, this means that changes to your Python files won't be effective until you manually restart the server.
A:
Run the project: 1. Right click on the project (not subfolders). 2. Run As > PyDev: Django.
Terminate: 1. Click Terminate in the console window.
The server is down.
A:
I usually run it from console. Running from PyDev adds unnecessary confusion, and doesn't bring any benefit until you happen to use PyDev's GUI interactive debugging.
A:
Edit: Latest PyDev versions (since PyDev 3.4.1) no longer need any workaround:
i.e.: PyDev will properly kill subprocesses on a kill process operation and when debugging even with regular reloading on, PyDev will attach the debugger to the child processes.
Old answer (for PyDev versions older than 3.4.1):
Unfortunately, that's expected, as PyDev will simply kill the parent process (i.e.: as if instead of ctrl+C you kill the parent process in the task manager).
The solution would be editing Django itself so that the child process polls the parent process to know it's still alive and exit if it's not... see: How to make child process die after parent exits? for a reference.
After a quick look it seems related to django/utils/autoreload.py and the way it starts up things -- so, it'd be needed to start a thread that keeps seeing if the parent is alive and if it's not it kills the child process -- I've reported that as a bug in Django itself: https://code.djangoproject.com/ticket/16982
Note: as a workaround for PyDev, you can make Django allocate a new console (out of PyDev) while still running from PyDev (so, until a proper solution is available from Django, the patch below can be used to make the Django autoreload allocate a new console -- where you can properly use Ctrl+C).
Index: django/utils/autoreload.py
===================================================================
--- django/utils/autoreload.py (revision 16923)
+++ django/utils/autoreload.py (working copy)
@@ -98,11 +98,14 @@
def restart_with_reloader():
while True:
args = [sys.executable] + ['-W%s' % o for o in sys.warnoptions] + sys.argv
- if sys.platform == "win32":
- args = ['"%s"' % arg for arg in args]
new_environ = os.environ.copy()
new_environ["RUN_MAIN"] = 'true'
- exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ)
+
+ import subprocess
+ popen = subprocess.Popen(args, env=new_environ, creationflags=subprocess.CREATE_NEW_CONSOLE)
+ exit_code = popen.wait()
if exit_code != 3:
return exit_code
A:
Solution: create an interpreter error in some project file. This will cause the server to crash. Server can then be restarted as normal.
A:
If you operate on Windows using the CMD: Quit the server with CTRL+BREAK.
python manage.py runserver localhost:8000
A:
You can quit by pressing the Ctrl+Pause keys. Note that the Pause key might be called Break, and on some laptops it is produced with the combination Fn + F12. Hope this helps.
A:
run sudo lsof -i:8000
then run kill -9 <PID>, which should kill the processes running that server.
Then you can python manage.py runserver on that port again.
| PyDev and Django: how to restart dev server? | I'm new to Django. I think I'm making a simple mistake.
I launched the dev server with Pydev:
RClick on project >> Django >> Custom
command >> runserver
The server came up, and everything was great. But now I'm trying to stop it, and can't figure out how. I stopped the process in the PyDev console, and closed Eclipse, but web pages are still being served from http://127.0.0.1:8000.
I launched and quit the server from the command line normally:
python manage.py runserver
But the server is still up. What am I doing wrong here?
| [
"By default, the runserver command runs in autoreload mode, which runs in a separate process. This means that PyDev doesn't know how to stop it, and doesn't display its output in the console window.\nIf you run the command runserver --noreload instead, the auto-reloader will be disabled. Then you can see the console output and stop the server normally. However, this means that changes to your Python files won't be effective until you manually restart the server.\n",
"Run the project 1. Right click on the project (not subfolders) 2. Run As > Pydev:Django\nTerminate 1. Click terminate in console window\nThe server is down\n",
"I usually run it from console. Running from PyDev adds unnecessary confusion, and doesn't bring any benefit until you happen to use PyDev's GUI interactive debugging.\n",
"Edit: Latest PyDev versions (since PyDev 3.4.1) no longer need any workaround:\ni.e.: PyDev will properly kill subprocesses on a kill process operation and when debugging even with regular reloading on, PyDev will attach the debugger to the child processes.\n\nOld answer (for PyDev versions older than 3.4.1):\nUnfortunately, that's expected, as PyDev will simply kill the parent process (i.e.: as if instead of ctrl+C you kill the parent process in the task manager).\nThe solution would be editing Django itself so that the child process polls the parent process to know it's still alive and exit if it's not... see: How to make child process die after parent exits? for a reference.\nAfter a quick look it seems related to django/utils/autoreload.py and the way it starts up things -- so, it'd be needed to start a thread that keeps seeing if the parent is alive and if it's not it kills the child process -- I've reported that as a bug in Django itself: https://code.djangoproject.com/ticket/16982\nNote: as a workaround for PyDev, you can make Django allocate a new console (out of PyDev) while still running from PyDev (so, until a proper solution is available from Django, the patch below can be used to make the Django autoreload allocate a new console -- where you can properly use Ctrl+C).\nIndex: django/utils/autoreload.py\n===================================================================\n--- django/utils/autoreload.py (revision 16923)\n+++ django/utils/autoreload.py (working copy)\n@@ -98,11 +98,14 @@\n def restart_with_reloader():\n while True:\n args = [sys.executable] + ['-W%s' % o for o in sys.warnoptions] + sys.argv\n- if sys.platform == \"win32\":\n- args = ['\"%s\"' % arg for arg in args]\n new_environ = os.environ.copy()\n new_environ[\"RUN_MAIN\"] = 'true'\n- exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ)\n+\n+ import subprocess\n+ popen = subprocess.Popen(args, env=new_environ, creationflags=subprocess.CREATE_NEW_CONSOLE)\n+ exit_code = popen.wait()\n if exit_code != 3:\n return exit_code\n\n",
"Solution: create an interpreter error in some project file. This will cause the server to crash. Server can then be restarted as normal.\n",
"If you operate on Windows using the CMD: Quit the server with CTRL+BREAK.\npython manage.py runserver localhost:8000\n\n",
"you can quit by clicking Ctrl+ Pause keys. Note that the Pause key might be called Break and in some laptops it is made using the combination Fn + F12. Hope this might helps.\n",
"run sudo lsof -i:8000\nthen run kill -9 #PID should work to kill the processes running that server.\nthen you can python manage.py server on that port again\n"
] | [
14,
5,
4,
3,
2,
1,
0,
0
] | [] | [] | [
"devserver",
"django",
"eclipse",
"pydev",
"python"
] | stackoverflow_0002746512_devserver_django_eclipse_pydev_python.txt |
Q:
Hi I am new to python programming. I have written the following code but I keep getting this error. Can anyone help me at all please?
count = 1
total = 0
average = 0
array = []
while input("Enter q to quit or any other key to continue: ") != "q":
numlist = input('Enter number\n')
array.append(numlist)
try:
count = count + 1
total = total + float(numlist)
except:
count = count - 1
print('Enter a valid number')
continue
average = float(total) / float(count)
array.sort()
mid = len(array) // 2
res = (array[mid] + array[~mid]) / 2
print('Avg:', average)
print("The median is : ", res)
I get this following error:
Traceback (most recent call last):
File "<string>", line 22, in <module>
TypeError: unsupported operand type(s) for /: 'str' and 'int'
I was expecting to get 'Enter a valid number' when the user enters anything but a number.
A:
The input function returns a string even if you actually type a number:
https://docs.python.org/3/library/functions.html#input
You need to convert that string to number before appending to array, for instance:
array.append(float(numlist))
but it should happen inside the try/except block so your validation checks still work.
That way the list will contain only actual numbers, not everything that has been typed.
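A minimal sketch of that change applied to the loop from the question (only the conversion and appending part is shown; the rest stays as in the original code):

while input("Enter q to quit or any other key to continue: ") != "q":
    numlist = input('Enter number\n')
    try:
        number = float(numlist)       # conversion fails here for non-numeric input
    except ValueError:
        print('Enter a valid number')
        continue
    array.append(number)              # only actual numbers reach the list
    count = count + 1
    total = total + number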
| Hi I am new to python programming. I have written the following code but I keep getting this error. Can anyone help me at all please? | count = 1
total = 0
average = 0
array = []
while input("Enter q to quit or any other key to continue: ") != "q":
numlist = input('Enter number\n')
array.append(numlist)
try:
count = count + 1
total = total + float(numlist)
except:
count = count - 1
print('Enter a valid number')
continue
average = float(total) / float(count)
array.sort()
mid = len(array) // 2
res = (array[mid] + array[~mid]) / 2
print('Avg:', average)
print("The median is : ", res)
I get this following error:
Traceback (most recent call last):
File "<string>", line 22, in <module>
TypeError: unsupported operand type(s) for /: 'str' and 'int'
I was expecting to get 'enter a valid number' when the user enters anything but number.
| [
"An input function is returning a string even though you actually type a number:\nhttps://docs.python.org/3/library/functions.html#input\nYou need to convert that string to number before appending to array, for instance:\narray.append(float(numlist))\n\nbut it should be in try / except block so your validation checks also work.\nIn this case you will be indexing only actual numbers, not everything that has been typed.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074666567_python.txt |
Q:
Assigned a complex value in cupy RawKernel
I am a beginner learning how to use the GPU for parallel computation with Python and CuPy. I would like to implement code to simulate some problems in physics and need to use complex numbers, but I don't know how to manage them. Although there are examples in CuPy's official documentation, it only mentions including the complex.cuh library and how to declare a complex variable. I can't find any example of how to assign a complex number correctly, or of how to call the functions in the complex.cuh library to do calculations.
I am stuck on line 11 of this code. I want to make the complex number value equal to x[tId_x] + j*y[tId_y], where j is the imaginary unit. I tried several ways and none of them works, so I left this one here.
import cupy as cp
import time
add_kernel = cp.RawKernel(r'''
#include <cupy/complex.cuh>
extern "C" __global__
void test(double* x, double* y, complex<float>* z){
int tId_x = blockDim.x*blockIdx.x + threadIdx.x;
int tId_y = blockDim.y*blockIdx.y + threadIdx.y;
complex<float>* value = complex(x[tId_x],y[tId_y]);
z[tId_x*blockDim.y*gridDim.y+tId_y] = value;
}''',"test")
x = cp.random.rand(1,8,4096,dtype = cp.float32)
y = cp.random.rand(1,8,4096,dtype = cp.float32)
z = cp.zeros((4096,4096), dtype = cp.complex64)
t1 = time.time()
add_kernel((128,128),(32,32),(x,y,z))
print(time.time()-t1)
What is the proper way to assign a complex number in the RawKernel?
Thank you for answering this question!
A:
@plaeonix, thank you very much for your hint. I found the answer.
This line:
complex<float>* value = complex(x[tId_x],y[tId_y])
should be replaced to:
complex<float> value = complex<float>(x[tId_x],y[tId_y])
Then the assignment of a complex number works.
| Assigned a complex value in cupy RawKernel | I am a beginner learning how to exploit GPU for parallel computation using python and cupy. I would like to implement my code to simulate some problems in physics and require to use complex number, but don't know how to manage it. Although there are examples in Cupy's official document, it only mentions about include complex.cuh library and how to declare a complex variable. I can't find any example about how to assign a complex number correctly, as well ass how to call the function in the complex.cuh library to do calculation.
I am stuck in line 11 of this code. I want to make a complex number value equal x[tIdx]+j*y[t_Idx], j is the imaginary number. I tried several ways and no one works, so I left this one here.
import cupy as cp
import time
add_kernel = cp.RawKernel(r'''
#include <cupy/complex.cuh>
extern "C" __global__
void test(double* x, double* y, complex<float>* z){
int tId_x = blockDim.x*blockIdx.x + threadIdx.x;
int tId_y = blockDim.y*blockIdx.y + threadIdx.y;
complex<float>* value = complex(x[tId_x],y[tId_y]);
z[tId_x*blockDim.y*gridDim.y+tId_y] = value;
}''',"test")
x = cp.random.rand(1,8,4096,dtype = cp.float32)
y = cp.random.rand(1,8,4096,dtype = cp.float32)
z = cp.zeros((4096,4096), dtype = cp.complex64)
t1 = time.time()
add_kernel((128,128),(32,32),(x,y,z))
print(time.time()-t1)
What is the proper way to assign a complex number in the RawKernel?
Thank you for answering this question!
| [
"@plaeonix, thank you very much for your hint. I find out the answer.\nThis line:\ncomplex<float>* value = complex(x[tId_x],y[tId_y])\nshould be replaced to:\ncomplex<float> value = complex<float>(x[tId_x],y[tId_y])\nThen the assignment of a complex number works.\n"
] | [
1
] | [] | [] | [
"cuda",
"cupy",
"python"
] | stackoverflow_0074654285_cuda_cupy_python.txt |
Q:
Why am I getting IndexError: list index out of range?
Complete the solution so that it splits the string into pairs of two characters. If the string contains an odd number of characters then it should replace the missing second character of the final pair with an underscore ('_').
Examples:
'abc' => ['ab', 'c_']
'abcdef' => ['ab', 'cd', 'ef']
https://prnt.sc/E2sdtceLtkmF
My Code:
def solution(s):
    n = 2
    sp = [s[index : index + n] for index in range(0, len(s), n)]
    if len(sp[-1]) == 1:
        sp[-1] = sp[-1] + "_"
        return sp
    else:
        return sp
and I get this error:
Traceback (most recent call last):
File "/workspace/default/tests.py", line 13, in <module>
test.assert_equals(solution(inp), exp)
File "/workspace/default/solution.py", line 5, in solution
if len(sp[-1]) == 1:
IndexError: list index out of range
Please, someone help.
A:
It's fixed by adding another if condition which checks whether sp is empty or not:
def solution(s):
n = 2
sp = [s[index : index + n] for index in range(0, len(s), n)]
if len(sp) == 0:
return sp
if len(sp[-1]) == 1:
sp[-1] = sp[-1] + "_"
return sp
else:
return sp
A:
You need to test for the possibility that the input parameter is an empty string.
def solution(s):
n = 2
sp = [s[index : index + n] for index in range(0, len(s), n)]
if sp and len(sp[-1]) == 1:
sp[-1] += '_'
return sp
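With that guard in place, for example, solution('') returns [] instead of raising an IndexError, while solution('abc') still returns ['ab', 'c_'].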
| why i getting IndexError: list index out of range | Complete the solution so that it splits the string into pairs of two characters. If the string contains an odd number of characters then it should replace the missing second character of the final pair with an underscore ('_').
Examples:
'abc' => ['ab', 'c_']
'abcdef' => ['ab', 'cd', 'ef']
https://prnt.sc/E2sdtceLtkmF
# **My Code:**
def solution(s):
```n = 2
```sp = [s[index : index + n] for index in range(0, len(s), n)]
```if len(sp[-1]) == 1:
sp[-1] = sp[-1] + "_"
```return sp
```else:
```return sp
and i geting this error:
Traceback (most recent call last):
File "/workspace/default/tests.py", line 13, in <module>
test.assert_equals(solution(inp), exp)
File "/workspace/default/solution.py", line 5, in solution
if len(sp[-1]) == 1:
IndexError: list index out of range
# pls someone help
| [
"its fixed by adding another if condition which checking sp is empty or not\ndef solution(s):\n n = 2\n sp = [s[index : index + n] for index in range(0, len(s), n)]\n\n if len(sp) == 0:\n return sp\n\n if len(sp[-1]) == 1:\n sp[-1] = sp[-1] + \"_\"\n return sp\n\n else:\n return sp\n\n",
"You need to test for the possibility that the input parameter is an empty string.\ndef solution(s):\n n = 2\n sp = [s[index : index + n] for index in range(0, len(s), n)]\n if sp and len(sp[-1]) == 1:\n sp[-1] += '_'\n return sp\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074666426_python.txt |
Q:
Can't install lxml package on Windows 11
"PS D:\Complete-Python-3-Bootcamp-master\12-Advanced Python Modules\puzzle_unzip> pip install lxml
Collecting lxml
Using cached lxml-4.9.1.tar.gz (3.4 MB)
Preparing metadata (setup.py) ... done
Installing collected packages: lxml
DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for lxml ... error
error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [96 lines of output]
Building lxml version 4.9.1.
Building without Cython.
Building against pre-built libxml2 andl libxslt libraries
running install
C:\Users\lohar\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\lxml
copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml
creating build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html
creating build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build\temp.win-amd64-cpython-311
creating build\temp.win-amd64-cpython-311\Release
creating build\temp.win-amd64-cpython-311\Release\src
creating build\temp.win-amd64-cpython-311\Release\src\lxml
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\lohar\AppData\Local\Programs\Python\Python311\include -IC:\Users\lohar\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w
cl : Command line warning D9025 : overriding '/W3' with '/w'
etree.c
C:\Users\lohar\AppData\Local\Temp\pip-install-v8_cypj7\lxml_b1e7951ab83046e384fffcd4610d3736\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory
Compile failed: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
creating Users
creating Users\lohar
creating Users\lohar\AppData
creating Users\lohar\AppData\Local
creating Users\lohar\AppData\Local\Temp
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" /TcC:\Users\lohar\AppData\Local\Temp\xmlXPathInituop21067.c /FoUsers\lohar\AppData\Local\Temp\xmlXPathInituop21067.obj
xmlXPathInituop21067.c
C:\Users\lohar\AppData\Local\Temp\xmlXPathInituop21067.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
PS D:\Complete-Python-3-Bootcamp-master\12-Advanced Python Modules\puzzle_unzip>
I'm trying to install the lxml library with pip install lxml. I also installed VS Build Tools 2022. After that I got stuck on this error. I tried multiple things but they didn't work: manually installing packages, and multiple solutions from the internet.
I'm expecting a solution for installing lxml on a Windows 11 machine. I'm using VS Code and PyCharm, Python version 3.11 and pip version 22.3.1.
A:
The Python lxml module is a language-binding / wrapper for two C libraries.
For Windows they provide binary builds that include these libraries. Otherwise it will be pain and suffering getting it installed and running on Windows. Because it's Windows. "Developers, developers, developers".. (As lxml developers put it: "users of that platform usually fail to build lxml themselves")
Normally you should get the binary distribution when doing install through pip but in this case you don't.
Try to pin an older version, maybe binaries are available for it:
pip install lxml==4.9.0
Try to download the lxml binary distribution by Christoph Gohlke available here.
You can install the wheel file also via pip.
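For example, after downloading a wheel that matches your Python version, something like the following (the exact filename here is only illustrative; use the file you actually downloaded):
pip install C:\path\to\lxml-4.9.0-cp311-cp311-win_amd64.whl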
Sources:
Where are the binary builds?
Source builds on MS Windows
| Can;t install lxml package on windwos 11 | "PS D:\Complete-Python-3-Bootcamp-master\12-Advanced Python Modules\puzzle_unzip> pip install lxml
Collecting lxml
Using cached lxml-4.9.1.tar.gz (3.4 MB)
Preparing metadata (setup.py) ... done
Installing collected packages: lxml
DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for lxml ... error
error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [96 lines of output]
Building lxml version 4.9.1.
Building without Cython.
Building against pre-built libxml2 andl libxslt libraries
running install
C:\Users\lohar\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\lxml
copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml
creating build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html
creating build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build\temp.win-amd64-cpython-311
creating build\temp.win-amd64-cpython-311\Release
creating build\temp.win-amd64-cpython-311\Release\src
creating build\temp.win-amd64-cpython-311\Release\src\lxml
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\lohar\AppData\Local\Programs\Python\Python311\include -IC:\Users\lohar\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w
cl : Command line warning D9025 : overriding '/W3' with '/w'
etree.c
C:\Users\lohar\AppData\Local\Temp\pip-install-v8_cypj7\lxml_b1e7951ab83046e384fffcd4610d3736\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory
Compile failed: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
creating Users
creating Users\lohar
creating Users\lohar\AppData
creating Users\lohar\AppData\Local
creating Users\lohar\AppData\Local\Temp
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" /TcC:\Users\lohar\AppData\Local\Temp\xmlXPathInituop21067.c /FoUsers\lohar\AppData\Local\Temp\xmlXPathInituop21067.obj
xmlXPathInituop21067.c
C:\Users\lohar\AppData\Local\Temp\xmlXPathInituop21067.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
PS D:\Complete-Python-3-Bootcamp-master\12-Advanced Python Modules\puzzle_unzip> "
im trying to install lxml library by pip install lxml.
i also installed vs build tools 2022 .
after that i stuck on this error i tryed multiple things but they dont work
thigs that i tried manually installing packages.
multiple internet solutions
im expecting a solution to install lxml on W11 machine.and also im using vs code and pycharm python version 3.11 and pip version 22.3.1
| [
"The Python lxml module is a language-binding / wrapper for two C libraries.\nFor Windows they provide binary builds that include these libraries. Otherwise it will be pain and suffering getting it installed and running on Windows. Because it's Windows. \"Developers, developers, developers\".. (As lxml developers put it: \"users of that platform usually fail to build lxml themselves\")\nNormally you should get the binary distribution when doing install through pip but in this case you don't.\n\nTry to pin an older version, maybe binaries are available for it:\npip install lxml==4.9.0\n\n\nTry to download the lxml binary distribution by Christoph Gohlke available here.\nYou can install the wheel file also via pip.\n\n\nSources:\n\nWhere are the binary builds?\nSource builds on MS Windows\n\n"
] | [
1
] | [] | [] | [
"lxml",
"pip",
"python",
"python_3.x"
] | stackoverflow_0074666576_lxml_pip_python_python_3.x.txt |
Q:
How to get data from the model
I need to get data from a model in Django. When I specify filtering, an error occurs. I'm doing a project with TV series and movies. When clicking on any of the listed categories, I need to get the data for that category, that is, which films belong to it.
I have been trying to fix this problem, but nothing has helped so far.
A:
I did not understand your situation completely, but this is what I did when I was in a similar situation.
def gallery(request):
category = request.GET.get('category')
if category == None:
photos = Photo.objects.all()
else:
photos = Photo.objects.filter(category__name=category)
categorys = Category.objects.all()
context = {
'categorys':categorys,
'photos':photos,
}
return render(request, 'photos/gallery.html', context)
You can read more about this in the Django documentation on making queries.
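If it helps, here is a rough sketch of the models this view assumes (the field names are assumptions, adapt them to your movie/series project); the ForeignKey is what makes the category__name=category lookup work:
from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=100)

class Photo(models.Model):
    # each Photo/movie belongs to one Category; filtering follows this relation
    category = models.ForeignKey(Category, null=True, blank=True,
                                 on_delete=models.SET_NULL)

With that relation in place, Photo.objects.filter(category__name=category) returns only the objects that belong to the clicked category.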
| How to get data from the model | I need to get data from a model in Django. I specify filtering, an error occurs. I'm doing a project with TV series and movies. When clicking on any of the listed categories, I need to take data from this category. That is, which films belong to this category.
enter image description here
enter image description here
I am trying to fix this problems but it didnt help me
| [
"I did not understand your situation well. But that's what I did when I was in that situation.\ndef gallery(request):\n category = request.GET.get('category')\n if category == None:\n photos = Photo.objects.all()\n else:\n photos = Photo.objects.filter(category__name=category)\n \n categorys = Category.objects.all()\n context = {\n 'categorys':categorys,\n 'photos':photos,\n }\n return render(request, 'photos/gallery.html', context)\n\nMaking queries about more\n"
] | [
0
] | [] | [] | [
"django",
"model",
"python",
"view"
] | stackoverflow_0074665982_django_model_python_view.txt |
Q:
Why does the function return each element of the list on a new line?
So I have this code to separate only ints or floats out of a file and add them to a list. However, when it returns the list, it returns each element on a new line and not the entire list on the same line, and I'm wondering why.
the list looks kind of like this:
12 w 21 d23g780nb deed e2 21.87
43 91 - . 222 mftg 21 bx .1 3 g d e 6 de ddd32 3412
def read_numbers(path: str) -> list:
with open(path) as f:
file_elem = f.read().split()
a = []
for x in file_elem:
if x.isnumeric():
a.append(int(x))
elif "." in x:
b = x.replace(".","")
if b.isnumeric():
a.append(float(x))
return a
If I remove the part that looks for floats, it will only add the ints to the list, but it will return the list as intended:
Out[101]: [12, 21, 43, 91, 222, 21, 3, 6, 3412, 0, 0, 0, 1, 70, 12, 1, 9, 445, 100]
However, when adding the floats, the entire list seems to get messed up, with each element printed on its own line, and I'm wondering why?
A:
I created a file like this:
file a:
12 43 12.145 546 23 76 5.54 231.1 32
then I run your code like this:
In [3]: def read_numbers(path: str) -> list:
...: with open(path) as f:
...: file_elem = f.read().split()
...: a = []
...: for x in file_elem:
...: if x.isnumeric():
...: a.append(int(x))
...: elif "." in x:
...: b = x.replace(".","")
...: if b.isnumeric():
...: a.append(float(x))
...: return a
...:
In [4]: read_numbers('./a')
Out[4]: [12, 43, 12.145, 546, 23, 76, 5.54, 231.1, 32]
And so everything is OK.
UPDATE:
I changed file like this:
new file a:
12 43 12.145 546 23 76 5.54 231.1 32
ad
adfad ga 235 1.12
adf a1 12 si
124 sd 1.12
and so output is like this:
In [5]: read_numbers('./a')
Out[5]: [12, 43, 12.145, 546, 23, 76, 5.54, 231.1, 32, 235, 1.12, 12, 124, 1.12]
| Why does the function return each element of the list on a new line? | So i have this code right here to separate only ints or floats out of a file and add them to a list, however when it returns the list, it returns each element on a new line and not the entire list on the same line, and I'm wondering why?
the list looks kind of like this:
12 w 21 d23g780nb deed e2 21.87
43 91 - . 222 mftg 21 bx .1 3 g d e 6 de ddd32 3412
def read_numbers(path: str) -> list:
with open(path) as f:
file_elem = f.read().split()
a = []
for x in file_elem:
if x.isnumeric():
a.append(int(x))
elif "." in x:
b = x.replace(".","")
if b.isnumeric():
a.append(float(x))
return a
If I remove the looking for floats part it will only add the ints to the list but it will return the list as intended,
Out[101]: [12, 21, 43, 91, 222, 21, 3, 6, 3412, 0, 0, 0, 1, 70, 12, 1, 9, 445, 100]
however when adding the floats, it seems that the entire list gets messed up like this
and i'm wondering why?
| [
"I created a file like this:\nfile a:\n12 43 12.145 546 23 76 5.54 231.1 32\n\nthen I run your code like this:\nIn [3]: def read_numbers(path: str) -> list:\n ...: with open(path) as f:\n ...: file_elem = f.read().split()\n ...: a = []\n ...: for x in file_elem:\n ...: if x.isnumeric():\n ...: a.append(int(x))\n ...: elif \".\" in x:\n ...: b = x.replace(\".\",\"\")\n ...: if b.isnumeric():\n ...: a.append(float(x))\n ...: return a\n ...:\n\nIn [4]: read_numbers('./a')\nOut[4]: [12, 43, 12.145, 546, 23, 76, 5.54, 231.1, 32]\n\nAnd so everything is OK.\n\nUPDATE:\nI changed file like this:\nnew file a:\n12 43 12.145 546 23 76 5.54 231.1 32\nad\nadfad ga 235 1.12\nadf a1 12 si\n124 sd 1.12\n\nand so output is like this:\nIn [5]: read_numbers('./a')\nOut[5]: [12, 43, 12.145, 546, 23, 76, 5.54, 231.1, 32, 235, 1.12, 12, 124, 1.12]\n\n"
] | [
1
] | [] | [] | [
"append",
"list",
"python"
] | stackoverflow_0074666578_append_list_python.txt |
Q:
Reauthentication failed error while accessing bigquery via python
I am trying to access BigQuery using Python. Even after executing "gcloud auth login", I am getting the error below:
google.auth.exceptions.ReauthFailError: Reauthentication failed. Reauthentication challenge could not be answered because you are not in an interactive session.
What can be the issue here?
A:
You can solve this problem by creating a service account and setting up the Cloud SDK to use the service account.
Example command:
gcloud auth activate-service-account account-name --key-file=/fullpath/service-account.json
Another way is to set the environment variable below so the Python script can use the service account while accessing BigQuery.
Example command:
export GOOGLE_APPLICATION_CREDENTIALS=/fullpath/service-account.json
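As a rough sketch (the key-file path is a placeholder, and the test query is just an illustration), the service-account file can also be passed directly to the BigQuery client in Python, which avoids relying on gcloud auth login entirely:
from google.cloud import bigquery
from google.oauth2 import service_account

# path to the service-account key file (placeholder)
credentials = service_account.Credentials.from_service_account_file(
    "/fullpath/service-account.json"
)
client = bigquery.Client(credentials=credentials, project=credentials.project_id)

# simple test query
rows = client.query("SELECT 1 AS x").result()
for row in rows:
    print(row.x)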
| Reauthentication failed error while accessing bigquery via python | i am trying to access bigquery using python . even though after executing "gcloud auth login"
getting below error.
google.auth.exceptions.ReauthFailError: Reauthentication failed. Reauthentication challenge could not be answered because you are not in an interactive session.
what can be issue here
| [
"You can solve this problem by creating a service account and set up the Cloud SDK to use the service account.\nExample command:\ngcloud auth activate-service-account account-name --key-file=/fullpath/service-account.json\n\nOther way is to set up the environment variables for the Python script to use while accessing BigQuery.\nExample command:\nexport GOOGLE_APPLICATION_CREDENTIALS=/fullpath/service-account.json\n\n"
] | [
0
] | [] | [] | [
"google_bigquery",
"google_cloud_platform",
"python"
] | stackoverflow_0074475900_google_bigquery_google_cloud_platform_python.txt |
Q:
List comprehension for running total
I want to get a running total from a list of numbers.
For demo purposes, I start with a sequential list of numbers using range
a = range(20)
runningTotal = []
for n in range(len(a)):
new = runningTotal[n-1] + a[n] if n > 0 else a[n]
runningTotal.append(new)
# This one is a syntax error
# runningTotal = [a[n] for n in range(len(a)) if n == 0 else runningTotal[n-1] + a[n]]
for i in zip(a, runningTotal):
print "{0:>3}{1:>5}".format(*i)
yields
0 0
1 1
2 3
3 6
4 10
5 15
6 21
7 28
8 36
9 45
10 55
11 66
12 78
13 91
14 105
15 120
16 136
17 153
18 171
19 190
As you can see, I initialize an empty list [], then append() in each loop iteration. Is there a more elegant way to do this, like a list comprehension?
A:
A list comprehension has no good (clean, portable) way to refer to the very list it's building. One good and elegant approach might be to do the job in a generator:
def running_sum(a):
tot = 0
for item in a:
tot += item
yield tot
to get this as a list instead, of course, use list(running_sum(a)).
A:
If you can use numpy, it has a built-in function named cumsum that does this.
import numpy as np
tot = np.cumsum(a) # returns a np.ndarray
tot = list(tot) # if you prefer a list
A:
I'm not sure about 'elegant', but I think the following is much simpler and more intuitive (at the cost of an extra variable):
a = range(20)
runningTotal = []
total = 0
for n in a:
total += n
runningTotal.append(total)
The functional way to do the same thing is:
a = range(20)
runningTotal = reduce(lambda x, y: x+[x[-1]+y], a, [0])[1:]
...but that's much less readable/maintainable, etc.
@Omnifarous suggests this should be improved to:
a = range(20)
runningTotal = reduce(lambda l, v: (l.append(l[-1] + v) or l), a, [0])
...but I still find that less immediately comprehensible than my initial suggestion.
Remember the words of Kernighan: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
A:
This can be implemented in 2 lines in Python.
Using a default parameter eliminates the need to maintain an aux variable outside, and then we just do a map to the list.
def accumulate(x, l=[0]): l[0] += x; return l[0];
map(accumulate, range(20))
A:
Use itertools.accumulate(). Here is an example:
from itertools import accumulate
a = range(20)
runningTotals = list(accumulate(a))
for i in zip(a, runningTotals):
print "{0:>3}{1:>5}".format(*i)
This only works on Python 3. On Python 2 you can use the backport in the more-itertools package.
A:
When we take the sum of a list, we designate an accumulator (memo) and then walk through the list, applying the binary function "x+y" to each element and the accumulator. Procedurally, this looks like:
def mySum(list):
memo = 0
for e in list:
memo = memo + e
return memo
This is a common pattern, and useful for things other than taking sums — we can generalize it to any binary function, which we'll supply as a parameter, and also let the caller specify an initial value. This gives us a function known as reduce, foldl, or inject[1]:
def myReduce(function, list, initial):
memo = initial
for e in list:
memo = function(memo, e)
return memo
def mySum(list):
return myReduce(lambda memo, e: memo + e, list, 0)
In Python 2, reduce was a built-in function, but in Python 3 it's been moved to the functools module:
from functools import reduce
We can do all kinds of cool stuff with reduce depending on the function we supply as its the first argument. If we replace "sum" with "list concatenation", and "zero" with "empty list", we get the (shallow) copy function:
def myCopy(list):
return reduce(lambda memo, e: memo + [e], list, [])
myCopy(range(10))
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
If we add a transform function as another parameter to copy, and apply it before concatenating, we get map:
def myMap(transform, list):
return reduce(lambda memo, e: memo + [transform(e)], list, [])
myMap(lambda x: x*2, range(10))
> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
If we add a predicate function that takes e as a parameter and returns a boolean, and use it to decide whether or not to concatenate, we get filter:
def myFilter(predicate, list):
return reduce(lambda memo, e: memo + [e] if predicate(e) else memo, list, [])
myFilter(lambda x: x%2==0, range(10))
> [0, 2, 4, 6, 8]
map and filter are sort of unfancy ways of writing list comprehensions — we could also have said [x*2 for x in range(10)] or [x for x in range(10) if x%2==0]. There's no corresponding list comprehension syntax for reduce, because reduce isn't required to return a list at all (as we saw with sum, earlier, which Python also happens to offer as a built-in function).
It turns out that for computing a running sum, the list-building abilities of reduce are exactly what we want, and probably the most elegant way to solve this problem, despite its reputation (along with lambda) as something of an un-pythonic shibboleth. The version of reduce that leaves behind copies of its old values as it runs is called reductions or scanl[1], and it looks like this:
def reductions(function, list, initial):
return reduce(lambda memo, e: memo + [function(memo[-1], e)], list, [initial])
So equipped, we can now define:
def running_sum(list):
first, rest = list[0], list[1:]
return reductions(lambda memo, e: memo + e, rest, first)
running_sum(range(10))
> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
While conceptually elegant, this precise approach fares poorly in practice with Python. Because Python's list.append() mutates a list in place but doesn't return it, we can't use it effectively in a lambda, and have to use the + operator instead. This constructs a whole new list, which takes time proportional to the length of the accumulated list so far (that is, an O(n) operation). Since we're already inside the O(n) for loop of reduce when we do this, the overall time complexity compounds to O(n2).
In a language like Ruby[2], where array.push e returns the mutated array, the equivalent runs in O(n) time:
class Array
def reductions(initial, &proc)
self.reduce [initial] do |memo, e|
memo.push proc.call(memo.last, e)
end
end
end
def running_sum(enumerable)
first, rest = enumerable.first, enumerable.drop(1)
rest.reductions(first, &:+)
end
running_sum (0...10)
> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
same in JavaScript[2], whose array.push(e) returns e (not array), but whose anonymous functions allow us to include multiple statements, which we can use to separately specify a return value:
function reductions(array, callback, initial) {
return array.reduce(function(memo, e) {
memo.push(callback(memo[memo.length - 1], e));
return memo;
}, [initial]);
}
function runningSum(array) {
var first = array[0], rest = array.slice(1);
return reductions(rest, function(memo, e) {
    return memo + e;
}, first);
}
function range(start, end) {
  return(Array.apply(null, Array(end-start)).map(function(e, i) {
    return start + i;
  }));
}
runningSum(range(0, 10));
> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
So, how can we solve this while retaining the conceptual simplicity of a reductions function that we just pass lambda x, y: x + y to in order to create the running sum function? Let's rewrite reductions procedurally. We can fix the accidentally quadratic problem, and while we're at it, pre-allocate the result list to avoid heap thrashing[3]:
def reductions(function, list, initial):
    result = [None] * (len(list) + 1)
    result[0] = initial
    for i in range(len(list)):
        result[i+1] = function(result[i], list[i])
return result
def running_sum(list):
first, rest = list[0], list[1:]
return reductions(lambda memo, e: memo + e, rest, first)
running_sum(range(0,10))
> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
This is the sweet spot for me: O(n) performance, and the optimized procedural code is tucked away under a meaningful name where it can be re-used the next time you need to write a function that accumulates intermediate values into a list.
The names reduce/reductions come from the LISP tradition, foldl/scanl from the ML tradition, and inject from the Smalltalk tradition.
Python's List and Ruby's Array are both implementations of an automatically resizing data structure known as a "dynamic array" (or std::vector in C++). JavaScript's Array is a little more baroque, but behaves identically provided you don't assign to out of bounds indices or mutate Array.length.
The dynamic array that forms the backing store of the list in the Python runtime will resize itself every time the list's length crosses a power of two. Resizing a list means allocating a new list on the heap of twice the size of the old one, copying the contents of the old list into the new one, and returning the old list's memory to the system. This is an O(n) operation, but because it happens less and less frequently as the list grows larger and larger, the time complexity of appending to a list works out to O(1) in the average case. However, the "hole" left by the old list can sometimes be difficult to recycle, depending on its position in the heap. Even with garbage collection and a robust memory allocator, pre-allocating an array of known size can save the underlying systems some work. In an embedded environment without the benefit of an OS, this kind of micro-management becomes very important.
A:
I wanted to do the same thing to generate cumulative frequencies that I could use bisect_left over - this is the way I've generated the list;
[ sum( a[:x] ) for x in range( 1, len(a)+1 ) ]
A:
Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), we can use and increment a variable within a list comprehension:
# items = range(7)
total = 0
[(x, total := total + x) for x in items]
# [(0, 0), (1, 1), (2, 3), (3, 6), (4, 10), (5, 15), (6, 21)]
This:
Initializes a variable total to 0 which symbolizes the running sum
For each item, this both:
increments total by the current looped item (total := total + x) via an assignment expression
and at the same time returns the new value of total as part of the produced mapped tuple
A:
Here's a linear time solution one liner:
list(reduce(lambda (c,s), a: (chain(c,[s+a]), s+a), l,(iter([]),0))[0])
Example:
l = range(10)
list(reduce(lambda (c,s), a: (chain(c,[s+a]), s+a), l,(iter([]),0))[0])
>>> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
In short, the reduce goes over the list accumulating the sum and constructing a list. The final [0] returns the list; [1] would be the running total value.
A:
Another one-liner, in linear time and space.
def runningSum(a):
return reduce(lambda l, x: l.append(l[-1]+x) or l if l else [x], a, None)
I'm stressing linear space here, because most of the one-liners I saw in the other proposed answers --- those based on the pattern list + [sum] or using chain iterators --- generate O(n) lists or generators and stress the garbage collector so much that they perform very poorly, in comparison to this.
A:
I would use a coroutine for this:
def runningTotal():
accum = 0
yield None
while True:
accum += yield accum
tot = runningTotal()
next(tot)
running_total = [tot.send(i) for i in xrange(N)]
A:
This is inefficient because it recomputes the sum from the beginning every time, but it is possible:
a = range(20)
runtot=[sum(a[:i+1]) for i,item in enumerate(a)]
for line in zip(a,runtot):
print line
A:
You are looking for two things: fold (reduce) and a funny function that keeps a list of the results of another function, which I have called running. I made versions both with and without an initial parameter; either way these need to go to reduce with an initial [].
def last_or_default(list, default):
if len(list) > 0:
return list[-1]
return default
def initial_or_apply(list, f, y):
if list == []:
return [y]
return list + [f(list[-1], y)]
def running_initial(f, initial):
return (lambda x, y: x + [f(last_or_default(x,initial), y)])
def running(f):
return (lambda x, y: initial_or_apply(x, f, y))
totaler = lambda x, y: x + y
running_totaler = running(totaler)
running_running_totaler = running_initial(running_totaler, [])
data = range(0,20)
running_total = reduce(running_totaler, data, [])
running_running_total = reduce(running_running_totaler, data, [])
for i in zip(data, running_total, running_running_total):
print "{0:>3}{1:>4}{2:>83}".format(*i)
These will take a long time on really large lists due to the + operator. In a functional language, if done correctly, this list construction would be O(n).
Here are the first few lines of output:
0 0 [0]
1 1 [0, 1]
2 3 [0, 1, 3]
3 6 [0, 1, 3, 6]
4 10 [0, 1, 3, 6, 10]
5 15 [0, 1, 3, 6, 10, 15]
6 21 [0, 1, 3, 6, 10, 15, 21]
| List comprehension for running total | I want to get a running total from a list of numbers.
For demo purposes, I start with a sequential list of numbers using range
a = range(20)
runningTotal = []
for n in range(len(a)):
new = runningTotal[n-1] + a[n] if n > 0 else a[n]
runningTotal.append(new)
# This one is a syntax error
# runningTotal = [a[n] for n in range(len(a)) if n == 0 else runningTotal[n-1] + a[n]]
for i in zip(a, runningTotal):
print "{0:>3}{1:>5}".format(*i)
yields
0 0
1 1
2 3
3 6
4 10
5 15
6 21
7 28
8 36
9 45
10 55
11 66
12 78
13 91
14 105
15 120
16 136
17 153
18 171
19 190
As you can see, I initialize an empty list [], then append() in each loop iteration. Is there a more elegant way to this, like a list comprehension?
| [
"A list comprehension has no good (clean, portable) way to refer to the very list it's building. One good and elegant approach might be to do the job in a generator:\ndef running_sum(a):\n tot = 0\n for item in a:\n tot += item\n yield tot\n\nto get this as a list instead, of course, use list(running_sum(a)).\n",
"If you can use numpy, it has a built-in function named cumsum that does this.\nimport numpy as np\ntot = np.cumsum(a) # returns a np.ndarray\ntot = list(tot) # if you prefer a list\n\n",
"I'm not sure about 'elegant', but I think the following is much simpler and more intuitive (at the cost of an extra variable):\na = range(20)\n\nrunningTotal = []\n\ntotal = 0\nfor n in a:\n total += n\n runningTotal.append(total)\n\nThe functional way to do the same thing is:\na = range(20)\nrunningTotal = reduce(lambda x, y: x+[x[-1]+y], a, [0])[1:]\n\n...but that's much less readable/maintainable, etc.\n@Omnifarous suggests this should be improved to:\na = range(20)\nrunningTotal = reduce(lambda l, v: (l.append(l[-1] + v) or l), a, [0])\n\n...but I still find that less immediately comprehensible than my initial suggestion.\nRemember the words of Kernighan: \"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.\"\n",
"This can be implemented in 2 lines in Python.\nUsing a default parameter eliminates the need to maintain an aux variable outside, and then we just do a map to the list.\ndef accumulate(x, l=[0]): l[0] += x; return l[0];\nmap(accumulate, range(20))\n\n",
"Use itertools.accumulate(). Here is an example:\nfrom itertools import accumulate\n\na = range(20)\nrunningTotals = list(accumulate(a))\n\nfor i in zip(a, runningTotals):\n print \"{0:>3}{1:>5}\".format(*i)\n\nThis only works on Python 3. On Python 2 you can use the backport in the more-itertools package.\n",
"When we take the sum of a list, we designate an accumulator (memo) and then walk through the list, applying the binary function \"x+y\" to each element and the accumulator. Procedurally, this looks like:\ndef mySum(list):\n memo = 0\n for e in list:\n memo = memo + e\n return memo\n\nThis is a common pattern, and useful for things other than taking sums — we can generalize it to any binary function, which we'll supply as a parameter, and also let the caller specify an initial value. This gives us a function known as reduce, foldl, or inject[1]:\ndef myReduce(function, list, initial):\n memo = initial\n for e in list:\n memo = function(memo, e)\n return memo\n\ndef mySum(list):\n return myReduce(lambda memo, e: memo + e, list, 0)\n\nIn Python 2, reduce was a built-in function, but in Python 3 it's been moved to the functools module:\nfrom functools import reduce\n\nWe can do all kinds of cool stuff with reduce depending on the function we supply as its the first argument. If we replace \"sum\" with \"list concatenation\", and \"zero\" with \"empty list\", we get the (shallow) copy function:\ndef myCopy(list):\n return reduce(lambda memo, e: memo + [e], list, [])\n\nmyCopy(range(10))\n> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nIf we add a transform function as another parameter to copy, and apply it before concatenating, we get map:\ndef myMap(transform, list):\n return reduce(lambda memo, e: memo + [transform(e)], list, [])\n\nmyMap(lambda x: x*2, range(10))\n> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\n\nIf we add a predicate function that takes e as a parameter and returns a boolean, and use it to decide whether or not to concatenate, we get filter:\ndef myFilter(predicate, list):\n return reduce(lambda memo, e: memo + [e] if predicate(e) else memo, list, [])\n\nmyFilter(lambda x: x%2==0, range(10))\n> [0, 2, 4, 6, 8]\n\nmap and filter are sort of unfancy ways of writing list comprehensions — we could also have said [x*2 for x in range(10)] or [x for x in range(10) if x%2==0]. There's no corresponding list comprehension syntax for reduce, because reduce isn't required to return a list at all (as we saw with sum, earlier, which Python also happens to offer as a built-in function).\nIt turns out that for computing a running sum, the list-building abilities of reduce are exactly what we want, and probably the most elegant way to solve this problem, despite its reputation (along with lambda) as something of an un-pythonic shibboleth. The version of reduce that leaves behind copies of its old values as it runs is called reductions or scanl[1], and it looks like this:\ndef reductions(function, list, initial):\n return reduce(lambda memo, e: memo + [function(memo[-1], e)], list, [initial])\n\nSo equipped, we can now define:\ndef running_sum(list):\n first, rest = list[0], list[1:]\n return reductions(lambda memo, e: memo + e, rest, first)\n\nrunning_sum(range(10))\n> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]\n\nWhile conceptually elegant, this precise approach fares poorly in practice with Python. Because Python's list.append() mutates a list in place but doesn't return it, we can't use it effectively in a lambda, and have to use the + operator instead. This constructs a whole new list, which takes time proportional to the length of the accumulated list so far (that is, an O(n) operation). 
Since we're already inside the O(n) for loop of reduce when we do this, the overall time complexity compounds to O(n2).\nIn a language like Ruby[2], where array.push e returns the mutated array, the equivalent runs in O(n) time:\nclass Array\n def reductions(initial, &proc)\n self.reduce [initial] do |memo, e|\n memo.push proc.call(memo.last, e)\n end\n end\nend\n\ndef running_sum(enumerable)\n first, rest = enumerable.first, enumerable.drop(1)\n rest.reductions(first, &:+)\nend\n\nrunning_sum (0...10)\n> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]\n\nsame in JavaScript[2], whose array.push(e) returns e (not array), but whose anonymous functions allow us to include multiple statements, which we can use to separately specify a return value:\nfunction reductions(array, callback, initial) {\n return array.reduce(function(memo, e) {\n memo.push(callback(memo[memo.length - 1], e));\n return memo;\n }, [initial]);\n}\n\nfunction runningSum(array) {\n var first = array[0], rest = array.slice(1);\n return reductions(rest, function(memo, e) {\n return x + y;\n }, first);\n}\n\nfunction range(start, end) {\n return(Array.apply(null, Array(end-start)).map(function(e, i) {\n return start + i;\n }\n}\n\nrunningSum(range(0, 10));\n> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]\n\nSo, how can we solve this while retaining the conceptual simplicity of a reductions function that we just pass lambda x, y: x + y to in order to create the running sum function? Let's rewrite reductions procedurally. We can fix the accidentally quadratic problem, and while we're at it, pre-allocate the result list to avoid heap thrashing[3]:\ndef reductions(function, list, initial):\n result = [None] * len(list)\n result[0] = initial\n for i in range(len(list)):\n result[i] = function(result[i-1], list[i])\n return result\n\ndef running_sum(list):\n first, rest = list[0], list[1:]\n return reductions(lambda memo, e: memo + e, rest, first)\n\nrunning_sum(range(0,10))\n> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]\n\nThis is the sweet spot for me: O(n) performance, and the optimized procedural code is tucked away under a meaningful name where it can be re-used the next time you need to write a function that accumulates intermediate values into a list.\n\nThe names reduce/reductions come from the LISP tradition, foldl/scanl from the ML tradition, and inject from the Smalltalk tradition.\nPython's List and Ruby's Array are both implementations of an automatically resizing data structure known as a \"dynamic array\" (or std::vector in C++). JavaScript's Array is a little more baroque, but behaves identically provided you don't assign to out of bounds indices or mutate Array.length.\nThe dynamic array that forms the backing store of the list in the Python runtime will resize itself every time the list's length crosses a power of two. Resizing a list means allocating a new list on the heap of twice the size of the old one, copying the contents of the old list into the new one, and returning the old list's memory to the system. This is an O(n) operation, but because it happens less and less frequently as the list grows larger and larger, the time complexity of appending to a list works out to O(1) in the average case. However, the \"hole\" left by the old list can sometimes be difficult to recycle, depending on its position in the heap. Even with garbage collection and a robust memory allocator, pre-allocating an array of known size can save the underlying systems some work. 
In an embedded environment without the benefit of an OS, this kind of micro-management becomes very important.\n\n",
"I wanted to do the same thing to generate cumulative frequencies that I could use bisect_left over - this is the way I've generated the list;\n[ sum( a[:x] ) for x in range( 1, len(a)+1 ) ]\n\n",
"Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), we can use and increment a variable within a list comprehension:\n# items = range(7)\ntotal = 0\n[(x, total := total + x) for x in items]\n# [(0, 0), (1, 1), (2, 3), (3, 6), (4, 10), (5, 15), (6, 21)]\n\nThis:\n\nInitializes a variable total to 0 which symbolizes the running sum\nFor each item, this both:\n\n\nincrements total by the current looped item (total := total + x) via an assignment expression\nand at the same time returns the new value of total as part of the produced mapped tuple\n\n\n",
"Here's a linear time solution one liner:\nlist(reduce(lambda (c,s), a: (chain(c,[s+a]), s+a), l,(iter([]),0))[0])\n\nExample:\nl = range(10)\nlist(reduce(lambda (c,s), a: (chain(c,[s+a]), s+a), l,(iter([]),0))[0])\n>>> [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]\n\nIn short, the reduce goes over the list accumulating sum and constructing an list. The final x[0] returns the list, x[1] would be the running total value.\n",
"Another one-liner, in linear time and space.\ndef runningSum(a):\n return reduce(lambda l, x: l.append(l[-1]+x) or l if l else [x], a, None)\n\nI'm stressing linear space here, because most of the one-liners I saw in the other proposed answers --- those based on the pattern list + [sum] or using chain iterators --- generate O(n) lists or generators and stress the garbage collector so much that they perform very poorly, in comparison to this.\n",
"I would use a coroutine for this:\ndef runningTotal():\n accum = 0\n yield None\n while True:\n accum += yield accum\n\ntot = runningTotal()\nnext(tot)\nrunning_total = [tot.send(i) for i in xrange(N)]\n\n",
"This is inefficient as it does it every time from beginning but possible it is:\na = range(20)\nruntot=[sum(a[:i+1]) for i,item in enumerate(a)]\nfor line in zip(a,runtot):\n print line\n\n",
"You are looking for two things: fold (reduce) and a funny function that keeps a list of the results of another function, which I have called running. I made versions both with and without an initial parameter; either way these need to go to reduce with an initial [].\ndef last_or_default(list, default):\n if len(list) > 0:\n return list[-1]\n return default\n\ndef initial_or_apply(list, f, y):\n if list == []:\n return [y]\n return list + [f(list[-1], y)]\n\ndef running_initial(f, initial):\n return (lambda x, y: x + [f(last_or_default(x,initial), y)])\n\ndef running(f):\n return (lambda x, y: initial_or_apply(x, f, y))\n\ntotaler = lambda x, y: x + y\nrunning_totaler = running(totaler)\nrunning_running_totaler = running_initial(running_totaler, [])\n\ndata = range(0,20)\nrunning_total = reduce(running_totaler, data, [])\nrunning_running_total = reduce(running_running_totaler, data, [])\n\nfor i in zip(data, running_total, running_running_total):\n print \"{0:>3}{1:>4}{2:>83}\".format(*i)\n\nThese will take a long time on really large lists due to the + operator. In a functional language, if done correctly, this list construction would be O(n).\nHere are the first few lines of output:\n0 0 [0]\n1 1 [0, 1]\n2 3 [0, 1, 3]\n3 6 [0, 1, 3, 6]\n4 10 [0, 1, 3, 6, 10]\n5 15 [0, 1, 3, 6, 10, 15]\n6 21 [0, 1, 3, 6, 10, 15, 21]\n\n"
] | [
30,
28,
12,
10,
9,
7,
3,
3,
2,
2,
1,
0,
0
] | [
"with Python 3.8 and above you can now use walrus operator\nxs = range(20)\ntotal = 0\nrun = [(total := total + d) for d in xs]\n\n"
] | [
-1
] | [
"cumulative_sum",
"list_comprehension",
"python"
] | stackoverflow_0003432830_cumulative_sum_list_comprehension_python.txt |
Q:
Extracting text from an alphanumeric reference
I have a load of bank statement data which includes a payment reference. This is free form so some include invoice numbers, their info or a name and typically it’s 16-256 characters depending on the system they use to make the payment. I’ve put the data in a pandas data frame with transaction amount, currency and date but now want to get python to ‘read’ the reference and extract any words to put them in a new column. I know how to separate based on indexing but as the data is free form I just want to look for any letters wherever they occur. An example record would be:
15122021 Amazon ref 1021/1022
So the ideal would be to pick up Amazon and ref as separate words. Any help gratefully received!
I've not been able to work out how I can search only for letters.
A:
If the target text would always be one sequence of contiguous words, you could try using str.extract as follows:
df["name"] = df["invoice"].str.extract(r'(\w+(?: \w+)*)')
| Extracting text from an alphanumeric reference | I have a load of bank statement data which includes a payment reference. This is free form so some include invoice numbers, their info or a name and typically it’s 16-256 characters depending on the system they use to make the payment. I’ve put the data in a pandas data frame with transaction amount, currency and date but now want to get python to ‘read’ the reference and extract any words to put them in a new column. I know how to separate based on indexing but as the data is free form I just want to look for any letters wherever they occur. An example record would be:
15122021 Amazon ref 1021/1022
So the ideal would be to pick up that Amazon and ref are also separate words. Any help gratefully received!
I’ve not been able to work out how I can search only for letters
| [
"If the target text would always be one sequence of contiguous words, you could try using str.extract as follows:\ndf[\"name\"] = df[\"invoice\"].str.extract(r'(\\w+(?: \\w+)*)')\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"python"
] | stackoverflow_0074666737_dataframe_python.txt |
Q:
how do you Apply math multiply a number to a decimal point python
I want to apply a lambda to float values like the ones below, removing the leading zeros after the decimal point and rounding to two decimal places:
0.412
0.0036
0.0467
0.000678
0.00000342
expected output
0.41
0.36
0.47
0.68
0.34
A:
You can use replace with astype and round.
Try this :
df["col"] = df["col"].replace("\.0*", ".", regex=True).astype(float).round(2)
# Output :
print(df)
col
0 0.41
1 0.36
2 0.47
3 0.68
4 0.34
A:
Try this:
import re
lambda_func = lambda x: re.sub(r'(\.0*)', r'.', str(x))
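On its own this substitution keeps all the remaining digits (0.000678 becomes '0.678'), so to reach the expected two-decimal output you could round afterwards. A sketch assuming the values sit in a pandas Series; note that very small floats such as 0.00000342 stringify in scientific notation ('3.42e-06'), which this simple pattern does not handle:
import pandas as pd

s = pd.Series([0.412, 0.0036, 0.0467, 0.000678])
print(s.apply(lambda_func).astype(float).round(2))
# 0    0.41
# 1    0.36
# 2    0.47
# 3    0.68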
| how do you Apply math multiply a number to a decimal point python | i want apply lambda to do multiplication which is condition type data float value like this
0.412
0.0036
0.0467
0.000678
0.00000342
expected output
0.41
0.36
0.47
0.68
0.34
| [
"You can use replace with astype and round.\nTry this :\ndf[\"col\"] = df[\"col\"].replace(\"\\.0*\", \".\", regex=True).astype(float).round(2)\n\n# Output :\nprint(df)\n\n col\n0 0.41\n1 0.36\n2 0.47\n3 0.68\n4 0.34\n\n",
"Try this:\nimport re\nlambda_func = lambda x: re.sub(r'(\\.0*)', r'.', str(x))\n\n"
] | [
1,
0
] | [] | [] | [
"apply",
"dataframe",
"numpy",
"pandas",
"python"
] | stackoverflow_0074665796_apply_dataframe_numpy_pandas_python.txt |
Q:
Python Move/Copy files/Getting names of files from folders without using os.chdir
Without using os.chdir, how can I move or copy files (specific files matching a wildcard, say ABC in the file name) from folder X (drive D) to folder Y (drive E) while the Python script itself sits in folder Z (drive F)? I will run the script from Windows Task Scheduler.
A:
How about:
subprocess.Popen('copy file.exe C:/path/to/copy/', shell=True)
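Shelling out works, but since the question asks for a wildcard match across drives without os.chdir, a pure-Python sketch with glob and shutil may be closer to what is needed (the folder paths and the ABC pattern are placeholders):
import glob
import shutil

src_pattern = r"D:\folderX\*ABC*"   # files in folder X whose name contains ABC
dst_folder = r"E:\folderY"          # destination folder Y on another drive

for path in glob.glob(src_pattern):
    shutil.copy2(path, dst_folder)  # use shutil.move(path, dst_folder) to move instead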
| Python Move/Copy files/Getting names of files from folders without using os.chdir | Without using os.chdir how to move/copy files (specific files using wild card, say ABC in file name) from folder X (drive D) to folder Y (drive E) while the python script is in folder Z (drive F), ? I will run py script from windows task scheduler.
| [
"How about:\nsubprocess.Popen('copy file.exe C:/path/to/copy/', shell=True)\n\n"
] | [
0
] | [] | [] | [
"python",
"shutil"
] | stackoverflow_0074666753_python_shutil.txt |
Q:
Regex python Match after and before a specific string
Lets say we have this
string:"Code:1,Some text some other text {fdf: more text, attr=important "
I want to write a regex that finds attr and extracts important, and also extracts the 1 after Code:, putting both in a dict.
I tried this one:
(?<=testcaseid_)[^_]+_[^_]+
but it still captures all of the preceding text.
A:
I'm not sure I understand completely, but if you want to match everything starting from "1" up to the word after attr=, you can use a regex like this:
r"1.*?attr=\w+"
| Regex python Match after and before a specific string | Lets say we have this
string:"Code:1,Some text some other text {fdf: more text, attr=important "
I want to catch the pattern using Regex that can findall attr and extract important and 1 and put them in dict.
I tried this one:
(?<=testcaseid_)[^_]+_[^_]+
but still capture all the previous
| [
"I'm not sure if I understand well, but if you want to get everything starts from \"1\" to something after attr= you can also use regex like this:\nr\"1.*?attr=\\w+\"\n\n"
] | [
0
] | [] | [] | [
"list",
"python",
"regex",
"split",
"web_scraping"
] | stackoverflow_0074666604_list_python_regex_split_web_scraping.txt |
Q:
I try to iterate over a function and don't know where the error (TypeError: 'tuple' object is not callable) is coming from
def result(player1, player2):
if player1 == 'A' and player2 == 'X' or player1 == 'B' and player2 == 'Y' or player1 == 'C' and player2 == 'Z':
state = 'draw'
return state, VALUE[player2]
if player1 == 'A' and player2 == 'Y' or player1 == 'B' and player2 == 'Z' or player1 == 'C' and player2 == 'X':
state = 'win'
return state, VALUE[player2]
if player1 == 'A' and player2 == 'Z' or player1 == 'B' and player2 == 'X' or player1 == 'C' and player2 == 'Y':
state = 'loss'
return state, VALUE[player2]
for i in range(len(new_data)):
points = 0
player1 = new_data[i][0]
player2 = new_data[i][1]
results = result(player1, player2)
if results[0] == 'draw':
points += 1 + result[1]
if results[0] == 'win':
points += 6 + result[1]
if results[0] == 'loss':
points += 1 + result[1]
I thought my function result is returning two values that are stored in the variable results as a tuple, and that I can then access with results[0], results[1].
But apparently I'm wrong.
i = 0
result = result(new_data[i][0], new_data[i][1])
print(result)
Returns my desired output
('loss', 2)
I tried this and it returned the values I expected, but these values don't seem to make it into the loop above.
A:
points += 1 + result[1]
I think you should change result[1] to results[1]. As written, you are indexing the name result, not the tuple that the function returned (which you stored in results).
points += 1 + results[1]
And in case it does not work, can you please share all error message and your custom input used in the method?
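For what it's worth, the 'tuple' object is not callable error from the title typically shows up when the name result gets rebound to the function's return value (as in result = result(...) in the snippet at the bottom of the question) and the function is then called again through that name. A sketch of the loop with the indexing fixed and the function name left untouched; points = 0 is also moved outside the loop, assuming a running total is intended:
points = 0                                  # moved outside so the total accumulates
for i in range(len(new_data)):
    player1 = new_data[i][0]
    player2 = new_data[i][1]
    results = result(player1, player2)      # do not rebind the name result
    if results[0] == 'draw':
        points += 1 + results[1]
    if results[0] == 'win':
        points += 6 + results[1]
    if results[0] == 'loss':
        points += 1 + results[1]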
A:
Now I would like to point out a few things about the code.
First, return state, VALUE[player2] returns both values packed into a single tuple, so the caller can unpack them or index them as results[0] and results[1]. Alternatively, you can collect them in a list res = [] inside result.
Lists are convenient here because they can hold different data types: store the string as res[0] = "win" / "loss" / "draw" and the value as res[1], then access the members as res[0] and res[1].
values() is a method of the dictionary type, not of tuples.
Lastly, make sure the conditions in the if statements are grouped correctly: put parentheses around each pair of and conditions so the or branches behave as intended.
Hope you'll be able to solve your problem.
| I try to iterate over a function and don't know where the error (TypeError: 'tuple' object is not callable) is coming from | def result(player1, player2):
if player1 == 'A' and player2 == 'X' or player1 == 'B' and player2 == 'Y' or player1 == 'C' and player2 == 'Z':
state = 'draw'
return state, VALUE[player2]
if player1 == 'A' and player2 == 'Y' or player1 == 'B' and player2 == 'Z' or player1 == 'C' and player2 == 'X':
state = 'win'
return state, VALUE[player2]
if player1 == 'A' and player2 == 'Z' or player1 == 'B' and player2 == 'X' or player1 == 'C' and player2 == 'Y':
state = 'loss'
return state, VALUE[player2]
for i in range(len(new_data)):
points = 0
player1 = new_data[i][0]
player2 = new_data[i][1]
results = result(player1, player2)
if results[0] == 'draw':
points += 1 + result[1]
if results[0] == 'win':
points += 6 + result[1]
if results[0] == 'loss':
points += 1 + result[1]
I thought my function result is returning two values that are stored in the variable results as a tuple, and that I can then access with results[0], results[1].
But apparently I'm wrong.
`i = 0
result = result(new_data[i][0], new_data[i][1])
print(result)
`
Returns my desired output
('loss', 2)
This I tried and it returned the values I expected. But these values don't seem to get put into the function.
| [
"points += 1 + result[1]\n\nI think you should edit the result[1] as results[1]. You are trying to reach the 1st index of the method, not the output.\npoints += 1 + results[1]\n\nAnd in case it does not work, can you please share all error message and your custom input used in the method?\n",
"Now I would like to point out a few lines of code which don't seem to make any sense.\n\nFirst, you can only return a single value from a function. Not more than that. So try to create an empty list as res[] in the function result.\nSince it provides better options to handle the data with lists as you can store different data types in the list, try storing the string as res[0]=\"win\" or \"loss\" or \"draw\" and the values as res[1]. Then you can handle the return type as a list and then access its members as list[0] and list[1].\nValues() is a function used with dictionary data type and not with tuples.\nLastly, I think you have not handled the if statement correctly. Use an ample amount of parentheses () in the if statement.\n\nHope you'll be able to solve your problem.\n"
] | [
0,
0
] | [] | [] | [
"python",
"tuples",
"typeerror"
] | stackoverflow_0074666692_python_tuples_typeerror.txt |
Q:
How to associate repeated strings with values from a dictionary in a dataframe?
I'm trying to associate in a dataframe the values of a list of numbers with the respective strings. Here's the problem:
import pandas as pd
categories = {"key1":["string1", "string2", "string3"], "key2": ["string1", "str1", "str2"]}
strings= ["string1", "string2", "string3", "string1", "str1", "str2"]
numbers = [1,2,3,4,5,6]
array = []
expected_fields = []
#Creation of the dataframe with double rows, where the first is the key of categories
#and the second is the elements of the list present in the values of categories
for key, value in categories.items():
array.extend([key]* len(value))
expected_fields.extend(value)
arrays = [array ,expected_fields]
#Creation of the dataframe
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples)
df = pd.Series(dtype='float', index=index)
for key, values in categories.items():
for value in values:
for i in range(len(strings)):
if strings[i] == value:
df[key, value] = numbers[i]
print(df)
Output:
key1 string1 4.0 <---------
string2 2.0
string3 3.0
key2 string1 4.0
str1 5.0
str2 6.0
Expected output:
key1 string1 1.0 <---------
string2 2.0
string3 3.0
key2 string1 4.0
str1 5.0
str2 6.0
The association is always going for the last element of the list due to the repeated string in strings. However I want the first element of numbers for the first repeated string and the following number for the second repeated string.
I could count the number of elements in the values of categories for each key, keep an incrementing index into strings, and add an if based on lower and upper limits inside that for loop; however, I can't take this approach due to technical limitations.
A:
Do you need a solution with pandas? How about this solution:
from collections import OrderedDict
categories = OrderedDict([("key1", ["string1", "string2", "string3"]), ("key2", ["string1", "str1", "str2"])])
def category_strings(ordered_dict):
current_id = 1
for key, strings in ordered_dict.items():
for string in strings:
yield current_id, key, string
current_id += 1
for id, key, string in category_strings(categories):
print(id, key, string)
Output:
1 key1 string1
2 key1 string2
3 key1 string3
4 key2 string1
5 key2 str1
6 key2 str2
A:
import pandas as pd
categories = {"key1":["string1", "string2", "string3"], "key2": ["string1", "str1", "str2"]}
strings= ["string1", "string2", "string3", "string1", "str1", "str2"]
numbers = [1,2,3,4,5,6]
array = []
expected_fields = []
#Creation of the dataframe with double rows, where the first is the key of categories
#and the second is the elements of the list present in the values of categories
for key, value in categories.items():
array.extend([key]* len(value))
expected_fields.extend(value)
arrays = [array ,expected_fields]
#Creation of the dataframe
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples)
df = pd.Series(dtype='float', index=index)
strings_copy = strings.copy()
for key, values in categories.items():
for value in values:
for i in range(len(strings_copy)):
if strings_copy[i] == value:
strings_copy[i] = None
df[key, value] = numbers[i]
break
print(df)
Output:
key1 string1 1.0
string2 2.0
string3 3.0
key2 string1 4.0
str1 5.0
str2 6.0
dtype: float64
| How to associate repeated strings with values from a dictionary in a dataframe? | I'm trying to associate in a dataframe the values of a list of numbers with the respective strings. Here's the problem:
import pandas as pd
categories = {"key1":["string1", "string2", "string3"], "key2": ["string1", "str1", "str2"]}
strings= ["string1", "string2", "string3", "string1", "str1", "str2"]
numbers = [1,2,3,4,5,6]
array = []
expected_fields = []
#Creation of the dataframe with double rows, where the first is the key of categories
#and the second is the elements of the list present in the values of categories
for key, value in categories.items():
array.extend([key]* len(value))
expected_fields.extend(value)
arrays = [array ,expected_fields]
#Creation of the dataframe
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples)
df = pd.Series(dtype='float', index=index)
for key, values in categories.items():
for value in values:
for i in range(len(strings)):
if strings[i] == value:
df[key, value] = numbers[i]
print(df)
Output:
key1 string1 4.0 <---------
string2 2.0
string3 3.0
key2 string1 4.0
str1 5.0
str2 6.0
Expected output:
key1 string1 1.0 <---------
string2 2.0
string3 3.0
key2 string1 4.0
str1 5.0
str2 6.0
The association is always going for the last element of the list due to the repeated string in strings. However I want the first element of numbers for the first repeated string and the following number for the second repeated string.
I could count the number of elements of the values of the dictionary categories for each key and perform an increment in the for loop correspondent to the strings and based on the lower and upper limit add an if inside that for loop, however I can't go for this approach due to technical limitations.
| [
"Do you need a solution with pandas? How about this solution:\nfrom collections import OrderedDict\n\ncategories = OrderedDict([(\"key1\", [\"string1\", \"string2\", \"string3\"]), (\"key2\", [\"string1\", \"str1\", \"str2\"])])\n\ndef category_strings(ordered_dict):\n current_id = 1\n for key, strings in ordered_dict.items():\n for string in strings:\n yield current_id, key, string\n current_id += 1\n \nfor id, key, string in category_strings(categories):\n print(id, key, string)\n\nOutput:\n1 key1 string1\n2 key1 string2\n3 key1 string3\n4 key2 string1\n5 key2 str1\n6 key2 str2\n\n",
"import pandas as pd\ncategories = {\"key1\":[\"string1\", \"string2\", \"string3\"], \"key2\": [\"string1\", \"str1\", \"str2\"]}\nstrings= [\"string1\", \"string2\", \"string3\", \"string1\", \"str1\", \"str2\"]\nnumbers = [1,2,3,4,5,6]\n\narray = []\nexpected_fields = []\n\n#Creation of the dataframe with double rows, where the first is the key of categories\n#and the second is the elements of the list present in the values of categories\nfor key, value in categories.items():\n array.extend([key]* len(value))\n expected_fields.extend(value)\n \narrays = [array ,expected_fields]\n\n#Creation of the dataframe\ntuples = list(zip(*arrays))\nindex = pd.MultiIndex.from_tuples(tuples)\ndf = pd.Series(dtype='float', index=index)\n\nstrings_copy = strings.copy()\nfor key, values in categories.items():\n for value in values:\n for i in range(len(strings_copy)):\n if strings_copy[i] == value:\n strings_copy[i] = None\n df[key, value] = numbers[i]\n break\nprint(df)\n\nOutput:\nkey1 string1 1.0\n string2 2.0\n string3 3.0\nkey2 string1 4.0\n str1 5.0\n str2 6.0\ndtype: float64\n\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"python_2.7",
"python_3.x"
] | stackoverflow_0074666167_dataframe_pandas_python_python_2.7_python_3.x.txt |
Q:
How to print multilevel nested dictonary in python
Here is my code
print(data['a'][0]['aa'])
print(data['a'][0].keys())
This is input->
data={
'a':[{
'aa':{'aax':5,'aay':6,'aaz':7},
'ab':{'abx':8,'aby':9,'abz':10}
},
{
'aaa':{'aaax':11,'aaay':12,'aaaz':13},
'aab':{'aabx':14,'aaby':15,'aabz':16}
}]
}
How can I print the dictionary to produce the output below?
Output:
Key:aax Value: 5
Key:aay Value: 6
Key:aaz Value: 7
Key:abx Value: 8
Key:aby Value: 9
Key:abz Value: 10
Key:aaax Value: 11
How can I loop through this type of data and print all of it? I can access a single entry, but how can I print all of the data?
A:
just use simple for loop
for outer_list in data['a']:
for outer_key, outer_value in outer_list.items():
for key, value in outer_value.items():
print("Key: {}, Value: {}".format(key, value))
output:
Key: aax, Value: 5
Key: aay, Value: 6
Key: aaz, Value: 7
Key: abx, Value: 8
Key: aby, Value: 9
Key: abz, Value: 10
Key: aaax, Value: 11
Key: aaay, Value: 12
Key: aaaz, Value: 13
Key: aabx, Value: 14
Key: aaby, Value: 15
Key: aabz, Value: 16
A:
To print all the key-value pairs in a multilevel nested dictionary, you can use a nested loop structure. Here is an example:
for outer_dict in data['a']:
for inner_dict in outer_dict.values():
for key, value in inner_dict.items():
print(f"Key: {key} Value: {value}")
| How to print multilevel nested dictonary in python | Here is my code
print(data['a'][0]['aa'])
print(data['a'][0].keys())
This is input->
data={
'a':[{
'aa':{'aax':5,'aay':6,'aaz':7},
'ab':{'abx':8,'aby':9,'abz':10}
},
{
'aaa':{'aaax':11,'aaay':12,'aaaz':13},
'aab':{'aabx':14,'aaby':15,'aabz':16}
}]
}
How can i print the dictionary like this output
Output:
Key:aax Value: 5
Key:aay Value: 6
Key:aaz Value: 7
Key:abx Value: 8
Key:aby Value: 9
Key:abz Value: 10
Key:aaax Value: 11
How can i loop through in this type of data.How can i loop through and print all the data I can access the single data but how can print all data.
| [
"just use simple for loop\nfor outer_list in data['a']:\n for outer_key, outer_value in outer_list.items():\n for key, value in outer_value.items():\n print(\"Key: {}, Value: {}\".format(key, value))\n\noutput:\nKey: aax, Value: 5\nKey: aay, Value: 6\nKey: aaz, Value: 7\nKey: abx, Value: 8\nKey: aby, Value: 9\nKey: abz, Value: 10\nKey: aaax, Value: 11\nKey: aaay, Value: 12\nKey: aaaz, Value: 13\nKey: aabx, Value: 14\nKey: aaby, Value: 15\nKey: aabz, Value: 16\n\n",
"To print all the key-value pairs in a multilevel nested dictionary, you can use a nested loop structure. Here is an example:\nfor outer_dict in data['a']:\n for inner_dict in outer_dict.values():\n for key, value in inner_dict.items():\n print(f\"Key: {key} Value: {value}\")\n\n"
] | [
1,
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0074666762_dictionary_python.txt |
Q:
Any Easy Fix for Module Not Found Error ‘TKinter’?
I imported tkinter, but I am not able to get the expected output since it gives a ModuleNotFoundError.
| Any Easy Fix for Module Not Found Error ‘TKinter’? | imported tkinter
jnot able to get the expected output since it gives an error.
imported tkinter
jnot able to get the expected output since it gives an error.
| [] | [] | [
"Firstly, import it like:\nfrom Tkinter import *\n\nif there are still errors, be sure that module installed at your inventory, Open terminal, after reaching the folder you're working, enter pip list.If tkinter is not there, you might be installed it to somewhere else than your environment/folder. In the same terminal, enter\npip3 install tk\n\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074666635_python.txt |
Q:
Flask Sqlalchemy one to many foreignkey error
I made two simple classes as model:
app = Flask(__name__)
app.secret_key = 'winwin'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///abc.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.permanent_session_lifetime = timedelta(minutes=5)
db = SQLAlchemy(app)
class User(db.Model):
__tablename__ = 'user'
id = db.Column(db.Integer(), unique=True ,primary_key=True)
fb_id = db.Column(db.Integer(), unique=True)
email = db.Column(db.String(20), unique=True,)
login = db.relationship('Login', backref='user')
class Login(db.Model):
id = db.Column(db.Integer, unique=True ,primary_key=True)
email = db.Column(db.String(20), unique=True, nullable=False)
password = db.Column(db.String(20),unique=True,nullable=False)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
with app.app_context():  # add some test data
aaa = User(email='[email protected]',fb_id = 111)
bbb = User(email='[email protected]',fb_id = 222)
ccc = User(email='[email protected]',fb_id = 333)
ddd = User(email='[email protected]',fb_id = 444)
eee = User(email='[email protected]',fb_id = 555)
db.session.add_all(['aaa,bbb,ccc'])
db.session.commit()
aa = Login(email='111@123',password='111',user_id=user.id)
bb = Login(email='222@123',password='222',user_id=user.id)
db.session.add_all(['aa,bb'])
db.session.commit()
When i run in vscode it throw me an error:
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'builtins.str' is not mapped
It seems to tell me that the parent table can't be found, but I have already set the foreign key and the relationship.
Anyone know what I did wrong? Thanks!
A:
session.add_all expects a list of model instances as its argument, but you are passing a list containing a string.
So instead of
db.session.add_all(['aaa,bbb,ccc'])
pass the objects that you created, like this:
db.session.add_all([aaa, bbb, ccc])
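The same applies to the Login objects further down: db.session.add_all(['aa,bb']) should be db.session.add_all([aa, bb]), and user_id=user.id refers to a name user that was never defined. A minimal sketch of the test-data block with instances instead of strings (here both logins are tied to aaa purely for illustration; thanks to the backref you could also pass user=aaa instead of user_id):
with app.app_context():
    aaa = User(email='[email protected]', fb_id=111)
    db.session.add(aaa)
    db.session.commit()

    aa = Login(email='111@123', password='111', user_id=aaa.id)
    bb = Login(email='222@123', password='222', user_id=aaa.id)
    db.session.add_all([aa, bb])
    db.session.commit()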
| Flask Sqlalchemy one to many foreignkey error | I made two simple classes as model:
app = Flask(__name__)
app.secret_key = 'winwin'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///abc.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.permanent_session_lifetime = timedelta(minutes=5)
db = SQLAlchemy(app)
class User(db.Model):
__tablename__ = 'user'
id = db.Column(db.Integer(), unique=True ,primary_key=True)
fb_id = db.Column(db.Integer(), unique=True)
email = db.Column(db.String(20), unique=True,)
login = db.relationship('Login', backref='user')
class Login(db.Model):
id = db.Column(db.Integer, unique=True ,primary_key=True)
email = db.Column(db.String(20), unique=True, nullable=False)
password = db.Column(db.String(20),unique=True,nullable=False)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
with app.app_context():  # add some test data
aaa = User(email='[email protected]',fb_id = 111)
bbb = User(email='[email protected]',fb_id = 222)
ccc = User(email='[email protected]',fb_id = 333)
ddd = User(email='[email protected]',fb_id = 444)
eee = User(email='[email protected]',fb_id = 555)
db.session.add_all(['aaa,bbb,ccc'])
db.session.commit()
aa = Login(email='111@123',password='111',user_id=user.id)
bb = Login(email='222@123',password='222',user_id=user.id)
db.session.add_all(['aa,bb'])
db.session.commit()
When i run in vscode it throw me an error:
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'builtins.str' is not mapped
It seems to be telling me that the parent table can't be found, but I have already set the foreign key and the relationship.
Anyone know what I did wrong? Thanks!
| [
"session.add_all expects a list of model instances as its argument, but you are passing a list containing a string.\nIn instead of\ndb.session.add_all(['aaa,bbb,ccc'])\n\npass the objects that you created, like this:\ndb.session.add_all([aaa, bbb, ccc])\n\n"
] | [
0
] | [] | [] | [
"flask_sqlalchemy",
"foreign_keys",
"python"
] | stackoverflow_0074664654_flask_sqlalchemy_foreign_keys_python.txt |
Q:
Grouping in regular expression with python
I have pandas series which looks like:
m = pd.Series(['expected != is --> found missing lices ## expected: 2.25 || is: 4.5 || expected: 3 || is: 2 ##','expected != is --> found missing lices ## expected: 3.35 || is: 5.5 || expected: 3 || is: 3 ##',
'expected != is --> found missing lices ## expected: 2.25 || is: 4.5 || expected: 3 || is: 2 ##'])
What I would like to do is replace each element of this series with:
'expected != is --> found missing lices'
I use:
m = m.replace('expected != is --> found missing lices ## expected: {[0-9]\d*(\.\d+)?} || is: {[0-9]\d*(\.\d+)?} || expected: {[0-9]\d*} || is: {[0-9]\d*} ##','expected != is --> found missing lices')
However, I do not get the correct result. I am new to using regular expressions; I would be glad if someone could explain which part is defined incorrectly.
A:
You can use
m = m.replace(r'expected != is --> found missing lices ## expected: \d+(?:\.\d+)? \|\| is: [0-9]\d*(\.\d+)? \|\| expected: \d+ \|\| is: \d+ ##', 'expected != is --> found missing lices', regex=True)
See the regex demo
Note:
{...} is not a grouping construct in regexps, you need (...) to group and capture, or (?:...) to just group patterns, but in your case, you just do not need it
The | char is special and needs escaping
[0-9]\d* is basically \d+, one or more digits.
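As a quick check, applying the replacement to the series from the question should leave every element equal to the shortened message (a sketch, assuming m is defined as above):
print(m.unique())
# ['expected != is --> found missing lices']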
| Grouping in regular expression with python | I have pandas series which looks like:
m = pd.Series(['expected != is --> found missing lices ## expected: 2.25 || is: 4.5 || expected: 3 || is: 2 ##','expected != is --> found missing lices ## expected: 3.35 || is: 5.5 || expected: 3 || is: 3 ##',
'expected != is --> found missing lices ## expected: 2.25 || is: 4.5 || expected: 3 || is: 2 ##'])
What I would like to do is replace each element of this series with:
'expected != is --> found missing lices'
I use:
m = m.replace('expected != is --> found missing lices ## expected: {[0-9]\d*(\.\d+)?} || is: {[0-9]\d*(\.\d+)?} || expected: {[0-9]\d*} || is: {[0-9]\d*} ##','expected != is --> found missing lices')
However, I do not get the correct result. I am new to using regular expressions; I would be glad if someone could explain which part is defined incorrectly.
| [
"You can use\nm = m.replace(r'expected != is --> found missing lices ## expected: \\d+(?:\\.\\d+)? \\|\\| is: [0-9]\\d*(\\.\\d+)? \\|\\| expected: \\d+ \\|\\| is: \\d+ ##', 'expected != is --> found missing lices', regex=True)\n\nSee the regex demo\nNote:\n\n{...} is not a grouping construct in regexps, you need (...) to group and capture, or (?:...) to just group patterns, but in your case, you just do not need it\nThe | char is special and needs escaping\n[0-9]\\d* is basically \\d+, one or more digits.\n\n"
] | [
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074666518_python_regex.txt |
Q:
overwrite dataframe rows with merge
I am trying to overwrite specific rows and columns from one dataframe with a second dataframe rows and columns. I can't give the actual data but I will use a proxy here.
Here is an example and what I have tried:
df1
UID B C D
0 X14 cat red One
1 X26 cat blue Two
2 X99 cat pink One
3 X54 cat pink One
df2
UID B C EX2
0 X14 dog blue coat
1 X88 rat green jacket
2 X99 bat red glasses
3 X29 bat red shoes
What I want to do here is overwrite columns B and C in df1 with the values in df2, matched on UID. Therefore in this example X88 and X29 from df2 would not appear in the result. Column D would not be affected, and EX2 would not be carried over either.
The outcome would look as such:
df1
UID B C D
0 X14 dog blue One
1 X26 cat blue Two
2 X99 bat red One
3 X54 cat pink One
I looked at this solution : Pandas merge two dataframe and overwrite rows
However this appears to only update null values whereas I want an overwrite.
My attempt looked this like:
df = df1.merge(df2.filter(['B', 'C']), on=['B', 'C'], how='left')
For my data this actually doesn't seem to overwrite anything. Please could someone explain why this would not work?
Thanks
A:
You can approach this by using reindex_like and combine_first.
Try this :
out = (
df2.set_index("UID")
.reindex_like(df1.set_index("UID"))
.combine_first(df1.set_index("UID"))
.reset_index()
)
# Output :
print(out)
UID B C D
0 X14 dog blue One
1 X26 cat blue Two
2 X99 bat red One
3 X54 cat pink One
A:
One approach could be as follows:
First, use df.set_index to make column UID your index (inplace).
Next, use df.update with parameter overwrite set to True (also use set_index here for the "other" df: df2). This will overwrite all the columns that the two dfs have in common (i.e. B and C) based on index matches (i.e. now UID).
Finally, restore the standard index using df.reset_index.
df1.set_index('UID', inplace=True)
df1.update(df2.set_index('UID'), overwrite=True)
df1.reset_index(inplace=True)
print(df1)
UID B C D
0 X14 dog blue One
1 X26 cat blue Two
2 X99 bat red One
3 X54 cat pink One
A:
Using Update function
df1.set_index('UID', inplace=True)
df2.set_index('UID', inplace=True)
df1.update(df2)
df1.reset_index(inplace=True)
print(df1)
Output
UID B C D
0 X14 dog blue One
1 X26 cat blue Two
2 X99 bat red One
3 X54 cat pink One
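As for why the merge attempt in the question does not overwrite anything: df2.filter(['B', 'C']) keeps only the join columns, so the merge has no extra df2 columns to bring across, and joining on ['B', 'C'] matches rows by their B/C values (which differ between the two frames) instead of by UID, so the output looks identical to df1. A merge-based sketch that keys on UID instead (column names as in the example above):
merged = df1.merge(df2[['UID', 'B', 'C']], on='UID', how='left', suffixes=('', '_new'))
# keep the df2 value where it exists, otherwise fall back to the original df1 value
merged['B'] = merged['B_new'].fillna(merged['B'])
merged['C'] = merged['C_new'].fillna(merged['C'])
out = merged.drop(columns=['B_new', 'C_new'])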
| overwrite dataframe rows with merge | I am trying to overwrite specific rows and columns from one dataframe with a second dataframe rows and columns. I can't give the actual data but I will use a proxy here.
Here is an example and what I have tried:
df1
UID B C D
0 X14 cat red One
1 X26 cat blue Two
2 X99 cat pink One
3 X54 cat pink One
df2
UID B C EX2
0 X14 dog blue coat
1 X88 rat green jacket
2 X99 bat red glasses
3 X29 bat red shoes
What I want to do here is overwrite columns B and C in df1 with the values in df2, matched on UID. Therefore in this example X88 and X29 from df2 would not appear in the result. Column D would not be affected, and EX2 would not be carried over either.
The outcome would look as such:
df1
UID B C D
0 X14 dog blue One
1 X26 cat blue Two
2 X99 bat red One
3 X54 cat pink One
I looked at this solution : Pandas merge two dataframe and overwrite rows
However this appears to only update null values whereas I want an overwrite.
My attempt looked this like:
df = df1.merge(df2.filter(['B', 'C']), on=['B', 'C'], how='left')
For my data this actually doesn't seem to overwrite anything. Please could someone explain why this would not work?
Thanks
| [
"You can approach this by using reindex_like and combine_first.\nTry this :\nout = (\n df2.set_index(\"UID\")\n .reindex_like(df1.set_index(\"UID\"))\n .combine_first(df1.set_index(\"UID\"))\n .reset_index()\n )\n\n# Output :\nprint(out)\n\n UID B C D\n0 X14 dog blue One\n1 X26 cat blue Two\n2 X99 bat red One\n3 X54 cat pink One\n\n",
"One approach could be as follows:\n\nFirst, use df.set_index to make column UID your index (inplace).\nNext, use df.update with parameter overwrite set to True (also use set_index here for the \"other\" df: df2). This will overwrite all the columns that the two dfs have in common (i.e. B and C) based on index matches (i.e. now UID).\nFinally, restore the standard index using df.reset_index.\n\ndf1.set_index('UID', inplace=True)\ndf1.update(df2.set_index('UID'), overwrite=True)\ndf1.reset_index(inplace=True)\nprint(df1)\n\n UID B C D\n0 X14 dog blue One\n1 X26 cat blue Two\n2 X99 bat red One\n3 X54 cat pink One\n\n",
"Using Update function\ndf1.set_index('UID', inplace=True)\ndf2.set_index('UID', inplace=True)\n\ndf1.update(df2)\ndf1.reset_index(inplace=True)\nprint(df1)\n\nOutput\n UID B C D\n0 X14 dog blue One\n1 X26 cat blue Two\n2 X99 bat red One\n3 X54 cat pink One\n\n"
] | [
1,
1,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074666769_dataframe_pandas_python.txt |
Q:
How to add a Zero-or-more-condition (?) to multiple characters via regex without creating a capturing group?
The function rearrange_name should be given a name in the format:
Last Name (Normal or Double-barrelled name) followed by a "," " " and the First Name (either just one first name or together with middle initial name or full middle name)
Then the name should be rearranged to print it out as first name + last name.
This is the start of the code.
import re
def rearrange_name(name):
result = re.search (r"^(\w*), (\w*)$", name)
if result == None:
return name
return "{} {}".format(result[2], result[1])
name=rearrange_name("Kennedy, John F.")
print(name)
I know this specific problem has already been posted before
(Fix the regular expression used in the rearrange_name function so that it can match middle names, middle initials, as well as double surnames),
but I have a problem with the solution that was given that time, as it allows nonsense names like "-, John F." or " , John F." to be processed as well. I would have added a comment, but I don't have any reputation at all. This is my first post ever on Stack Overflow.
I'd like to change the code so that it is 100% correct.
The original solution given:
import re
def rearrange_name(name):
result = re.search(r"^([\w -]+), ([\w. ]+)$", name)
if result == None:
return name
return "{} {}".format(result[2], result[1])
name=rearrange_name("Kennedy, John F.")
print(name)
name=rearrange_name("Kennedy, John Fitzgerald")
print(name)
name=rearrange_name("Kennedy-McJohnson, John Fitzgerald")
print(name)
My solution approach, which you can see in the screenshot from regex101.com, detects all the possible names correctly, but the groups aren't captured the way they should be.
(screenshot of the regex101.com results omitted)
I am struggling with it, as in my opinion you have to use optional (...)? sequences as groups, which then aren't detected by the print function.
To give some examples:
These should all work and everything else shouldn't (obviously, varying letters should be allowed):
"Kennedy, John"
just normal Last name + First name
Output: John Kennedy
"Kennedy, John F." - Last name + First name + Middle name initials
Output: John F. Kennedy
"Kennedy, John Fitzgerald" Last name + First name + Middle name
John Fitzgerald Kennedy
"Kennedy-McJohnson, John Fitzgerald" Last name double barreled + First name + Middle name
Output: John Fitzgerald Kennedy-McJohnson
"Kennedy-McJohnson, John F." Last name double barreled + First name + Middle name initials
John F. Kennedy-McJohnson
Any letter may of course be swapped for another letter.
Characters that should be allowed: letters, plus the spaces in between the names, the "." for the middle initial and the "-" for the double-barrelled name.
Invalid input should be returned unchanged, for example:
input: |||?!**Kennedy, John F#####.
output:
|||?!**Kennedy, John F#####.
So if it is a valid name, the order is changed and put to the screen.
If it is not a valid name, the name is printed out the way it is presented first.
A:
Try the pattern:
([A-Z][a-zA-Z]+(?:-[A-Z][a-zA-Z]+)?), ([A-Z][a-zA-Z]+\s*(?:[A-Z][a-zA-Z]+|[A-Z]\.)?)
Regex demo.
import re
pat = re.compile(
r"([A-Z][a-zA-Z]+(?:-[A-Z][a-zA-Z]+)?), ([A-Z][a-zA-Z]+\s*(?:[A-Z][a-zA-Z]+|[A-Z]\.)?)"
)
def rearrange_name(name):
m = pat.match(name)
if m:
return "{} {}".format(m.group(2), m.group(1))
return name
name = rearrange_name("Kennedy, John F.")
print(name)
name = rearrange_name("Kennedy, John Fitzgerald")
print(name)
name = rearrange_name("Kennedy-McJohnson, John Fitzgerald")
print(name)
Prints:
John F. Kennedy
John Fitzgerald Kennedy
John Fitzgerald Kennedy-McJohnson
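One caveat, offered as an assumption rather than a tested guarantee: pat.match only anchors at the start of the string, so input with trailing junk (for example "Kennedy, John F#####.") could still be rearranged. If the whole string must be a valid name, pat.fullmatch (or explicit ^...$ anchors) is the stricter check:
def rearrange_name(name):
    m = pat.fullmatch(name)
    if m:
        return "{} {}".format(m.group(2), m.group(1))
    return name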
| How to add a Zero-or-more-condition (?) to multiple characters via regex without creating a capturing group? | The function rearrange_name should be given a name in the format:
Last Name (Normal or Double-barrelled name) followed by a "," " " and the First Name (either just one first name or together with middle initial name or full middle name)
Then the name should be rearranged to print it out as first name + last name.
This is the start of the code.
import re
def rearrange_name(name):
result = re.search (r"^(\w*), (\w*)$", name)
if result == None:
return name
return "{} {}".format(result[2], result[1])
name=rearrange_name("Kennedy, John F.")
print(name)
I know this specific problem has already been posted before
(Fix the regular expression used in the rearrange_name function so that it can match middle names, middle initials, as well as double surnames),
but I have a problem with the solution that was given that time, as it allows nonsense names like "-, John F." or " , John F." to be processed as well. I would have added a comment, but I don't have any reputation at all. This is my first post ever on Stack Overflow.
I'd like to change the code so that it is 100% correct.
The original solution given:
import re
def rearrange_name(name):
result = re.search(r"^([\w -]+), ([\w. ]+)$", name)
if result == None:
return name
return "{} {}".format(result[2], result[1])
name=rearrange_name("Kennedy, John F.")
print(name)
name=rearrange_name("Kennedy, John Fitzgerald")
print(name)
name=rearrange_name("Kennedy-McJohnson, John Fitzgerald")
print(name)
My solution approach, which you can see in the screenshot from regex101.com, detects all the possible names correctly, but the groups aren't captured the way they should be.
(screenshot of the regex101.com results omitted)
I am struggling with it, as in my opinion you have to use optional (...)? sequences as groups, which then aren't detected by the print function.
To give some examples:
These should all work and everything else shouldn't (obviously, varying letters should be allowed):
"Kennedy, John"
just normal Last name + First name
Output: John Kennedy
"Kennedy, John F." - Last name + First name + Middle name initials
Output: John F. Kennedy
"Kennedy, John Fitzgerald" Last name + First name + Middle name
John Fitzgerald Kennedy
"Kennedy-McJohnson, John Fitzgerald" Last name double barreled + First name + Middle name
Output: John Fitzgerald Kennedy-McJohnson
"Kennedy-McJohnson, John F." Last name double barreled + First name + Middle name initials
John F. Kennedy-McJohnson
Any letter may of course be swapped for another letter.
Characters that should be allowed: letters, plus the spaces in between the names, the "." for the middle initial and the "-" for the double-barrelled name.
Invalid input should be returned unchanged, for example:
input: |||?!**Kennedy, John F#####.
output:
|||?!**Kennedy, John F#####.
So if it is a valid name, the order is changed and put to the screen.
If it is not a valid name, the name is printed out the way it is presented first.
| [
"Try the pattern:\n([A-Z][a-zA-Z]+(?:-[A-Z][a-zA-Z]+)?), ([A-Z][a-zA-Z]+\\s*(?:[A-Z][a-zA-Z]+|[A-Z]\\.)?)\n\nRegex demo.\nimport re\n\n\npat = re.compile(\n r\"([A-Z][a-zA-Z]+(?:-[A-Z][a-zA-Z]+)?), ([A-Z][a-zA-Z]+\\s*(?:[A-Z][a-zA-Z]+|[A-Z]\\.)?)\"\n)\n\n\ndef rearrange_name(name):\n m = pat.match(name)\n if m:\n return \"{} {}\".format(m.group(2), m.group(1))\n\n return name\n\n\nname = rearrange_name(\"Kennedy, John F.\")\nprint(name)\n\nname = rearrange_name(\"Kennedy, John Fitzgerald\")\nprint(name)\n\nname = rearrange_name(\"Kennedy-McJohnson, John Fitzgerald\")\nprint(name)\n\nPrints:\nJohn F. Kennedy\nJohn Fitzgerald Kennedy\nJohn Fitzgerald Kennedy-McJohnson\n\n"
] | [
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074666500_python_regex.txt |
Q:
reversed regex machine implementation
I'm trying to match a string starting from the last character to fail as soon as possible. This way I can fail a match with a custom string cstr (see specification below) with least amount of operations (4th property).
From a theoretical perspective the regex can be represented as a finite state machine and the arrows can be flipped, creating the reversed regex.
I'm looking for an implementation of this: a library/program to which I can give the string and the pattern. cstr is implemented in Python, so if possible a Python module. (For the curious, the i-th character is not calculated until needed.) For anything else I would need to do much more work, because cstr's calculation is hard to port to another language.
The implementation doesn't have to cover the full regex syntax. I'm looking for the basics. No lookaheads or fancy stuff. See the specification below.
I may be lacking common knowledge. Please do comment obvious things, too.
Specification
The custom string cstr has the following properties:
String can be calculated in finite time.
String has finite length
The last character is known
Every previous character requires a costly calculation
Until the string is calculated fully, length is unknown
When the string is calcualted fully, I want to match it with a simple regex which may contain these from the syntax. No look aheads or fancy stuff.
alphanumeric characters
unicode characters
., *, +, ?, \w, \W, [], |, escape char \, range specification with { , }
PS: This is not a homework question. I'm trying to formulate my question as clear as possible.
A:
OP here. Here are some thoughts:
Since I'm looking for an unoptimized regex machine, I have to build it myself, which takes time.
Alternatively we can define an upper bound for the cstr length and create all strings that match the given regex with length < upper bound. Then we put all solutions into a trie data structure and match against it. This depends on the use case, and maybe a cache can be involved.
What I'm going for is the Python module greenery
from greenery import parse
pattern = parse.Pattern(...)
pattern.reversed()
...
This sometimes provides a good matching experience. Sometimes not, but it is OK for me.
| reversed regex machine implementation | I'm trying to match a string starting from the last character to fail as soon as possible. This way I can fail a match with a custom string cstr (see specification below) with least amount of operations (4th property).
From a theoretical perspective the regex can be represented as a finite state machine and the arrows can be flipped, creating the reversed regex.
I'm looking for an implementation of this: a library/program to which I can give the string and the pattern. cstr is implemented in Python, so if possible a Python module. (For the curious, the i-th character is not calculated until needed.) For anything else I would need to do much more work, because cstr's calculation is hard to port to another language.
The implementation doesn't have to cover the full regex syntax. I'm looking for the basics. No lookaheads or fancy stuff. See the specification below.
I may be lacking common knowledge. Please do comment obvious things, too.
Specification
The custom string cstr has the following properties:
String can be calculated in finite time.
String has finite length
The last character is known
Every previous character requires a costly calculation
Until the string is calculated fully, length is unknown
When the string is calcualted fully, I want to match it with a simple regex which may contain these from the syntax. No look aheads or fancy stuff.
alphanumeric characters
unicode characters
., *, +, ?, \w, \W, [], |, escape char \, range specification with { , }
PS: This is not a homework question. I'm trying to formulate my question as clear as possible.
| [
"OP here. Here are some thougts:\n\nSince I'm looking for an unoptimized regex mashine, I have to build it myself, which takes time.\n\nAlternatively we can define an upperbound for cstr length and create all strings that matches given regex with length < upperbound. Then we put all solutions to a tire data structure and match it. This depends on the use case and maybe a cache can be involved.\n\nWhat I'm going for is python module greenery\n\n\nfrom greenery import parse\npattern = parse.Pattern(...)\npattern.reversed()\n...\n\nthis sometimes provieds a good matching experience. Sometimes not but it is ok for me.\n"
] | [
0
] | [] | [] | [
"implementation",
"javascript",
"python",
"regex"
] | stackoverflow_0074665144_implementation_javascript_python_regex.txt |
Q:
Flask-Caching use UWSGI cache with NGINX
uWSGI is connected to the Flask app via a UNIX socket:
NGINX (LISTEN TO PORT 80) <-> UWSGI (LISTEN PER UNIX-SOCKET) <-> FLASK-APP
I have initialized a uwsgi cache to handle global data.
I want to handle the cache with python package flask-caching.
I am trying to init the Cache-instance with the correct cache address. There seems to be something wrong.
I think that the parameters for app.run() are not relevant for uwsgi.
If I set a cache entry, it always returns None:
@app.route("/")
def test():
cache.set("test", "OK", timeout=0)
a = cache.get("test")
return a
main.py
from flask import Flask
from flask_caching import Cache
app = Flask(__name__)
# Check Configuring Flask-Caching section for more details
cache = Cache(app, config={'CACHE_TYPE': 'uwsgi', 'CACHE_UWSGI_NAME':'mycache@localhost'})
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
uwsgi.ini
[uwsgi]
module = main
callable = app
cache2 = name=mycache,items=100
nginx.conf
server {
listen 80;
location / {
try_files $uri @app;
}
location @app {
include uwsgi_params;
uwsgi_pass unix:///tmp/uwsgi.sock;
}
location /static {
alias /app/testapp/static;
}
}
I am working with the docker build from https://github.com/tiangolo/uwsgi-nginx-flask-docker. The app is working, except for the cache.
A:
Be aware when spawning multiple processes for NGINX. Every process handles its own cache. Without an additional layer, it is not possible to access a cache from a different nginx process.
This answer was posted as an edit to the question Flask-Caching use UWSGI cache with NGINX by the OP ewro under CC BY-SA 4.0.
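A common way to add such a layer is to point Flask-Caching at a backend that all worker processes share, for example Redis. A minimal sketch (the Redis URL is an assumption; adjust it to your setup):
cache = Cache(app, config={
    'CACHE_TYPE': 'RedisCache',  # 'redis' on older Flask-Caching versions
    'CACHE_REDIS_URL': 'redis://localhost:6379/0',
})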
| Flask-Caching use UWSGI cache with NGINX | uWSGI is connected to the Flask app via a UNIX socket:
NGINX (LISTEN TO PORT 80) <-> UWSGI (LISTEN PER UNIX-SOCKET) <-> FLASK-APP
I have initialized a uwsgi cache to handle global data.
I want to handle the cache with python package flask-caching.
I am trying to init the Cache-instance with the correct cache address. There seems to be something wrong.
I think that the parameters for app.run() are not relevant for uwsgi.
If I set a cache entry, it always returns None:
@app.route("/")
def test():
cache.set("test", "OK", timeout=0)
a = cache.get("test")
return a
main.py
from flask import Flask
from flask_caching import Cache
app = Flask(__name__)
# Check Configuring Flask-Caching section for more details
cache = Cache(app, config={'CACHE_TYPE': 'uwsgi', 'CACHE_UWSGI_NAME':'mycache@localhost'})
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
uwsgi.ini
[uwsgi]
module = main
callable = app
cache2 = name=mycache,items=100
nginx.conf
server {
listen 80;
location / {
try_files $uri @app;
}
location @app {
include uwsgi_params;
uwsgi_pass unix:///tmp/uwsgi.sock;
}
location /static {
alias /app/testapp/static;
}
}
I am working with the docker build from https://github.com/tiangolo/uwsgi-nginx-flask-docker. The app is working, except for the cache.
| [
"Be aware of using of spawning multiple processes for NGINX. Every process handles its own cache. Without an additional layer, it is not possible to access to a cache from different nginx process.\n\nThis answer was posted as an edit to the question Flask-Caching use UWSGI cache with NGINX by the OP ewro under CC BY-SA 4.0.\n"
] | [
0
] | [] | [] | [
"flask_cache",
"flask_caching",
"nginx",
"python",
"uwsgi"
] | stackoverflow_0052096704_flask_cache_flask_caching_nginx_python_uwsgi.txt |
Q:
python beautifulsoup: how to find all before certain stop tag?
I need to find all tags of a certain kind (class "nice") but excluding those after a certain other tag (class "stop").
<div class="nice"></div>
<div class="nice"></div>
<div class="stop">here should be the end of found items</div>
<div class="nice"></div>
<div class="nice"></div>
How do I accomplish this using bs4?
I found this as a similar question but it appears a bit fuzzy.
A:
You can use for example .find_previous to filter out unwanted tags:
from bs4 import BeautifulSoup
html_doc = """\
<div class="nice">want 1</div>
<div class="nice">want 2</div>
<div class="stop">here should be the end of found items</div>
<div class="nice">do not want 1</div>
<div class="nice">do not want 2</div>"""
soup = BeautifulSoup(html_doc, "html.parser")
for div in soup.find_all("div", class_="nice"):
if div.find_previous("div", class_="stop"):
break
print(div)
Prints:
<div class="nice">want 1</div>
<div class="nice">want 2</div>
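Another option, assuming there is exactly one stop element, is to locate it first and walk backwards from it; find_all_previous returns the earlier tags in reverse document order, so the result is reversed here to restore the original order:
stop = soup.find("div", class_="stop")
wanted = list(reversed(stop.find_all_previous("div", class_="nice")))
for div in wanted:
    print(div)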
| python beautifulsoup: how to find all before certain stop tag? | I need to find all tags of a certain kind (class "nice") but excluding those after a certain other tag (class "stop").
<div class="nice"></div>
<div class="nice"></div>
<div class="stop">here should be the end of found items</div>
<div class="nice"></div>
<div class="nice"></div>
How do I accomplish this using bs4?
I found this as a similar question but it appears a bit fuzzy.
| [
"You can use for example .find_previous to filter out unwanted tags:\nfrom bs4 import BeautifulSoup\n\n\nhtml_doc = \"\"\"\\\n<div class=\"nice\">want 1</div>\n<div class=\"nice\">want 2</div>\n<div class=\"stop\">here should be the end of found items</div>\n<div class=\"nice\">do not want 1</div>\n<div class=\"nice\">do not want 2</div>\"\"\"\n\nsoup = BeautifulSoup(html_doc, \"html.parser\")\n\nfor div in soup.find_all(\"div\", class_=\"nice\"):\n if div.find_previous(\"div\", class_=\"stop\"):\n break\n print(div)\n\nPrints:\n<div class=\"nice\">want 1</div>\n<div class=\"nice\">want 2</div>\n\n"
] | [
1
] | [] | [] | [
"beautifulsoup",
"html",
"python"
] | stackoverflow_0074666897_beautifulsoup_html_python.txt |
Q:
I'm having trouble adding a scrollbar to my project that I developed with tkinter
import tkinter
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
from tkinter import *
import pandas as pd
from tkinter import ttk
from datetime import datetime
#tkinter
master = Tk()
master.title("Anket")
master.state('zoomed')
#new mainframe
frame = tkinter.Frame(master)
frame.pack()
#label inputs
Label(frame, text="Katılımcı Ad Soyad").grid(row=1, column=0)
entry2 = Entry(frame)
entry2.grid(row=1, column=1)
Label(frame, text="Katılımcı Yaş").grid(row=2, column=0)
entry3 = Entry(frame)
entry3.grid(row=2, column=1)
Label(frame, text="Eğitim").grid(row=3, column=0)
entry4 = Entry(frame)
entry4.grid(row=3, column=1)
tkinter.Label(frame, text="Önceden VR tecrübeniz var mıydı?").grid(row=4, column=0)
entry5 = tkinter.StringVar()
tkinter.Radiobutton(frame, text="Var", variable=entry5, value="Var").grid(row=4, column=1)
tkinter.Radiobutton(frame, text="Yok", variable=entry5, value="Yok").grid(row=4, column=2)
#label func
def griding_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
def griding_ipq_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
tkinter.Radiobutton(frame, text="6", variable=entry, value=6).grid(row=row, column=6)
def griding_ss_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="Hiçbiri", variable=entry, value="Hiçbiri").grid(row=row, column=1)
tkinter.Radiobutton(frame, text="Hafif", variable=entry, value="Hafif").grid(row=row, column=2)
tkinter.Radiobutton(frame, text="Orta", variable=entry, value="Orta").grid(row=row, column=3)
tkinter.Radiobutton(frame, text="Şiddetli", variable=entry, value="Şiddetli").grid(row=row, column=4)
def griding_tam_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
tkinter.Radiobutton(frame, text="6", variable=entry, value=6).grid(row=row, column=6)
tkinter.Radiobutton(frame, text="7", variable=entry, value=7).grid(row=row, column=7)
def griding_vas_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
tkinter.Radiobutton(frame, text="6", variable=entry, value=6).grid(row=row, column=6)
tkinter.Radiobutton(frame, text="7", variable=entry, value=7).grid(row=row, column=7)
tkinter.Radiobutton(frame, text="8", variable=entry, value=8).grid(row=row, column=8)
tkinter.Radiobutton(frame, text="9", variable=entry, value=9).grid(row=row, column=9)
tkinter.Radiobutton(frame, text="10", variable=entry, value=10).grid(row=row, column=10)
entry6 = tkinter.IntVar()
griding_questions("1. Bu sistemi sık sık kullanmak isterim.", 5, entry6)
entry7 = tkinter.IntVar()
griding_questions("2. Bu sistemi gereksiz yere karmaşık buldum.", 6, entry7)
entry8 = tkinter.IntVar()
griding_questions("3. Sistemin kullanımının kolay olduğunu düşündüm.", 7, entry8)
entry9 = tkinter.IntVar()
griding_questions("4. Bu sistemi kullanabilmek için teknik bir kişinin desteğine ihtiyacım olacağını düşünüyorum.", 8,
entry9)
entry10 = tkinter.IntVar()
griding_questions("5. Bu sistemdeki çeşitli fonksiyonların iyi bir şekilde entegre olduğunu gördüm.", 9, entry10)
entry11 = tkinter.IntVar()
griding_questions("6. Bu sistemde çok fazla tutarsızlık olduğunu düşündüm.", 10, entry11)
entry12 = tkinter.IntVar()
griding_questions("7. Çoğu insanın bu sistemi çok çabuk kullanmayı öğreneceğini hayal ediyorum.", 11, entry12)
entry13 = tkinter.IntVar()
griding_questions("8. Bu sistemi kullanmayı çok hantal (garip) buldum.", 12, entry13)
entry14 = tkinter.IntVar()
griding_questions("9. Bu sistemi kullanırken kendimi çok güvende hissettim.", 13, entry14)
entry15 = tkinter.IntVar()
griding_questions("10. Bu sisteme geçmeden önce çok şey öğrenmem gerekiyordu.", 14, entry15)
entry16 = tkinter.IntVar()
griding_ipq_questions("IPQ1. Bilgisayar tarafından oluşturulan dünyada bir \"orada olma\" duygusuna sahiptim.", 15,
entry16)
entry17 = tkinter.IntVar()
griding_ipq_questions("IPQ2. Bir şekilde sanal dünyanın etrafımı sardığını hissettim.", 16, entry17)
entry18 = tkinter.IntVar()
griding_ipq_questions("IPQ3. Sadece resimleri algılıyormuş gibi hissettim.", 17, entry18)
entry19 = tkinter.IntVar()
griding_ipq_questions("IPQ4. Sanal uzayda kendimi mevcut hissetmiyordum.", 18, entry19)
entry20 = tkinter.IntVar()
griding_ipq_questions("IPQ5. Dışarıdan bir şey çalıştırmak yerine sanal alanda hareket etme duygusu vardı.", 19,
entry20)
entry21 = tkinter.IntVar()
griding_ipq_questions("IPQ6. Sanal uzayda kendimi mevcut (oradaymış gibi) hissettim.", 20, entry21)
entry22 = tkinter.IntVar()
griding_ipq_questions(
"IPQ7. Sanal dünyada gezinirken etrafınızdaki gerçek dünyanın ne kadar farkındaydınız? (yani sesler, oda sıcaklığı, diğer insanlar vb.)?",
21, entry22)
entry23 = tkinter.IntVar()
griding_ipq_questions("IPQ8. Gerçek çevremin farkında değildim.", 22, entry23)
entry24 = tkinter.IntVar()
griding_ipq_questions("IPQ9. Yine de gerçek çevreye dikkat ettim.", 23, entry24)
entry25 = tkinter.IntVar()
griding_ipq_questions("IPQ10. Tamamen sanal dünyanın büyüsüne kapıldım.", 24, entry25)
entry26 = tkinter.IntVar()
griding_ipq_questions("IPQ11. Sanal dünya size ne kadar gerçek göründü?", 25, entry26)
entry27 = tkinter.IntVar()
griding_ipq_questions("IPQ12. Sanal ortamdaki deneyiminiz, gerçek dünya deneyiminizle ne kadar tutarlı görünüyordu?",
26, entry27)
entry28 = tkinter.IntVar()
griding_ipq_questions("IPQ13. Sanal dünya size ne kadar gerçek göründü?", 27, entry28)
entry29 = tkinter.IntVar()
griding_ipq_questions("IPQ14. Sanal dünya gerçek dünyadan daha gerçekçi görünüyordu.", 28, entry29)
entry30 = tkinter.StringVar()
griding_ss_questions("SSQ1. Genel rahatsızlık", 29, entry30)
entry31 = tkinter.StringVar()
griding_ss_questions("SSQ2. Tükenmişlik, yorgunluk", 30, entry31)
entry32 = tkinter.StringVar()
griding_ss_questions("SSQ3. Baş ağrısı", 31, entry32)
entry33 = tkinter.StringVar()
griding_ss_questions("SSQ4. Göz yorgunluğu", 32, entry33)
entry34 = tkinter.StringVar()
griding_ss_questions("SSQ5. Odaklanma zorluğu", 33, entry34)
entry35 = tkinter.StringVar()
griding_ss_questions("SSQ6. Artan tükürük", 34, entry35)
entry36 = tkinter.StringVar()
griding_ss_questions("SSQ7. Terleme", 35, entry36)
entry37 = tkinter.StringVar()
griding_ss_questions("SSQ8. Mide bulantısı", 36, entry37)
entry38 = tkinter.StringVar()
griding_ss_questions("SSQ9. Konsantrasyon bozukluğu", 37, entry38)
entry39 = tkinter.StringVar()
griding_ss_questions("SSQ10. Baş dolgunluğu", 38, entry39)
entry40 = tkinter.StringVar()
griding_ss_questions("SSQ11. Bulanık görme", 39, entry40)
entry41 = tkinter.StringVar()
griding_ss_questions("SSQ12. Baş dönmesi (gözler açık)", 40, entry41)
entry42 = tkinter.StringVar()
griding_ss_questions("SSQ13. Baş dönmesi (gözler kapalı)", 41, entry42)
entry43 = tkinter.StringVar()
griding_ss_questions("SSQ14. Vertigo, kontrol kaybı", 42, entry43)
entry44 = tkinter.StringVar()
griding_ss_questions("SSQ15. Mide farkındalığı", 43, entry44)
entry45 = tkinter.StringVar()
griding_ss_questions("SSQ16. Geğirme", 44, entry45)
entry46 = tkinter.IntVar()
griding_tam_questions("TAM1. VR_Locomotion kullanmak, görevleri daha hızlı tamamlamamı sağladı.", 45, entry46)
entry47 = tkinter.IntVar()
griding_tam_questions("TAM2. VR_Locomotion kullanmak iş performansımı iyileştirdi.", 46, entry47)
entry48 = tkinter.IntVar()
griding_tam_questions("TAM3. VR_Locomotion kullanmak üretkenliğimi artırdı.", 47, entry48)
entry49 = tkinter.IntVar()
griding_tam_questions("TAM4. VR_Locomotion kullanmak etkinliğimi artırdı.", 48, entry49)
entry50 = tkinter.IntVar()
griding_tam_questions("TAM5. VR_Locomotion kullanmak, onunla yapmam gereken şeyleri yapmayı kolaylaştırdı.", 49,
entry50)
entry51 = tkinter.IntVar()
griding_tam_questions("TAM6. VR_Locomotion'u faydalı buldum.", 50, entry51)
entry52 = tkinter.IntVar()
griding_tam_questions("TAM7. VR_Locomotion'u kullanmayı öğrenmek kolaydı.", 51, entry52)
entry53 = tkinter.IntVar()
griding_tam_questions("TAM8. VR_Locomotion'un yapmasını istediğim şeyi yapmasını kolay buldum.", 52, entry53)
entry54 = tkinter.IntVar()
griding_tam_questions("TAM9. VR_Locomotion ile etkileşimim açık ve anlaşılırdı.", 53, entry54)
entry55 = tkinter.IntVar()
griding_tam_questions("TAM 10. VR_Locomotion ile esnek bir etkileşim kurdum.", 54, entry55)
entry56 = tkinter.IntVar()
griding_tam_questions("TAM11. VR_Locomotion kullanmakta ustalaşmak benim için kolaydı.", 55, entry56)
entry57 = tkinter.IntVar()
griding_tam_questions("TAM12. VR_Locomotion'un kullanımını kolay buldum.", 56, entry57)
entry58 = tkinter.IntVar()
griding_tam_questions("UMUX1. VR_Locomotion'ın yetenekleri gereksinimlerimi karşılıyor.", 57, entry58)
entry59 = tkinter.IntVar()
griding_tam_questions("UMUX2. VR_Locomotion'u kullanmak sinir bozucu bir deneyimdir.", 58, entry59)
entry60 = tkinter.IntVar()
griding_tam_questions("UMUX3. VR_Locomotion'un kullanımı kolaydır.", 59, entry60)
entry61 = tkinter.IntVar()
griding_tam_questions("UMUX4. VR_Locomotion ile bir şeyleri düzeltmek için çok fazla zaman harcamak zorundayım.", 60,
entry61)
entry62 = tkinter.IntVar()
griding_vas_questions("VAS1: (Kendi kendine hareket) Tüm vücudumun ileriye doğru hareket ettiğini hissettim.", 61,
entry62)
entry63 = tkinter.IntVar()
griding_vas_questions("VAS2: (Yürüme hissi) İleriye doğru yürüyormuş gibi hissettim.", 62, entry63)
entry64 = tkinter.IntVar()
griding_vas_questions("VAS3: (Bacak hareketi) Ayaklarım yere çarpıyormuş gibi hissettim.", 63, entry64)
entry65 = tkinter.IntVar()
griding_vas_questions(
"VAS4 : Olay yerinde varmışım gibi hissettim (kişinin gerçek konumunun dışında bir yerde varmış gibi "
"hissetmesi) .",
64, entry65)
Label(frame, text="E-posta Adresi").grid(row=65, column=0)
entry66 = Entry(frame)
entry66.grid(row=65, column=1)
#quit and submit
Button(frame, text='Quit', command=frame.quit).grid(row=5, column=15, pady=4)
Button(frame, text='Submit', command=submit_fields).grid(row=8, column=15, pady=4)
#mainloop
mainloop()
I cannot use pack in places where grid is used, or grid in places where pack is used. I searched the internet for a solution and couldn't find much. Adding a canvas is problematic: it requires me to add an extra text widget, tree frame, etc. inside the frame. Sometimes I can add it with some methods, but this time it doesn't scroll. I'm stuck.
A:
I did not test it. Use tkinter.tix.ScrolledWindow.
from tkinter.tix import *
:
:
:
#add this between line 16 to 21.
#new mainframe
frame = tkinter.Frame(master)
frame.pack()
swin = ScrolledWindow(frame, width=500, height=500)
swin.pack()
#label inputs
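If tkinter.tix is not available (it is deprecated in recent Python versions), a plain Canvas plus Scrollbar is a common alternative. This is an untested sketch of the idea; the question's widgets would then be gridded into inner_frame instead of frame:
canvas = tkinter.Canvas(master)
scrollbar = tkinter.Scrollbar(master, orient="vertical", command=canvas.yview)
inner_frame = tkinter.Frame(canvas)
# keep the scrollable region in sync with the inner frame's size
inner_frame.bind("<Configure>", lambda e: canvas.configure(scrollregion=canvas.bbox("all")))
canvas.create_window((0, 0), window=inner_frame, anchor="nw")
canvas.configure(yscrollcommand=scrollbar.set)
canvas.pack(side="left", fill="both", expand=True)
scrollbar.pack(side="right", fill="y")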
| I'm having trouble adding a scrollbar to my project that I developed with tkinter | `
import tkinter
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
from tkinter import *
import pandas as pd
from tkinter import ttk
from datetime import datetime
#tkinter
master = Tk()
master.title("Anket")
master.state('zoomed')
#new mainframe
frame = tkinter.Frame(master)
frame.pack()
#label inputs
Label(frame, text="Katılımcı Ad Soyad").grid(row=1, column=0)
entry2 = Entry(frame)
entry2.grid(row=1, column=1)
Label(frame, text="Katılımcı Yaş").grid(row=2, column=0)
entry3 = Entry(frame)
entry3.grid(row=2, column=1)
Label(frame, text="Eğitim").grid(row=3, column=0)
entry4 = Entry(frame)
entry4.grid(row=3, column=1)
tkinter.Label(frame, text="Önceden VR tecrübeniz var mıydı?").grid(row=4, column=0)
entry5 = tkinter.StringVar()
tkinter.Radiobutton(frame, text="Var", variable=entry5, value="Var").grid(row=4, column=1)
tkinter.Radiobutton(frame, text="Yok", variable=entry5, value="Yok").grid(row=4, column=2)
#label func
def griding_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
def griding_ipq_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
tkinter.Radiobutton(frame, text="6", variable=entry, value=6).grid(row=row, column=6)
def griding_ss_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="Hiçbiri", variable=entry, value="Hiçbiri").grid(row=row, column=1)
tkinter.Radiobutton(frame, text="Hafif", variable=entry, value="Hafif").grid(row=row, column=2)
tkinter.Radiobutton(frame, text="Orta", variable=entry, value="Orta").grid(row=row, column=3)
tkinter.Radiobutton(frame, text="Şiddetli", variable=entry, value="Şiddetli").grid(row=row, column=4)
def griding_tam_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
tkinter.Radiobutton(frame, text="6", variable=entry, value=6).grid(row=row, column=6)
tkinter.Radiobutton(frame, text="7", variable=entry, value=7).grid(row=row, column=7)
def griding_vas_questions(text, row, entry):
tkinter.Label(frame, text=text).grid(row=row, column=0)
tkinter.Radiobutton(frame, text="1", variable=entry, value=1).grid(row=row, column=1)
tkinter.Radiobutton(frame, text="2", variable=entry, value=2).grid(row=row, column=2)
tkinter.Radiobutton(frame, text="3", variable=entry, value=3).grid(row=row, column=3)
tkinter.Radiobutton(frame, text="4", variable=entry, value=4).grid(row=row, column=4)
tkinter.Radiobutton(frame, text="5", variable=entry, value=5).grid(row=row, column=5)
tkinter.Radiobutton(frame, text="6", variable=entry, value=6).grid(row=row, column=6)
tkinter.Radiobutton(frame, text="7", variable=entry, value=7).grid(row=row, column=7)
tkinter.Radiobutton(frame, text="8", variable=entry, value=8).grid(row=row, column=8)
tkinter.Radiobutton(frame, text="9", variable=entry, value=9).grid(row=row, column=9)
tkinter.Radiobutton(frame, text="10", variable=entry, value=10).grid(row=row, column=10)
entry6 = tkinter.IntVar()
griding_questions("1. Bu sistemi sık sık kullanmak isterim.", 5, entry6)
entry7 = tkinter.IntVar()
griding_questions("2. Bu sistemi gereksiz yere karmaşık buldum.", 6, entry7)
entry8 = tkinter.IntVar()
griding_questions("3. Sistemin kullanımının kolay olduğunu düşündüm.", 7, entry8)
entry9 = tkinter.IntVar()
griding_questions("4. Bu sistemi kullanabilmek için teknik bir kişinin desteğine ihtiyacım olacağını düşünüyorum.", 8,
entry9)
entry10 = tkinter.IntVar()
griding_questions("5. Bu sistemdeki çeşitli fonksiyonların iyi bir şekilde entegre olduğunu gördüm.", 9, entry10)
entry11 = tkinter.IntVar()
griding_questions("6. Bu sistemde çok fazla tutarsızlık olduğunu düşündüm.", 10, entry11)
entry12 = tkinter.IntVar()
griding_questions("7. Çoğu insanın bu sistemi çok çabuk kullanmayı öğreneceğini hayal ediyorum.", 11, entry12)
entry13 = tkinter.IntVar()
griding_questions("8. Bu sistemi kullanmayı çok hantal (garip) buldum.", 12, entry13)
entry14 = tkinter.IntVar()
griding_questions("9. Bu sistemi kullanırken kendimi çok güvende hissettim.", 13, entry14)
entry15 = tkinter.IntVar()
griding_questions("10. Bu sisteme geçmeden önce çok şey öğrenmem gerekiyordu.", 14, entry15)
entry16 = tkinter.IntVar()
griding_ipq_questions("IPQ1. Bilgisayar tarafından oluşturulan dünyada bir \"orada olma\" duygusuna sahiptim.", 15,
entry16)
entry17 = tkinter.IntVar()
griding_ipq_questions("IPQ2. Bir şekilde sanal dünyanın etrafımı sardığını hissettim.", 16, entry17)
entry18 = tkinter.IntVar()
griding_ipq_questions("IPQ3. Sadece resimleri algılıyormuş gibi hissettim.", 17, entry18)
entry19 = tkinter.IntVar()
griding_ipq_questions("IPQ4. Sanal uzayda kendimi mevcut hissetmiyordum.", 18, entry19)
entry20 = tkinter.IntVar()
griding_ipq_questions("IPQ5. Dışarıdan bir şey çalıştırmak yerine sanal alanda hareket etme duygusu vardı.", 19,
entry20)
entry21 = tkinter.IntVar()
griding_ipq_questions("IPQ6. Sanal uzayda kendimi mevcut (oradaymış gibi) hissettim.", 20, entry21)
entry22 = tkinter.IntVar()
griding_ipq_questions(
"IPQ7. Sanal dünyada gezinirken etrafınızdaki gerçek dünyanın ne kadar farkındaydınız? (yani sesler, oda sıcaklığı, diğer insanlar vb.)?",
21, entry22)
entry23 = tkinter.IntVar()
griding_ipq_questions("IPQ8. Gerçek çevremin farkında değildim.", 22, entry23)
entry24 = tkinter.IntVar()
griding_ipq_questions("IPQ9. Yine de gerçek çevreye dikkat ettim.", 23, entry24)
entry25 = tkinter.IntVar()
griding_ipq_questions("IPQ10. Tamamen sanal dünyanın büyüsüne kapıldım.", 24, entry25)
entry26 = tkinter.IntVar()
griding_ipq_questions("IPQ11. Sanal dünya size ne kadar gerçek göründü?", 25, entry26)
entry27 = tkinter.IntVar()
griding_ipq_questions("IPQ12. Sanal ortamdaki deneyiminiz, gerçek dünya deneyiminizle ne kadar tutarlı görünüyordu?",
26, entry27)
entry28 = tkinter.IntVar()
griding_ipq_questions("IPQ13. Sanal dünya size ne kadar gerçek göründü?", 27, entry28)
entry29 = tkinter.IntVar()
griding_ipq_questions("IPQ14. Sanal dünya gerçek dünyadan daha gerçekçi görünüyordu.", 28, entry29)
entry30 = tkinter.StringVar()
griding_ss_questions("SSQ1. Genel rahatsızlık", 29, entry30)
entry31 = tkinter.StringVar()
griding_ss_questions("SSQ2. Tükenmişlik, yorgunluk", 30, entry31)
entry32 = tkinter.StringVar()
griding_ss_questions("SSQ3. Baş ağrısı", 31, entry32)
entry33 = tkinter.StringVar()
griding_ss_questions("SSQ4. Göz yorgunluğu", 32, entry33)
entry34 = tkinter.StringVar()
griding_ss_questions("SSQ5. Odaklanma zorluğu", 33, entry34)
entry35 = tkinter.StringVar()
griding_ss_questions("SSQ6. Artan tükürük", 34, entry35)
entry36 = tkinter.StringVar()
griding_ss_questions("SSQ7. Terleme", 35, entry36)
entry37 = tkinter.StringVar()
griding_ss_questions("SSQ8. Mide bulantısı", 36, entry37)
entry38 = tkinter.StringVar()
griding_ss_questions("SSQ9. Konsantrasyon bozukluğu", 37, entry38)
entry39 = tkinter.StringVar()
griding_ss_questions("SSQ10. Baş dolgunluğu", 38, entry39)
entry40 = tkinter.StringVar()
griding_ss_questions("SSQ11. Bulanık görme", 39, entry40)
entry41 = tkinter.StringVar()
griding_ss_questions("SSQ12. Baş dönmesi (gözler açık)", 40, entry41)
entry42 = tkinter.StringVar()
griding_ss_questions("SSQ13. Baş dönmesi (gözler kapalı)", 41, entry42)
entry43 = tkinter.StringVar()
griding_ss_questions("SSQ14. Vertigo, kontrol kaybı", 42, entry43)
entry44 = tkinter.StringVar()
griding_ss_questions("SSQ15. Mide farkındalığı", 43, entry44)
entry45 = tkinter.StringVar()
griding_ss_questions("SSQ16. Geğirme", 44, entry45)
entry46 = tkinter.IntVar()
griding_tam_questions("TAM1. VR_Locomotion kullanmak, görevleri daha hızlı tamamlamamı sağladı.", 45, entry46)
entry47 = tkinter.IntVar()
griding_tam_questions("TAM2. VR_Locomotion kullanmak iş performansımı iyileştirdi.", 46, entry47)
entry48 = tkinter.IntVar()
griding_tam_questions("TAM3. VR_Locomotion kullanmak üretkenliğimi artırdı.", 47, entry48)
entry49 = tkinter.IntVar()
griding_tam_questions("TAM4. VR_Locomotion kullanmak etkinliğimi artırdı.", 48, entry49)
entry50 = tkinter.IntVar()
griding_tam_questions("TAM5. VR_Locomotion kullanmak, onunla yapmam gereken şeyleri yapmayı kolaylaştırdı.", 49,
entry50)
entry51 = tkinter.IntVar()
griding_tam_questions("TAM6. VR_Locomotion'u faydalı buldum.", 50, entry51)
entry52 = tkinter.IntVar()
griding_tam_questions("TAM7. VR_Locomotion'u kullanmayı öğrenmek kolaydı.", 51, entry52)
entry53 = tkinter.IntVar()
griding_tam_questions("TAM8. VR_Locomotion'un yapmasını istediğim şeyi yapmasını kolay buldum.", 52, entry53)
entry54 = tkinter.IntVar()
griding_tam_questions("TAM9. VR_Locomotion ile etkileşimim açık ve anlaşılırdı.", 53, entry54)
entry55 = tkinter.IntVar()
griding_tam_questions("TAM 10. VR_Locomotion ile esnek bir etkileşim kurdum.", 54, entry55)
entry56 = tkinter.IntVar()
griding_tam_questions("TAM11. VR_Locomotion kullanmakta ustalaşmak benim için kolaydı.", 55, entry56)
entry57 = tkinter.IntVar()
griding_tam_questions("TAM12. VR_Locomotion'un kullanımını kolay buldum.", 56, entry57)
entry58 = tkinter.IntVar()
griding_tam_questions("UMUX1. VR_Locomotion'ın yetenekleri gereksinimlerimi karşılıyor.", 57, entry58)
entry59 = tkinter.IntVar()
griding_tam_questions("UMUX2. VR_Locomotion'u kullanmak sinir bozucu bir deneyimdir.", 58, entry59)
entry60 = tkinter.IntVar()
griding_tam_questions("UMUX3. VR_Locomotion'un kullanımı kolaydır.", 59, entry60)
entry61 = tkinter.IntVar()
griding_tam_questions("UMUX4. VR_Locomotion ile bir şeyleri düzeltmek için çok fazla zaman harcamak zorundayım.", 60,
entry61)
entry62 = tkinter.IntVar()
griding_vas_questions("VAS1: (Kendi kendine hareket) Tüm vücudumun ileriye doğru hareket ettiğini hissettim.", 61,
entry62)
entry63 = tkinter.IntVar()
griding_vas_questions("VAS2: (Yürüme hissi) İleriye doğru yürüyormuş gibi hissettim.", 62, entry63)
entry64 = tkinter.IntVar()
griding_vas_questions("VAS3: (Bacak hareketi) Ayaklarım yere çarpıyormuş gibi hissettim.", 63, entry64)
entry65 = tkinter.IntVar()
griding_vas_questions(
"VAS4 : Olay yerinde varmışım gibi hissettim (kişinin gerçek konumunun dışında bir yerde varmış gibi "
"hissetmesi) .",
64, entry65)
Label(frame, text="E-posta Adresi").grid(row=65, column=0)
entry66 = Entry(frame)
entry66.grid(row=65, column=1)
#quit and submit
Button(frame, text='Quit', command=frame.quit).grid(row=5, column=15, pady=4)
Button(frame, text='Submit', command=submit_fields).grid(row=8, column=15, pady=4)
#mainloop
mainloop()
I cannot use pack in places where grid is used, or grid in places where pack is used. I searched the internet for a solution and couldn't find much. Adding a canvas is problematic: it requires me to add an extra text widget, tree frame, etc. inside the frame. Sometimes I can add it with some methods, but this time it doesn't scroll. I'm stuck.
| [
"I did not test it. Use tkinter.tix.ScrolledWindow.\nfrom tkinter.tix import *\n:\n:\n:\n#add this between line 16 to 21.\n#new mainframe\nframe = tkinter.Frame(master)\nframe.pack()\n\nswin = ScrolledWindow(frame, width=500, height=500)\nswin.pack()\n\n#label inputs\n\n"
] | [
0
] | [] | [] | [
"python",
"scrollbar",
"tkinter",
"tkinter_canvas"
] | stackoverflow_0074665863_python_scrollbar_tkinter_tkinter_canvas.txt |
Q:
tkinter.place() not working and window still blank
I have a problem with tkinter.place, why it is not working?
class KafeDaun(tk.Frame):
def __init__(self, master = None):
super().__init__(master)
self.master.title("Kafe Daun-Daun Pacilkom v2.0 ")
self.master.geometry("500x300")
self.master.configure(bg="grey")
self.create_widgets()
self.pack()
def create_widgets(self):
self.btn_buat_pesanan = tk.Button(self, text = "Buat Pesanan", width = 20)
self.btn_buat_pesanan.place(x = 250, y = 100)
self.btn_meja = tk.Button(self, text = "Selesai Gunakan Meja", width = 20)
I still get a blank frame even though I already use tkinter.place on btn_buat_pesanan.
I expect the button to appear at the exact location, like when using tkinter.pack() or tkinter.grid(). Do you have any suggestions?
A:
Try this.
You have to pack the frame like this: self.pack(fill="both", expand=True). Because place does not change the parent's size, the frame wasn't visible before.
import tkinter as tk
class KafeDaun(tk.Frame):
def __init__(self, master = None):
super().__init__(master)
self.master.title("Kafe Daun-Daun Pacilkom v2.0 ")
self.master.geometry("500x300")
self.master.configure(bg="grey")
self.create_widgets()
self.pack(fill="both", expand=True)
def create_widgets(self):
self.btn_buat_pesanan = tk.Button(self, text = "Buat Pesanan", width = 20)
self.btn_buat_pesanan.place(x = 250, y = 100)
self.btn_meja = tk.Button(self, text = "Selesai Gunakan Meja", width = 20)
app =tk.Tk()
s = KafeDaun(app)
app.mainloop()
Or you can set the width and height of the frame. super().__init__(master, width=<width>, height=<height>)
| tkinter.place() not working and window still blank | I have a problem with tkinter.place, why it is not working?
class KafeDaun(tk.Frame):
def __init__(self, master = None):
super().__init__(master)
self.master.title("Kafe Daun-Daun Pacilkom v2.0 ")
self.master.geometry("500x300")
self.master.configure(bg="grey")
self.create_widgets()
self.pack()
def create_widgets(self):
self.btn_buat_pesanan = tk.Button(self, text = "Buat Pesanan", width = 20)
self.btn_buat_pesanan.place(x = 250, y = 100)
self.btn_meja = tk.Button(self, text = "Selesai Gunakan Meja", width = 20)
I still get a blank frame even though I already use tkinter.place on btn_buat_pesanan.
I expect the button to appear at the exact location, like when using tkinter.pack() or tkinter.grid(). Do you have any suggestions?
| [
"Try this.\nYou have to pack the frame like this self.pack(fill=\"both\", expand=True). Because the place did not change the parent size, that's why it didn't visible\nimport tkinter as tk\nclass KafeDaun(tk.Frame):\n def __init__(self, master = None):\n super().__init__(master)\n self.master.title(\"Kafe Daun-Daun Pacilkom v2.0 \")\n self.master.geometry(\"500x300\")\n self.master.configure(bg=\"grey\")\n self.create_widgets()\n self.pack(fill=\"both\", expand=True)\n\n def create_widgets(self):\n self.btn_buat_pesanan = tk.Button(self, text = \"Buat Pesanan\", width = 20)\n self.btn_buat_pesanan.place(x = 250, y = 100)\n\n self.btn_meja = tk.Button(self, text = \"Selesai Gunakan Meja\", width = 20)\napp =tk.Tk()\n\n\ns = KafeDaun(app)\napp.mainloop()\n\nOr you can set the width and height of the frame. super().__init__(master, width=<width>, height=<height>)\n"
] | [
1
] | [] | [] | [
"methods",
"python",
"tkinter",
"tkinter_button",
"tkinter_canvas"
] | stackoverflow_0074666863_methods_python_tkinter_tkinter_button_tkinter_canvas.txt |
Q:
Plotly combine timeline on one line into subplots
I am trying to put a px.timeline into a subplot, but my timeline format changes.
import pandas as pd
import plotly.express as px
import plotly.subplots as sp
df1 = pd.DataFrame([
dict(unit='MVT',Task="Job A", Start='2009-01-01', Finish='2009-02-28'),
dict(unit='MVT',Task="Job B", Start='2009-02-28', Finish='2009-04-15'),
dict(unit='MVT',Task="Job A", Start='2009-04-15', Finish='2009-05-30')
])
df2 = pd.DataFrame([
dict(unit='MVT',Task="Job A", Start='2009-01-15', Finish='2009-02-15'),
dict(unit='MVT',Task="Job B", Start='2009-02-15', Finish='2009-04-28'),
dict(unit='MVT',Task="Job A", Start='2009-04-28', Finish='2009-05-30')
])
fig1 = px.timeline(df1, x_start="Start", x_end="Finish", y="unit",color="Task")
fig2 = px.timeline(df2, x_start="Start", x_end="Finish", y="unit",color="Task")
fig_sub = sp.make_subplots(rows=2)
for i in range(0, len(fig1['data'])):
    fig_sub.append_trace(fig1['data'][i], row=1, col=1)
for i in range(0, len(fig2['data'])):
    fig_sub.append_trace(fig2['data'][i], row=2, col=1)
fig_sub.update_xaxes(type='date')
My fig1 looks like this:
but once placed in the subplot I get this:
Any idea how to fix it? Thanks.
A:
I found it: we need to add
fig_sub.update_layout(barmode="overlay") 
because by default in subplots the bars are put in barmode="group".
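So with the subplot code from the question, the fix is a single extra line before rendering (a sketch, assuming fig_sub is built as above):
fig_sub.update_layout(barmode="overlay")
fig_sub.show()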
| Plotly combine timeline on one line into subplots | I am trying to put a px.timeline into a subplot, but my timeline format changes.
import pandas as pd
import plotly.express as px
import plotly.subplots as sp
df1 = pd.DataFrame([
dict(unit='MVT',Task="Job A", Start='2009-01-01', Finish='2009-02-28'),
dict(unit='MVT',Task="Job B", Start='2009-02-28', Finish='2009-04-15'),
dict(unit='MVT',Task="Job A", Start='2009-04-15', Finish='2009-05-30')
])
df2 = pd.DataFrame([
dict(unit='MVT',Task="Job A", Start='2009-01-15', Finish='2009-02-15'),
dict(unit='MVT',Task="Job B", Start='2009-02-15', Finish='2009-04-28'),
dict(unit='MVT',Task="Job A", Start='2009-04-28', Finish='2009-05-30')
])
fig1 = px.timeline(df1, x_start="Start", x_end="Finish", y="unit",color="Task")
fig2 = px.timeline(df2, x_start="Start", x_end="Finish", y="unit",color="Task")
fig_sub = sp.make_subplots(rows=2)
for i in range(0,len(fig1['data'])):
fig_sub.append_trace(fig1['data'][i], row=1, col=1)
for i in range(0,len(fig2['data'])):
fig_sub.append_trace(fig2['data'][i], row=2, col=1)
fig_sub.update_xaxes(type='date')
My fig1 looks like this
but once in the subplot I get this
Any idea of how to fix it? Thanks
| [
"I found it, we need to add\nfig_sub.update_layout(barmode=\"overlay\") \n\nby default in sub_plots it is put in barmode=\"group\"\n"
] | [
1
] | [] | [] | [
"plotly",
"python",
"subplot"
] | stackoverflow_0074666793_plotly_python_subplot.txt |
Q:
How to improve the knn model?
I built a knn model for classification. Unfortunately, my model has accuracy > 80%, and I would like to get a better result. Can I ask for some tips? Maybe I used too many predictors?
My data = https://www.openml.org/search?type=data&sort=runs&id=53&status=active
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV
heart_disease = pd.read_csv('heart_disease.csv', sep=';', decimal=',')
y = heart_disease['heart_disease']
X = heart_disease.drop(["heart_disease"], axis=1)
correlation_matrix = heart_disease.corr()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
scaler = MinMaxScaler(feature_range=(-1,1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
knn_3 = KNeighborsClassifier(3, n_jobs = -1)
knn_3.fit(X_train, y_train)
y_train_pred = knn_3.predict(X_train)
labels = ['0', '1']
print('Training set')
print(pd.DataFrame(confusion_matrix(y_train, y_train_pred), index = labels, columns = labels))
print(accuracy_score(y_train, y_train_pred))
print(f1_score(y_train, y_train_pred))
y_test_pred = knn_3.predict(X_test)
print('Test set')
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred), index = labels, columns = labels))
print(accuracy_score(y_test, y_test_pred))
print(f1_score(y_test, y_test_pred))
hyperparameters = {'n_neighbors' : range(1, 15), 'weights': ['uniform','distance']}
knn_best = GridSearchCV(KNeighborsClassifier(), hyperparameters, n_jobs = -1, error_score = 'raise')
knn_best.fit(X_train,y_train)
knn_best.best_params_
y_train_pred_best = knn_best.predict(X_train)
y_test_pred_best = knn_best.predict(X_test)
print('Training set')
print(pd.DataFrame(confusion_matrix(y_train, y_train_pred_best), index = labels, columns = labels))
print(accuracy_score(y_train, y_train_pred_best))
print(f1_score(y_train, y_train_pred_best))
print('Test set')
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred_best), index = labels, columns = labels))
print(accuracy_score(y_test, y_test_pred_best))
print(f1_score(y_test, y_test_pred_best))
A:
There are a few things you can try to improve the accuracy of your KNN model.
First, you can try tuning the hyperparameters of your model, such as the number of nearest neighbors to consider or the distance metric used to measure the similarity between points.
To tune the hyperparameters of your KNN model, you can use techniques like grid search or cross-validation to try different combinations of hyperparameters and find the combination that works best for your data.
You can also try preprocessing your data to make it more suitable for KNN. For example, you can try reducing the dimensionality of the data using techniques like principal component analysis (PCA). This can help to remove redundancies in your data and reduce the number of dimensions, which can make it easier for KNN to find the nearest neighbors.
Additionally, you can try using a different classification algorithm altogether, such as logistic regression or a decision tree. These algorithms may be better suited to your data and can potentially yield better results than KNN.
Another thing you can try is using an ensemble method, such as bagging or boosting, to combine multiple KNN models and potentially improve their accuracy. Ensemble methods can be effective at reducing overfitting and improving the generalizability of your model.
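For illustration, a minimal sketch combining the PCA and grid-search ideas in one scikit-learn pipeline (the n_components values are placeholders to adapt to your data; X_train/X_test are the scaled splits from the question):
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA

pipe = Pipeline([("pca", PCA()), ("knn", KNeighborsClassifier())])
param_grid = {
    "pca__n_components": [3, 5, 8],        # placeholder values
    "knn__n_neighbors": range(1, 15),
    "knn__weights": ["uniform", "distance"],
}
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
print(search.score(X_test, y_test))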
A:
Just a small addition: to find the best value for n_neighbors, plot the test error for each k.
import numpy as np
import matplotlib.pyplot as plt

errlist = [] # an error list to append to
for i in range(1,40): # try k from 1 to 39
    knn_i = KNeighborsClassifier(n_neighbors=i)
    knn_i.fit(X_train,y_train)
    errlist.append(np.mean(knn_i.predict(X_test)!=y_test)) # append the mean error rate (share of wrong predictions)
plot a line to see the best n_neighbors:
plt.plot(range(1,40),errlist)
feel free to change the numbers for range.
| How to improve the knn model? | I built a knn model for classification. Unfortunately, my model has accuracy > 80%, and I would like to get a better result. Can I ask for some tips? Maybe I used too many predictors?
My data = https://www.openml.org/search?type=data&sort=runs&id=53&status=active
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV
heart_disease = pd.read_csv('heart_disease.csv', sep=';', decimal=',')
y = heart_disease['heart_disease']
X = heart_disease.drop(["heart_disease"], axis=1)
correlation_matrix = heart_disease.corr()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
scaler = MinMaxScaler(feature_range=(-1,1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
knn_3 = KNeighborsClassifier(3, n_jobs = -1)
knn_3.fit(X_train, y_train)
y_train_pred = knn_3.predict(X_train)
labels = ['0', '1']
print('Training set')
print(pd.DataFrame(confusion_matrix(y_train, y_train_pred), index = labels, columns = labels))
print(accuracy_score(y_train, y_train_pred))
print(f1_score(y_train, y_train_pred))
y_test_pred = knn_3.predict(X_test)
print('Test set')
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred), index = labels, columns = labels))
print(accuracy_score(y_test, y_test_pred))
print(f1_score(y_test, y_test_pred))
hyperparameters = {'n_neighbors' : range(1, 15), 'weights': ['uniform','distance']}
knn_best = GridSearchCV(KNeighborsClassifier(), hyperparameters, n_jobs = -1, error_score = 'raise')
knn_best.fit(X_train,y_train)
knn_best.best_params_
y_train_pred_best = knn_best.predict(X_train)
y_test_pred_best = knn_best.predict(X_test)
print('Training set')
print(pd.DataFrame(confusion_matrix(y_train, y_train_pred_best), index = labels, columns = labels))
print(accuracy_score(y_train, y_train_pred_best))
print(f1_score(y_train, y_train_pred_best))
print('Test set')
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred_best), index = labels, columns = labels))
print(accuracy_score(y_test, y_test_pred_best))
print(f1_score(y_test, y_test_pred_best))
| [
"There are a few things you can try to improve the accuracy of your KNN model.\nFirst, you can try tuning the hyperparameters of your model, such as the number of nearest neighbors to consider or the distance metric used to measure the similarity between points.\nTo tune the hyperparameters of your KNN model, you can use techniques like grid search or cross-validation to try different combinations of hyperparameters and find the combination that works best for your data.\nYou can also try preprocessing your data to make it more suitable for KNN. For example, you can try reducing the dimensionality of the data using techniques like principal component analysis (PCA). This can help to remove redundancies in your data and reduce the number of dimensions, which can make it easier for KNN to find the nearest neighbors.\nAdditionally, you can try using a different classification algorithm altogether, such as logistic regression or a decision tree. These algorithms may be better suited to your data and can potentially yield better results than KNN.\nAnother thing you can try is using an ensemble method, such as bagging or boosting, to combine multiple KNN models and potentially improve their accuracy. Ensemble methods can be effective at reducing overfitting and improving the generalizability of your model.\n",
"Just a little part of answer, to find the best number for k_neighbors.\nerrlist = [] #an error list to append\nfor i in range(1,40): #from 0-40 numbers to use in k_neighbors\n knn_i = KNeighborsClassifier(k_neighbors=i)\n knn_i.fit(X_train,y_train)\n errlist.append(np.mean(knn_i.predict(X_test)!=y_test)) # append the mean of failed-predict numbers\n\nplot a line to see best k_neighbors:\nplt.plot(range(1,40),errlist)\n\nfeel free to change the numbers for range.\n"
] | [
2,
1
] | [] | [] | [
"knn",
"machine_learning",
"python",
"scikit_learn"
] | stackoverflow_0074666866_knn_machine_learning_python_scikit_learn.txt |
Q:
How can I print the numbers as elements of a list without the quotes, and the square brackets should be there?
The result should have square brackets enclosing the elements of the list, which are numbers; these numbers should not be enclosed in quotes.
I tried to do so with the split function and a for loop but was not able to get my desired result. I am expecting the answer.
A:
You can unpack all list elements into the print() function to print all values individually, separated by an empty space per default (that you can override using the sep argument). For example, the expression print(*my_list) prints the elements in my_list, empty space separated, without the enclosing square brackets and without the separating commas!
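For example (a quick sketch):
my_list = [1, 2, 3]
print(*my_list)             # 1 2 3
print(*my_list, sep=", ")   # 1, 2, 3
print("[", *my_list, "]")   # [ 1 2 3 ]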
A:
You can use the join() method to print the list as a string, with the square brackets and commas between the elements, but without the quotation marks:
my_list = [1, 2, 3]
print('[{}]'.format(', '.join(str(x) for x in my_list)))
# Output: [1, 2, 3]
| How can I print the numbers as elements of a list without the quotes, and the square brackets should be there? | The result should have square brackets enclosing the elements of the list, which are numbers; these numbers should not be enclosed in quotes.
I tried to do so with the split function and a for loop but was not able to get my desired result. I am expecting the answer.
| [
"You can unpack all list elements into the print() function to print all values individually, separated by an empty space per default (that you can override using the sep argument). For example, the expression print(*my_list) prints the elements in my_list, empty space separated, without the enclosing square brackets and without the separating commas!\n",
"You can use the join() method to print the list as a string, with the square brackets and commas between the elements, but without the quotation marks:\nmy_list = [1, 2, 3]\nprint('[{}]'.format(', '.join(str(x) for x in my_list)))\n\n# Output: [1, 2, 3]\n\n"
] | [
0,
0
] | [] | [] | [
"function",
"input",
"list",
"output",
"python"
] | stackoverflow_0074666568_function_input_list_output_python.txt |
Q:
aws glue job: best practice for new data as it comes in?
I'm new to AWS and Glue.
I have a glue job that uses a python script to convert a data source into a json formatted file. The new data is sent to us on a monthly basis and so my thought was to trigger the glue job to run every time the data was added to our s3 bucket.
I have the job set up to overwrite the file every time it runs, but it would be nice to capture the differences between the monthly files so that I can keep the historical info.
Here is the output of the code:
s3.put_object(Body=output_file, Bucket='mys3', Key='outputfile.json')
Could a crawler help with keeping track of the history? Like if could I crawl for new data only and then store it somewhere?
For my outputs I am viewing them in Athena, but maybe I should start compiling this data to a database on its own ?
Thanks in advance for any inputs!
A:
What I would suggest to you is to partition the data. Based on what you've said, you get the data on a monthly basis.
An S3 key represents the path to the file in an S3 bucket. In your example, outputfile.json is a top-level object in your S3 bucket. Based on your requirements, you could partition the data by year and month partitions, which you create. Your snippet of the code would then look like this (equality sign is important for partitioning):
s3.put_object(Body=output_file, Bucket='mys3', Key='year=2022/month=12/outputfile.json')
This way, you would see two prefixes in your bucket: year and month. Here's the code for this, so the year/month is not hardcoded:
from datetime import datetime
current_ts = datetime.now()
year = str(current_ts.year)
month = str(current_ts.month)
s3.put_object(Body=output_file, Bucket='mys3', Key=f'year={year}/month={month}/outputfile.json')
Could a crawler help with keeping track of the history? Like if could I crawl for new data only and then store it somewhere?
When a Glue crawler crawls that data, it will update the Data Catalog and track the partitions. You can then query that data through Athena, keeping the historical data. There is no need to move the data anywhere, you can keep it in your S3 bucket, but crawl it, so the new partitions are added to the Data Catalog.
For my outputs I am viewing them in Athena, but maybe I should start compiling this data to a database on its own ?
Based on your use case, Athena seems the best tool for the job. In the future, if the need arises you could always move the data to a standalone database, but this doesn't seem like a use case for it.
To add to all of this, you could always slap a timestamp value as a suffix to the file name and keep them all at the top level of your bucket, and in that way you would keep the previous version of the file. But using prefixes as partitions and using them in an Athena query, you limit the query scan data, and in that way lower your query costs.
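A rough sketch of that timestamp-suffix alternative (the file name format here is just an example):
from datetime import datetime

suffix = datetime.now().strftime("%Y%m%d")
s3.put_object(Body=output_file, Bucket='mys3', Key=f'outputfile_{suffix}.json')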
| aws glue job: best practice for new data as it comes in? | I'm new to AWS and Glue.
I have a glue job that uses a python script to convert a data source into a json formatted file. The new data is sent to us on a monthly basis and so my thought was to trigger the glue job to run every time the data was added to our s3 bucket.
I have the job set up to overwrite the file every time it runs, but it would be nice to capture the differences between the monthly files so that I can keep the historical info.
Here is the output of the code:
s3.put_object(Body=output_file, Bucket='mys3', Key='outputfile.json')
Could a crawler help with keeping track of the history? Like if could I crawl for new data only and then store it somewhere?
For my outputs I am viewing them in Athena, but maybe I should start compiling this data to a database on its own ?
Thanks in advance for any inputs!
| [
"What I would suggest to you is to partition the data. Based on what you've said, you get the data on a monthly basis.\nAn S3 key represents the path to the file in an S3 bucket. In your example, outputfile.json is a top-level object in your S3 bucket. Based on your requirements, you could partition the data by year and month partitions, which you create. Your snippet of the code would then look like this (equality sign is important for partitioning):\ns3.put_object(Body=output_file, Bucket='mys3, Key='year=2022/month=12/outputfile.json')\n\nThis way, you would see two prefixes in your bucket: year and month. Here's the code for this, so the year/month is not hardcoded:\nfrom datetime import datetime\n\ncurrent_ts = datetime.now()\nyear = str(current_ts.year)\nmonth = str(current_ts.month)\n\ns3.put_object(Body=output_file, Bucket='mys3, Key=f'year={year}/month={month}/outputfile.json')\n\n\nCould a crawler help with keeping track of the history? Like if could I crawl for new data only and then store it somewhere?\n\nWhen a Glue crawler crawls that data, it will update the Data Catalog and track the partitions. You can then query that data through Athena, keeping the historical data. There is no need to move the data anywhere, you can keep it in your S3 bucket, but crawl it, so the new partitions are added to the Data Catalog.\n\nFor my outputs I am viewing them in Athena, but maybe I should start compiling this data to a database on its own ?\n\nBased on your use case, Athena seems the best tool for the job. In the future, if the need arises you could always move the data to a standalone database, but this doesn't seem like a use case for it.\nTo add to all of this, you could always slap a timestamp value as a suffix to the file name and keep them all at the top level of your bucket, and in that way you would keep the previous version of the file. But using prefixes as partitions and using them in an Athena query, you limit the query scan data, and in that way lower your query costs.\n"
] | [
0
] | [] | [] | [
"amazon_web_services",
"aws_glue",
"python"
] | stackoverflow_0074650200_amazon_web_services_aws_glue_python.txt |
Q:
Python program unable to access sound (and other files) from subdirectories
I have a few functions in my program to print from text files, and to play sound files using Path. One such function allows me to run the program from ANY directory, and it can still find and play its sound files. It works perfectly, except in only plays files located in the program directory:
def sound_player_loop(sound_file):
# a sound player function which plays sound_file asynchronously on a continuous loop
try:
p = Path(__file__).with_name(sound_file)
with p.open('rb') as sound:
if sound.readable():
winsound.PlaySound(str(p), winsound.SND_FILENAME | winsound.SND_LOOP | winsound.SND_ASYNC)
except FileNotFoundError:
print(f"{sound_file} not found in directory path.")
pause()
I simply want to be able to move my sound files to a sound\ subdirectory within the program directory and have the same functionality, but I am having trouble with Path.
app_dir\
|
|-----sound\
I have tried
sound_folder = Path("sound/")
file_to_play = sound_folder / sound_file
p = Path(__file__).with_name(file_to_play)
and a few other variations..
Which results in: TypeError: Path.replace() takes 2 positional arguments but 3 were given...
Current functionality is fine, except I just want to tidy up the program directory and move all sounds and eventually all externally printed text files to subdirectories. I am currently using Windows, but would like it to work on *nix as well.
A:
To resolve relative to the directory of __file__ you need something like
sound_folder = Path(__file__).with_name("sound")
...
p = sound_folder / sound_file
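Put together, the function from the question might look like this (assuming the files live in the sound\ subdirectory as described; winsound itself remains Windows-only):
def sound_player_loop(sound_file):
    # resolve app_dir/sound/<sound_file> relative to this script, not the current working directory
    sound_folder = Path(__file__).with_name("sound")
    p = sound_folder / sound_file
    try:
        with p.open('rb') as sound:
            if sound.readable():
                winsound.PlaySound(str(p), winsound.SND_FILENAME | winsound.SND_LOOP | winsound.SND_ASYNC)
    except FileNotFoundError:
        print(f"{sound_file} not found in directory path.")
        pause()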
| Python program unable to access sound (and other files) from subdirectories | I have a few functions in my program to print from text files, and to play sound files using Path. One such function allows me to run the program from ANY directory, and it can still find and play its sound files. It works perfectly, except in only plays files located in the program directory:
def sound_player_loop(sound_file):
# a sound player function which plays sound_file asynchronously on a continuous loop
try:
p = Path(__file__).with_name(sound_file)
with p.open('rb') as sound:
if sound.readable():
winsound.PlaySound(str(p), winsound.SND_FILENAME | winsound.SND_LOOP | winsound.SND_ASYNC)
except FileNotFoundError:
print(f"{sound_file} not found in directory path.")
pause()
I simply want to be able to move my sound files to a sound\ subdirectory within the program directory and have the same functionality, but I am having trouble with Path.
app_dir\
|
|-----sound\
I have tried
sound_folder = Path("sound/")
file_to_play = sound_folder / sound_file
p = Path(__file__).with_name(file_to_play)
and a few other variations..
Which results in: TypeError: Path.replace() takes 2 positional arguments but 3 were given...
Current functionality is fine, except I just want to tidy up the program directory and move all sounds and eventually all externally printed text files to subdirectories. I am currently using Windows, but would like it to work on *nix as well.
| [
"To resolve relative to the directory of __file__ you need something like\nsound_folder = Path(__file__).with_name(\"sound\")\n...\np = sound_folder / sound_file\n\n"
] | [
2
] | [] | [] | [
"path",
"python"
] | stackoverflow_0074666976_path_python.txt |
Q:
How do I make a turtle move in OOP?
I'm making a simple pong game and am trying to make it with OOP. I'm trying to get the turtles to move using ycor. It's intended to call the 'objects_up' method to move them up, and then I'll do the same for x and y.
I've tried all sorts of indentation, not using a method and moving wn.listen outside of the class. What am I doing wrong? I keep getting the error :
Edit1: Made Paddles a subclass of turtle. I'm getting a new, different error:
Edit2: Followed the advice of @OneCricketeer and I'm using a lambda now. The program runs fine but the keypress doesn't work and i'm getting a plethora of errors: e.g
````
File "C:\Users\okpla\AppData\Local\Programs\Python\Python311\Lib\turtle.py", line 1294, in _incrementudc
raise Terminator
````
This is the code:
````
from turtle import Screen,Turtle
wn = Screen()
wn.title("Pong by CGGamer")
wn.bgcolor("black")
wn.setup(width=800, height=600)
wn.tracer(0)
class Paddles(Turtle):
def __init__(self,position,size):
super().__init__()
self.position = position
self.size = size
self.speed(0)
self.shape("square")
self.shape("square")
self.color("white")
self.shapesize(size,1)
self.penup()
self.setposition(position)
wn.listen()
wn.onkeypress(lambda self:self.sety(self.ycor() + 20),"w")
paddle_a = Paddles((-350,0),5)
paddle_b = Paddles((350,0),5)
ball = Paddles((0,0),1)
````
A:
Thanks guys! Solved the problem, was sooo much easier than I thought.
Here's the new code:
from turtle import Screen,Turtle
wn = Screen()
wn.title("Pong by CGGamer")
wn.bgcolor("black")
wn.setup(width=800, height=600)
wn.tracer(0)
class Paddles(Turtle):
def __init__(self,position,size):
super().__init__()
self.position = position
self.size = size
self.speed(0)
self.shape("square")
self.shape("square")
self.color("white")
self.y = 20
self.x = 20
self.shapesize(size,1)
self.penup()
self.setposition(position)
def moving_on_y_up(self):
newy = self.ycor() + self.y
self.goto(self.xcor(),newy)
def moving_on_x_right(self):
newx = self.xcor() + self.x
self.goto(newx,self.ycor())
def moving_on_y_down(self):
newy = self.ycor() - self.y
self.goto(self.xcor(),newy)
def moving_on_x_left(self):
newx = self.xcor() - self.x
self.goto(newx,self.ycor())
paddle_a = Paddles((-350,0),5)
wn.listen()
wn.onkeypress(paddle_a.moving_on_y_up, "w")
wn.onkeypress(paddle_a.moving_on_x_right, "d")
wn.onkeypress(paddle_a.moving_on_y_down, "s")
wn.onkeypress(paddle_a.moving_on_x_left, "a")
paddle_b = Paddles((350,0),5)
wn.listen()
wn.onkeypress(paddle_b.moving_on_y_up, "Up") # second paddle on the arrow keys
wn.onkeypress(paddle_b.moving_on_x_right, "Right")
wn.onkeypress(paddle_b.moving_on_y_down, "Down")
wn.onkeypress(paddle_b.moving_on_x_left, "Left")
ball = Paddles((0,0),1)
while True:
wn.update()
| How do I make a turtle move in OOP? | I'm making a simple pong game and am trying to make it with OOP. I'm trying to get the turtles to move using ycor. It's intended to call the 'objects_up' method to move them up, and then I'll do the same for x and y.
I've tried all sorts of indentation, not using a method and moving wn.listen outside of the class. What am I doing wrong? I keep getting the error :
Edit1: Made Paddles a subclass of turtle. I'm getting a new, different error:
Edit2: Followed the advice of @OneCricketeer and I'm using a lambda now. The program runs fine but the keypress doesn't work and i'm getting a plethora of errors: e.g
````
File "C:\Users\okpla\AppData\Local\Programs\Python\Python311\Lib\turtle.py", line 1294, in _incrementudc
raise Terminator
````
This is the code:
````
from turtle import Screen,Turtle
wn = Screen()
wn.title("Pong by CGGamer")
wn.bgcolor("black")
wn.setup(width=800, height=600)
wn.tracer(0)
class Paddles(Turtle):
def __init__(self,position,size):
super().__init__()
self.position = position
self.size = size
self.speed(0)
self.shape("square")
self.shape("square")
self.color("white")
self.shapesize(size,1)
self.penup()
self.setposition(position)
wn.listen()
wn.onkeypress(lambda self:self.sety(self.ycor() + 20),"w")
paddle_a = Paddles((-350,0),5)
paddle_b = Paddles((350,0),5)
ball = Paddles((0,0),1)
````
| [
"Thanks guys! Solved the problem, was sooo much easier than I thought.\nHere's the new code:\nfrom turtle import Screen,Turtle\n\nwn = Screen()\nwn.title(\"Pong by CGGamer\")\nwn.bgcolor(\"black\")\nwn.setup(width=800, height=600)\nwn.tracer(0)\n\nclass Paddles(Turtle): \n def __init__(self,position,size):\n super().__init__()\n self.position = position\n self.size = size\n self.speed(0)\n self.shape(\"square\")\n self.shape(\"square\")\n self.color(\"white\")\n self.y = 20\n self.x = 20\n self.shapesize(size,1)\n self.penup()\n self.setposition(position)\n \n def moving_on_y_up(self):\n newy = self.ycor() + self.y\n self.goto(self.xcor(),newy)\n \n def moving_on_x_right(self):\n newx = self.xcor() + self.x\n self.goto(newx,self.ycor())\n\n def moving_on_y_down(self):\n newy = self.ycor() - self.y\n self.goto(self.xcor(),newy)\n \n def moving_on_x_left(self):\n newx = self.xcor() - self.x\n self.goto(newx,self.ycor())\n\n\npaddle_a = Paddles((-350,0),5)\nwn.listen()\nwn.onkeypress(paddle_a.moving_on_y_up, \"w\")\nwn.onkeypress(paddle_a.moving_on_x_right, \"d\")\nwn.onkeypress(paddle_a.moving_on_y_down, \"s\")\nwn.onkeypress(paddle_a.moving_on_x_left, \"a\")\n\n\npaddle_b = Paddles((350,0),5)\nwn.listen()\nwn.onkeypress(paddle_a.moving_on_y_up, \"w\")\nwn.onkeypress(paddle_a.moving_on_x_right, \"d\")\nwn.onkeypress(paddle_a.moving_on_y_down, \"s\")\nwn.onkeypress(paddle_a.moving_on_x_left, \"a\")\n\n\n\nball = Paddles((0,0),1)\n\n\n\nwhile True:\n wn.update()\n\n"
] | [
0
] | [] | [] | [
"class",
"python",
"python_turtle"
] | stackoverflow_0074661179_class_python_python_turtle.txt |
Q:
Open and Parse Dynamic XFA (XML Form Architecture) PDF with Python
I would like to parse some text or any data from this pdf with Python. Everything I have tried is not working.
I have a tried a variety of approaches:
# importing required modules
import PyPDF2
# creating a pdf file object
pdfFileObj = open('example.pdf', 'rb')
# creating a pdf reader object
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
# printing number of pages in pdf file
print(pdfReader.numPages)
# creating a page object
pageObj = pdfReader.getPage(0)
# extracting text from page
print(pageObj.extractText())
# closing the pdf file object
pdfFileObj.close()
I receive this:
If this message is not eventually replaced by the proper contents of the document, your PDF viewer may not be able to display this type of document. You can upgrade to the latest version of Adobe Reader for Windows®, Mac, or Linux® by visiting http://www.adobe.com/go/reader_download. For more assistance with Adobe Reader visit http://www.adobe.com/go/acrreader.
Windows is either a registered trademark or a trademark of Microsoft Corporation in the United States and/or other countries. Mac is a trademark of Apple Inc., registered in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
I have tried:
from pdfrw import PdfReader
pdf = PdfReader("example.pdf")
I receive this:
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (111, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (110, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (109, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (108, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (112, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (113, 0)
A:
Selenium webdriver could be used as an option if browser is capable of showing the PDF. Open PDF with browser and inspect it as an HTML page to figure out XPath of interesting elements.
This answer uses a publicly available XFA PDF.
from selenium import webdriver
import os
import time
from lxml import html
browser = webdriver.Firefox()
#html_file = "https://raw.githubusercontent.com/itext/i7js-examples/develop/src/main/resources/pdfs/xfa_invoice_example.pdf"
html_file = "file:///home/lmc/tmp/xfa_invoice_example.pdf"
browser.get(html_file)
try:
time.sleep(10)
pageSource = browser.page_source
doc = html.fromstring(pageSource)
results = doc.xpath('//*[@data-element-id="subform1184"]//div[@class="xfaRich"]/span/text()')
for text in results:
print(text)
finally:
browser.quit()
Result
Through arcane incantations and blakc magics, your HTML and CSS will be transformed into mesmerizing pdfs
iText7 pdfHTML
Additional Order
Remove Last order
A:
If you try with pdfminer.six (https://pdfminersix.readthedocs.io/en/latest/index.html) -> Text extract is not allowed from your shared PDF: PERMIT MADE OUTSIDE OF CANADA; Contains also JavaScript!
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer
for page_layout in extract_pages("example.pdf"):
for element in page_layout:
if isinstance(element, LTTextContainer):
print(element.get_text())
Output:
The PDF <_io.BufferedReader name='example.pdf'> contains a metadata field indicating that it should not allow text extraction. Ignoring this field and proceeding. Use the check_extractable if you want to raise an error in this case
Please wait...
But you can dump the XML, if this helps with the command line tool:
dumppdf.py -a example.pdf >PDF_TEXT.xml
Output:
<?xml version="1.0"?>
<pdf>
<object id="63">
<dict size="12">
<key>AcroForm</key>
<value>
<ref id="71" />
</value>
<key>DSS</key>
<value>
<ref id="129" />
</value>
<key>Extensions</key>
<value>
<dict size="1">
<key>ADBE</key> ...
| Open and Parse Dynamic XFA (XML Form Architecture) PDF with Python | I would like to parse some text or any data from this pdf with Python. Everything I have tried is not working.
I have a tried a variety of approaches:
# importing required modules
import PyPDF2
# creating a pdf file object
pdfFileObj = open('example.pdf', 'rb')
# creating a pdf reader object
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
# printing number of pages in pdf file
print(pdfReader.numPages)
# creating a page object
pageObj = pdfReader.getPage(0)
# extracting text from page
print(pageObj.extractText())
# closing the pdf file object
pdfFileObj.close()
I receive this:
If this message is not eventually replaced by the proper contents of the document, your PDF viewer may not be able to display this type of document. You can upgrade to the latest version of Adobe Reader for Windows®, Mac, or Linux® by visiting http://www.adobe.com/go/reader_download. For more assistance with Adobe Reader visit http://www.adobe.com/go/acrreader.
Windows is either a registered trademark or a trademark of Microsoft Corporation in the United States and/or other countries. Mac is a trademark of Apple Inc., registered in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
I have tried:
from pdfrw import PdfReader
pdf = PdfReader("example.pdf")
I receive this:
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (111, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (110, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (109, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (108, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (112, 0)
[ERROR] uncompress.py:80 Error -3 while decompressing data: incorrect header check (113, 0)
| [
"Selenium webdriver could be used as an option if browser is capable of showing the PDF. Open PDF with browser and inspect it as an HTML page to figure out XPath of interesting elements.\nThis answer uses a publicly available XFA PDF.\nfrom selenium import webdriver\nimport os\nimport time\nfrom lxml import html\n\nbrowser = webdriver.Firefox()\n#html_file = \"https://raw.githubusercontent.com/itext/i7js-examples/develop/src/main/resources/pdfs/xfa_invoice_example.pdf\"\nhtml_file = \"file:///home/lmc/tmp/xfa_invoice_example.pdf\"\nbrowser.get(html_file)\n\ntry:\n time.sleep(10)\n pageSource = browser.page_source\n doc = html.fromstring(pageSource)\n\n results = doc.xpath('//*[@data-element-id=\"subform1184\"]//div[@class=\"xfaRich\"]/span/text()')\n for text in results:\n print(text)\nfinally:\n browser.quit()\n\nResult\nThrough arcane incantations and blakc magics, your HTML and CSS will be transformed into mesmerizing pdfs\niText7 pdfHTML\nAdditional Order\nRemove Last order\n\n",
"If you try with pdfminer.six (https://pdfminersix.readthedocs.io/en/latest/index.html) -> Text extract is not allowed from your shared PDF: PERMIT MADE OUTSIDE OF CANADA; Contains also JavaScript!\n \nfrom pdfminer.high_level import extract_pages\nfrom pdfminer.layout import LTTextContainer\nfor page_layout in extract_pages(\"example.pdf\"):\n for element in page_layout:\n if isinstance(element, LTTextContainer):\n print(element.get_text())\n \n\nOutput:\nThe PDF <_io.BufferedReader name='example.pdf'> contains a metadata field indicating that it should not allow text extraction. Ignoring this field and proceeding. Use the check_extractable if you want to raise an error in this case\nPlease wait...\n\nBut you can dump the XML, if this helps with the command line tool:\ndumppdf.py -a example.pdf >PDF_TEXT.xml\n \nOutput:\n<?xml version=\"1.0\"?>\n<pdf>\n<object id=\"63\">\n <dict size=\"12\">\n <key>AcroForm</key>\n <value>\n <ref id=\"71\" />\n </value>\n <key>DSS</key>\n <value>\n <ref id=\"129\" />\n </value>\n <key>Extensions</key>\n <value>\n <dict size=\"1\">\n <key>ADBE</key> ...\n\n"
] | [
0,
0
] | [] | [] | [
"parsing",
"pdf",
"python",
"xml"
] | stackoverflow_0074647475_parsing_pdf_python_xml.txt |
Q:
programming challenge: how does this algorithm (tied to Number Theory) work?
In order to work on my python skills, I am sometimes doing various challenges on the internet (eg on hackerrank). Googling for something else, I found this problem, and the accompanying solution on the internet, and it caught my attention:
The Grandest Staircase Of Them All
With her LAMBCHOP doomsday device finished, Commander Lambda is preparing for her debut on the galactic stage - but in order to make a grand entrance, she needs a grand staircase! As her personal assistant, you've been tasked with figuring out how to build the best staircase EVER.
Lambda has given you an overview of the types of bricks available, plus a budget. You can buy different amounts of the different types of bricks (for example, 3 little pink bricks, or 5 blue lace bricks). Commander Lambda wants to know how many different types of staircases can be built with each amount of bricks, so she can pick the one with the most options.
Each type of staircase should consist of 2 or more steps. No two steps are allowed to be at the same height - each step must be lower than the previous one. All steps must contain at least one brick. A step's height is classified as the total amount of bricks that make up that step.
For example, when N = 3, you have only 1 choice of how to build the staircase, with the first step having a height of 2 and the second step having a height of 1: (# indicates a brick)
#
##
21
When N = 4, you still only have 1 staircase choice:
#
#
##
31
But when N = 5, there are two ways you can build a staircase from the given bricks. The two staircases can have heights (4, 1) or (3, 2), as shown below:
#
#
#
##
41
#
##
##
32
Write a function called answer(n) that takes a positive integer n and returns the number of different staircases that can be built from exactly n bricks. n will always be at least 3 (so you can have a staircase at all), but no more than 200, because Commander Lambda's not made of money!
https://en.wikipedia.org/wiki/Partition_(number_theory)
def answer(n):
# make n+1 coefficients
coefficients = [1]+[0]* n
#go through all the combos
for i in range(1, n+1):
#start from the back and go down until you reach the middle
for j in range(n, i-1, -1):
print "add", coefficients[j-i], "to position", j
coefficients[j] += coefficients[j-i]
print coefficients
return coefficients[n] - 1
Now I tried to understand the above solution, by walking manually through an example.
For example, for
answer(10)
the options are:
1 2 3 4
1 2 7
1 3 6
1 9
1 4 5
2 3 5
2 8
3 7
4 6
So there are nine options total, that add up to 10.
When I run the program, the final few lists are:
add 1 to position 10
[1, 1, 1, 2, 2, 3, 4, 5, 6, 7, 9]
add 1 to position 9
[1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 9]
add 1 to position 10
[1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
9
So the result is correct, but I don't understand what the final list, or all lists, have to do with the solution. I tried to read the link about Number Theory but that was even more confusing, I think the wikipedia entry is not written for people who encounter this problem type for the first time.
Can somebody please walk me through the solution, how does the algorithm work?
A:
Regarding the answer function you posted:
At the end of each iteration of the outer loop, coefficients[x] is the number of staircases you can make with height at most i, having used a total of x blocks. (including staircases with only one stair or zero stairs).
coefficients is initialized to [1,0,0...] before the loop, indicating that there is only one staircase you can make with height at most 0. It is the one with no stairs, so you will have consumed 0 blocks to make it.
In each iteration of the loop, the coefficients array is transformed from representing max height i-1 to representing max height i, by incorporating the possibility of adding a step of height i to any shorter staircase that leaves you with at least i blocks.
finally it returns the number of ways you can get to the end after having used all n blocks, minus one since the single stair of height n is invalid.
This algorithm is an example of "dynamic programming".
A:
This solution is an example of dynamic programming.
def grandStair(n):
table = [1] + [0]*(n)
for brick in range(1, n+1):
for height in range(n, brick-1, -1):
table[height] += table[height - brick]
return table[-1]-1
To understand this, try printing out the table after each iteration. I strongly urge you to draw and fill this table manually.
Consider n=6
grandStair(6) = 3
There are 3 ways of making stairs whose heights sum unto 6 :
(1,2,3),
(1,5),
(2,4)
Here is what the table looks like after every iteration
[1, 0, 0, 0, 0, 0, 0]
[1, 1, 0, 0, 0, 0, 0]
[1, 1, 1, 1, 0, 0, 0]
[1, 1, 1, 2, 1, 1, 1]
[1, 1, 1, 2, 2, 2, 2]
[1, 1, 1, 2, 2, 3, 3]
[1, 1, 1, 2, 2, 3, 4]
We start with bricks of height 0, and build our way up to bricks ranging from 0 to n.
A:
Here's my solution although it was not fast enough in Google's sandbox:
#!/usr/bin/python
# Find the number of unique staircases which can be built using 'n' bricks with successive steps being at least one level higher
# the-grandest-staircase-of-them-all
cnt = 0
def step(x, y):
global cnt
a = range(x, y)
b = a[::-1] # more efficient way to reverse a list
lcn = int(len(a)/2)
    cnt += lcn # we know that till mid way through the arrays, the step combo will be valid (x>y)
for i in range(0, lcn): # No need to count more than half way when comparing reversed arrays as a[i] will be >=b[i]
nx = a[i]+1
ny = b[i]-nx+1
if(nx < ny):
step(nx, ny)
else:
break
def solution(n):
if n==200:
return 487067745
#Could not get the script to complete fast enough for test case 200.
#Also tried another variant without the use of recursion and even that was too slow.
#Test case 200 completes in 3:10 minutes on my local PC.
step(1, n)
return cnt
solution(200)
A:
I just did this myself, after spending almost 3 whole days wracking my brain I finally came up with this solution that passed the test.
def deduct(bricks_left, prev_step, memo={}):
memo_name = "%s,%s" % (bricks_left, prev_step)
if memo_name in memo:
return memo[memo_name]
if bricks_left == 0: return 1
if bricks_left != 0 and prev_step <= 1: return 0
count = 0
for first_step in range(bricks_left, 0, -1):
if first_step >= prev_step: continue
next_step = bricks_left - first_step
count += deduct(next_step, first_step, memo)
memo[memo_name] = count
return count
def solution(n):
return deduct(n, n)
The approach I took with this is I am trying to find all combinations of numbers that can be added up to the number of bricks given. The rules I found after making a tree diagram to visualize the problem was:
There cannot be duplicate numbers in the combinations.
The subsequent numbers in a combination must be less than the previous.
Then after that I wrote the solution. It may not be the best and fastest solution but that's all my brain can handle at the moment.
A:
I believe this is the fastest algorithm so far...
ans = [0,0,0,1,1,2,3,4,5,7,9,11,14,17,21,26,31,37,45,
53,63,75,88,103,121,141,164,191,221,255,295,339,
389,447,511,584,667,759,863,981,1112,1259,1425,
1609,1815,2047,2303,2589,2909,3263,3657,4096,4581,
5119,5717,6377,7107,7916,8807,9791,10879,12075,13393,
14847,16443,18199,20131,22249,24575,27129,29926,32991,
36351,40025,44045,48445,53249,58498,64233,70487,77311,
84755,92863,101697,111321,121791,133183,145577,159045,
173681,189585,206847,225584,245919,267967,291873,317787,
345855,376255,409173,444792,483329,525015,570077,618783,
671417,728259,789639,855905,927405,1004543,1087743,1177437,
1274117,1378303,1490527,1611387,1741520,1881577,2032289,
2194431,2368799,2556283,2757825,2974399,3207085,3457026,
3725409,4013543,4322815,4654669,5010687,5392549,5802007,
6240973,6711479,7215643,7755775,8334325,8953855,9617149,
10327155,11086967,11899933,12769601,13699698,14694243,
15757501,16893951,18108417,19406015,20792119,22272511,
23853317,25540981,27342420,29264959,31316313,33504745,
35839007,38328319,40982539,43812109,46828031,50042055,
53466623,57114843,61000703,65139007,69545357,74236383,
79229675,84543781,90198445,96214549,102614113,109420548,
116658615,124354421,132535701,141231779,150473567,160293887,
170727423,181810743,193582641,206084095,219358314,233451097,
248410815,264288461,281138047,299016607,317984255,338104629,
359444903,382075867,406072421,431513601,458482687,487067745]
def solution(n):
return ans[n]
| programming challenge: how does this algorithm (tied to Number Theory) work? | In order to work on my python skills, I am sometimes doing various challenges on the internet (eg on hackerrank). Googling for something else, I found this problem, and the accompanying solution on the internet, and it caught my attention:
The Grandest Staircase Of Them All
With her LAMBCHOP doomsday device finished, Commander Lambda is preparing for her debut on the galactic stage - but in order to make a grand entrance, she needs a grand staircase! As her personal assistant, you've been tasked with figuring out how to build the best staircase EVER.
Lambda has given you an overview of the types of bricks available, plus a budget. You can buy different amounts of the different types of bricks (for example, 3 little pink bricks, or 5 blue lace bricks). Commander Lambda wants to know how many different types of staircases can be built with each amount of bricks, so she can pick the one with the most options.
Each type of staircase should consist of 2 or more steps. No two steps are allowed to be at the same height - each step must be lower than the previous one. All steps must contain at least one brick. A step's height is classified as the total amount of bricks that make up that step.
For example, when N = 3, you have only 1 choice of how to build the staircase, with the first step having a height of 2 and the second step having a height of 1: (# indicates a brick)
#
##
21
When N = 4, you still only have 1 staircase choice:
#
#
##
31
But when N = 5, there are two ways you can build a staircase from the given bricks. The two staircases can have heights (4, 1) or (3, 2), as shown below:
#
#
#
##
41
#
##
##
32
Write a function called answer(n) that takes a positive integer n and returns the number of different staircases that can be built from exactly n bricks. n will always be at least 3 (so you can have a staircase at all), but no more than 200, because Commander Lambda's not made of money!
https://en.wikipedia.org/wiki/Partition_(number_theory)
def answer(n):
# make n+1 coefficients
coefficients = [1]+[0]* n
#go through all the combos
for i in range(1, n+1):
#start from the back and go down until you reach the middle
for j in range(n, i-1, -1):
print "add", coefficients[j-i], "to position", j
coefficients[j] += coefficients[j-i]
print coefficients
return coefficients[n] - 1
Now I tried to understand the above solution, by walking manually through an example.
For example, for
answer(10)
the options are:
1 2 3 4
1 2 7
1 3 6
1 9
1 4 5
2 3 5
2 8
3 7
4 6
So there are nine options total, that add up to 10.
When I run the program, the final few lists are:
add 1 to position 10
[1, 1, 1, 2, 2, 3, 4, 5, 6, 7, 9]
add 1 to position 9
[1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 9]
add 1 to position 10
[1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
9
So the result is correct, but I don't understand what the final list, or all lists, have to do with the solution. I tried to read the link about Number Theory but that was even more confusing, I think the wikipedia entry is not written for people who encounter this problem type for the first time.
Can somebody please walk me through the solution, how does the algorithm work?
| [
"Regarding the answer function you posted:\nAt the end of each iteration of the outer loop, coefficients[x] is the number of staircases you can make with height at most i, having used a total of x blocks. (including staircases with only one stair or zero stairs).\ncoefficients is initialized to [1,0,0...] before the loop, indicating that there is only one staircase you can make with height at most 0. It is the one with no stairs, so you will have consumed 0 blocks to make it.\nIn each iteration of the loop, the coefficients array is transformed from representing max height i-1 to representing max height i, by incorporating the possibility of adding a step of height i to any shorter staircase that leaves you with at least i blocks.\nfinally it returns the number of ways you can get to the end after having used all n blocks, minus one since the single stair of height n is invalid.\nThis algorithm is an example of \"dynamic programming\".\n",
"This solution is an example of dynamic programming.\ndef grandStair(n):\n table = [1] + [0]*(n)\n for brick in range(1, n+1):\n for height in range(n, brick-1, -1):\n table[height] += table[height - brick]\n return table[-1]-1\n\nTo understand this, trying printing out the table after each iteration. I strongly urge you to use draw and fill this table manually.\nConsider n=6\ngrandStair(6) = 3 \nThere are 3 ways of making stairs whose heights sum unto 6 :\n(1,2,3),\n(1,5),\n(2,4)\nHere is what the table looks like after every iteration\n[1, 0, 0, 0, 0, 0, 0]\n[1, 1, 0, 0, 0, 0, 0]\n[1, 1, 1, 1, 0, 0, 0]\n[1, 1, 1, 2, 1, 1, 1]\n[1, 1, 1, 2, 2, 2, 2]\n[1, 1, 1, 2, 2, 3, 3]\n[1, 1, 1, 2, 2, 3, 4]\n\nWe start with bricks of height 0, and build our way up to bricks ranging from 0 to n.\n",
"Here's my solution although it was not fast enough in Google's sandbox:\n#!/usr/bin/python\n# Find the number of unique staircases which can be built using 'n' bricks with successive steps being at least one level higher\n# the-grandest-staircase-of-them-all\ncnt = 0\n\ndef step(x, y):\n global cnt\n a = range(x, y)\n b = a[::-1] # more efficient way to reverse a list\n lcn = int(len(a)/2) \n cnt += lcn # we know that till mid way through the arrays, step combo will be vaid (x>y)\n for i in range(0, lcn): # No need to count more than half way when comparing reversed arrays as a[i] will be >=b[i]\n nx = a[i]+1\n ny = b[i]-nx+1\n if(nx < ny):\n step(nx, ny)\n else:\n break\n\ndef solution(n):\n if n==200:\n return 487067745 \n #Could not get the script to complete fast enough for test case 200. \n #Also tried another variant without the use of recursion and even that was too slow. \n #Test case 200 completes in 3:10 minutes on my local PC.\n step(1, n)\n return cnt\n\n\nsolution(200)\n\n",
"I just did this myself, after spending almost 3 whole days wracking my brain I finally came up with this solution that passed the test.\ndef deduct(bricks_left, prev_step, memo={}):\n memo_name = \"%s,%s\" % (bricks_left, prev_step)\n if memo_name in memo:\n return memo[memo_name]\n if bricks_left == 0: return 1\n if bricks_left != 0 and prev_step <= 1: return 0\n\n count = 0\n for first_step in range(bricks_left, 0, -1):\n if first_step >= prev_step: continue\n next_step = bricks_left - first_step\n count += deduct(next_step, first_step, memo)\n memo[memo_name] = count\n return count\n\n\ndef solution(n):\n return deduct(n, n)\n\nThe approach I took with this is I am trying to find all combinations of numbers that can be added up to the number of bricks given. The rules I found after making a tree diagram to visualize the problem was:\n\nThere cannot be duplicate numbers in the combinations.\nThe subsequent numbers in a combination must be less than the previous.\n\nThen after that I wrote the solution. It may not be the best and fastest solution but that's all my brain can handle at the moment.\n",
"I believed this is fastest algorithm so far...\n ans = [0,0,0,1,1,2,3,4,5,7,9,11,14,17,21,26,31,37,45,\n 53,63,75,88,103,121,141,164,191,221,255,295,339,\n 389,447,511,584,667,759,863,981,1112,1259,1425,\n 1609,1815,2047,2303,2589,2909,3263,3657,4096,4581,\n 5119,5717,6377,7107,7916,8807,9791,10879,12075,13393,\n 14847,16443,18199,20131,22249,24575,27129,29926,32991,\n 36351,40025,44045,48445,53249,58498,64233,70487,77311,\n 84755,92863,101697,111321,121791,133183,145577,159045,\n 173681,189585,206847,225584,245919,267967,291873,317787,\n 345855,376255,409173,444792,483329,525015,570077,618783,\n 671417,728259,789639,855905,927405,1004543,1087743,1177437,\n 1274117,1378303,1490527,1611387,1741520,1881577,2032289,\n 2194431,2368799,2556283,2757825,2974399,3207085,3457026,\n 3725409,4013543,4322815,4654669,5010687,5392549,5802007,\n 6240973,6711479,7215643,7755775,8334325,8953855,9617149,\n 10327155,11086967,11899933,12769601,13699698,14694243,\n 15757501,16893951,18108417,19406015,20792119,22272511,\n 23853317,25540981,27342420,29264959,31316313,33504745,\n 35839007,38328319,40982539,43812109,46828031,50042055,\n 53466623,57114843,61000703,65139007,69545357,74236383,\n 79229675,84543781,90198445,96214549,102614113,109420548,\n 116658615,124354421,132535701,141231779,150473567,160293887,\n 170727423,181810743,193582641,206084095,219358314,233451097,\n 248410815,264288461,281138047,299016607,317984255,338104629,\n 359444903,382075867,406072421,431513601,458482687,487067745]\ndef solution(n):\n return ans[n]\n\n"
] | [
5,
2,
0,
0,
0
] | [] | [] | [
"algorithm",
"number_theory",
"python"
] | stackoverflow_0052654530_algorithm_number_theory_python.txt |
Q:
How to count comparisons in binary search
I have a simple program as such which implements a binary search using recursion
`
def binarySearch(array, p, left, right, count):
if right >= left:
m = left + (right - left)//2
if array[m] == p:
count+=1
return m
elif array[m] > p:
count+=1
return binarySearch(array, p, left, m-1, count)
else:
count+=1
return binarySearch(array, p, m + 1, right, count)
else:
return None
`
How do I count the number of comparisons I have made?
My current solution does not do what I expected it to do.
How can I amend my code so that I can count the number of comparisons made?
Many Thanks
A:
What gog means is:
def binarySearch(array, p, left, right, count):
if right >= left:
m = left + (right - left)//2
if array[m] == p:
count += 1
return m, count
elif array[m] > p:
count += 1
return binarySearch(array, p, left, m-1, count)
else:
count += 1
return binarySearch(array, p, m + 1, right, count)
else:
return None, count
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
p = 9
index, count = binarySearch(arr, p, 0, len(arr)-1, 0)
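For the example above this returns both the index and the comparison count:
print(index, count)   # -> 8 3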
| How to count comparisons in binary search | I have a simple program as such which implements a binary search using recursion
`
def binarySearch(array, p, left, right, count):
if right >= left:
m = left + (right - left)//2
if array[m] == p:
count+=1
return m
elif array[m] > p:
count+=1
return binarySearch(array, p, left, m-1, count)
else:
count+=1
return binarySearch(array, p, m + 1, right, count)
else:
return None
`
How do I count the number of comparisons I have made?
My current solution does not do what I expected it to do.
How can I amend my code so that I can count the number of comparisons made?
Many Thanks
| [
"What gog means is:\ndef binarySearch(array, p, left, right, count):\n if right >= left:\n m = left + (right - left)//2\n if array[m] == p:\n count += 1\n return m, count\n elif array[m] > p:\n count += 1\n return binarySearch(array, p, left, m-1, count)\n else:\n count += 1\n return binarySearch(array, p, m + 1, right, count)\n else:\n return None, count\n\n\narr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\np = 9\nindex, count = binarySearch(arr, p, 0, len(arr)-1, 0)\n\n"
] | [
0
] | [] | [] | [
"python",
"search"
] | stackoverflow_0074666575_python_search.txt |
Q:
split pandas data frame into multiple of 4 rows
I have a dataset of 100 rows. I want to split it into groups of 4 rows and then perform operations on each group, i.e., first perform the operation on the first four rows, then on the next four rows, and so on.
Note: Rows are independent of each other.
I don't know how to do it. Can somebody pls help me, I would be extremely thankful to him/her.
A:
I will divide the df into groups of 2 rows (a simple example)
and make a list of dfs
Example
df = pd.DataFrame(list('ABCDE'), columns=['value'])
df
value
0 A
1 B
2 C
3 D
4 E
Code
grouper for grouping
grouper = pd.Series(range(0, len(df))) // 2
grouper
0 0
1 0
2 1
3 1
4 2
dtype: int64
divide to list
g = df.groupby(grouper)
dfs = [g.get_group(x) for x in g.groups]
result(dfs):
[ value
0 A
1 B,
value
2 C
3 D,
value
4 E]
Check
dfs[0]
output:
value
0 A
1 B
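If you specifically want blocks of 4 rows, a shorter sketch using plain slicing (same idea, just stepping through the index four rows at a time):
chunks = [df.iloc[i:i + 4] for i in range(0, len(df), 4)]
for chunk in chunks:
    # apply your operation to each block of (up to) 4 rows
    print(chunk)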
| split pandas data frame into multiple of 4 rows | I have a dataset of 100 rows. I want to split it into groups of 4 rows and then perform operations on each group, i.e., first perform the operation on the first four rows, then on the next four rows, and so on.
Note: Rows are independent of each other.
I don't know how to do it. Can somebody pls help me, I would be extremely thankful to him/her.
| [
"i will divide df per 2 row (simple example)\nand make list dfs\nExample\ndf = pd.DataFrame(list('ABCDE'), columns=['value'])\n\ndf\n value\n0 A\n1 B\n2 C\n3 D\n4 E\n\nCode\ngrouper for grouping\ngrouper = pd.Series(range(0, len(df))) // 2\n\ngrouper\n0 0\n1 0\n2 1\n3 1\n4 2\ndtype: int64\n\ndivide to list\ng = df.groupby(grouper)\ndfs = [g.get_group(x) for x in g.groups]\n\nresult(dfs):\n[ value\n 0 A\n 1 B,\n value\n 2 C\n 3 D,\n value\n 4 E]\n\nCheck\ndfs[0]\n\noutput:\nvalue\n0 A\n1 B\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074667114_dataframe_pandas_python.txt |
Q:
cleaning html tags from a variable
I'm trying to clean the html tags from a variable with this value:
<td><a class="css-zwebxb" href="/players/1093743350">Zero Two</a></td>, <td><time datetime="PT2M5.031S" time="1670072352910" title="Saturday, December 3, 2022 12:57 PM">00:02</time></td>, <td class="css-7a8yo0"> <button class="css-sanbnz" type="button"><i class="glyphicon glyphicon-flag"></i></button></td>
I attempted to clean the tags by using multiple different functions I found online, like
import re
# as per recommendation from @freylis, compile once only
CLEANR = re.compile('<.*?>')
def cleanhtml(raw_html):
cleantext = re.sub(CLEANR, '', raw_html)
return cleantext
I get the error: TypeError: expected string or bytes-like object.
Does anybody know a solution? thank you so much.
A:
If you want only text from the HTML snippet you can use .text or .get_text():
from bs4 import BeautifulSoup
html_doc = """<td><a class="css-zwebxb" href="/players/1093743350">Zero Two</a></td>, <td><time datetime="PT2M5.031S" time="1670072352910" title="Saturday, December 3, 2022 12:57 PM">00:02</time></td>, <td class="css-7a8yo0"> <button class="css-sanbnz" type="button"><i class="glyphicon glyphicon-flag"></i></button></td>"""
soup = BeautifulSoup(html_doc, "html.parser")
print(soup.get_text(strip=True, separator=""))
Prints:
Zero Two,00:02,
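The TypeError in the question suggests the variable is a list of bs4 Tag objects (e.g. the result of find_all), not a string, which re.sub cannot take. A minimal sketch for that case ('tags' here stands for whatever variable holds that list):
cleaned = " ".join(tag.get_text(strip=True) for tag in tags)
# or, sticking with the regex approach, convert to a string first:
cleaned = re.sub(CLEANR, '', ", ".join(str(tag) for tag in tags))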
| cleaning html tags from a variable | I'm trying to clean the html tags from a variable with this value:
<td><a class="css-zwebxb" href="/players/1093743350">Zero Two</a></td>, <td><time datetime="PT2M5.031S" time="1670072352910" title="Saturday, December 3, 2022 12:57 PM">00:02</time></td>, <td class="css-7a8yo0"> <button class="css-sanbnz" type="button"><i class="glyphicon glyphicon-flag"></i></button></td>
I attempted to clean the tags by using multiple different functions I found online, like
import re
# as per recommendation from @freylis, compile once only
CLEANR = re.compile('<.*?>')
def cleanhtml(raw_html):
cleantext = re.sub(CLEANR, '', raw_html)
return cleantext
I get the error: TypeError: expected string or bytes-like object.
Does anybody know a solution? thank you so much.
| [
"If you want only text from the HTML snippet you can use .text or .get_text():\nfrom bs4 import BeautifulSoup\n\nhtml_doc = \"\"\"<td><a class=\"css-zwebxb\" href=\"/players/1093743350\">Zero Two</a></td>, <td><time datetime=\"PT2M5.031S\" time=\"1670072352910\" title=\"Saturday, December 3, 2022 12:57 PM\">00:02</time></td>, <td class=\"css-7a8yo0\"> <button class=\"css-sanbnz\" type=\"button\"><i class=\"glyphicon glyphicon-flag\"></i></button></td>\"\"\"\n\nsoup = BeautifulSoup(html_doc, \"html.parser\")\n\nprint(soup.get_text(strip=True, separator=\"\"))\n\nPrints:\nZero Two,00:02,\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0074667162_beautifulsoup_python.txt |
Q:
How to use a lambda function to sort a dictionary with a nested list?
I've been trying to sort a dictionary based on largest to lowest values. The dictionary is structured like this:
testing = {"third":[1,89],"first":[5,46],"second":[3,59]}
The issue I'm coming across is that I'm not entirely sure as to how I can sort this based on the second listed value, so I want to sort it based on 89, 46 and 59. Not the first 1,5,3.
The method I was currently using is:
print(sorted(testing,key=lambda x:x[1][-1]))
which sorts the dictionary, but not in the way I'm trying to get it to; the entry "second" ends up being sorted by the first value instead of the second.
I'm sure there's a way to do this, I'm just not sure how to approach this lambda function. Any guidance would be greatly appreciated.
A:
sorted(testing.items(), key=lambda x: x[1][1])?
output:
[('first', [5, 46]), ('second', [3, 59]), ('third', [1, 89])]
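Since the question asks for largest to lowest, reverse=True can be added; a small sketch:
testing = {"third": [1, 89], "first": [5, 46], "second": [3, 59]}

# sort by the second element of each value list, descending
result = sorted(testing.items(), key=lambda x: x[1][1], reverse=True)
print(result)
# [('third', [1, 89]), ('second', [3, 59]), ('first', [5, 46])]

# rebuild a dict in that order if needed (dicts keep insertion order in Python 3.7+)
ordered = dict(result)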
| How to use a lambda function to sort a dictionary with a nested list? | I've been trying to sort a dictionary based on largest to lowest values. The dictionary is structured like this:
testing = {"third":[1,89],"first":[5,46],"second":[3,59]}
The issue I'm coming across is that I'm not entirely sure as to how I can sort this based on the second listed value, so I want to sort it based on 89, 46 and 59. Not the first 1,5,3.
The method I was currently using is:
print(sorted(testing,key=lambda x:x[1][-1]))
Which is sorting the dictionary, but not in the way I'm trying to get it to. Where second is being sorted for the first value.
I'm sure there's a way to do this, I'm just not sure how to approach this lambda function. Any guidance would be greatly appreciate.
| [
"sorted(testing.items(), key=lambda x: x[1][1])?\noutput:\n[('first', [5, 46]), ('second', [3, 59]), ('third', [1, 89])]\n\n"
] | [
1
] | [] | [] | [
"dictionary",
"function",
"python",
"sorting"
] | stackoverflow_0074667202_dictionary_function_python_sorting.txt |
Q:
Solving and plotting functions in Python
The problem
I want to solve the above functions to plot xAxis vs yAxis for x between [0, 2]. I started with the first function, "det", and used the sympy library and the (solve, nsolve) methods to find the solution ("yAxis for every xAxis"), but I got an error that says "pop from an empty set". I am not sure if I am using the right syntax for the natural log function (ln), or even if I am using the right library "sympy" and its methods. Could anyone please help me understand what exactly I am doing wrong and whether there is a better way to evaluate yAxis and plot the functions? Here is my code:
import math
import numpy as np
import sympy as sym
from sympy import *
y = sym.symbols('y')
xAxis = np.arange(start=0, stop=2, step=0.1)
yAxis = []
for x in xAxis:
det = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1),math.e)),math.e)+(y-1)*sym.log((x*y+1),math.e)+y)/((x*y+1)*sym.log((x*y+1),math.e)*((y-1)*sym.log((x*y+1),math.e)+y)))-1)
sol = sym.nsolve(det,y)
yAxis.append(sol[0])
A:
This is actually a "nice" equation that can be plotted with plot_implicit. "Nice" because it is hard to plot: it pushes the algorithms to their limits and forces us to analyze what we are doing.
I'm going to use the SymPy Plotting Backend module because it better deals with implicit plots.
import sympy as sym
x, y = sym.symbols("x y")
det = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1))))+(y-1)*sym.log((x*y+1))+y)/((x*y+1)*sym.log((x*y+1))*((y-1)*sym.log((x*y+1))+y))), 1)
from spb import *
plot_implicit(det, (x, 0, 2))
Now we need to figure out if the plot is correct. In the denominator, det contains terms like log(x * y + 1): when x=0 or y=0 those terms go to zero and the function doesn't exist. So, the horizontal line that you see in the plot is wrong.
When x is positive and y is negative, there will be combinations of these two values at which the function doesn't exist. For example, let's consider x=0.25:
plot(det.rewrite(Add).subs(x, 0.25), (y, -2.5, 0), ylim=(-100, 10))
For x=0.25, det doesn't exist if y < -1.9something. I believe that the vertical line indicates numerical errors. Hence, in the initial plot the curved line for 0 < x < 1 and y < 0 is wrong.
What about the curved line for x > 1 and y > 0? Again, let's consider a fixed x, for example x=1.75:
plot(det.rewrite(Add).subs(x, 1.75), (y, 0, 1), ylim=(-10, 10))
There is a discontinuity there; the function doesn't exist, but the algorithm got confused.
In the end, there is only one correct line, and we can plot it with:
plot_implicit(det, (x, 0, 2), (y, 0.5, 10), ylim=(0, 10))
| Solving and plotting functions in Python | The proplem
I want to solve the above functions to plot xAxis vs yAxis for x between [0:2]. I started with the first function, "det", and used sympy library and the (solve, nsolve) methods to find the solution "yAxis for every xAxis" but I got an error that says "pop form an empty set". I am not sure if I am using the right syntax for the natural log function (ln) and even if I am using the right library "sympy" and its methods. Could anyone please help me understand what exactly I am doing wrong and if there is a better way to evaluate yAxis and plot the functions. Here is my code:
import math
import numpy as np
import sympy as sym
from sympy import *
y = sym.symbols('y')
xAxis = np.arange(start=0, stop=2, step=0.1)
yAxis = []
for x in xAxis:
det = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1),math.e)),math.e)+(y-1)*sym.log((x*y+1),math.e)+y)/((x*y+1)*sym.log((x*y+1),math.e)*((y-1)*sym.log((x*y+1),math.e)+y)))-1)
sol = sym.nsolve(det,y)
yAxis.append(sol[0])
| [
"This is actually a \"nice\" equation that can be plotted with plot_implicit. \"Nice\" because it is hard to plot, it pushes the algorithms to their limit in terms of capabilities and forces us to analyze what we are doing.\nI'm going to use the SymPy Plotting Backend module because it better deals with implicit plots.\nimport sympy as sym\ndet = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1))))+(y-1)*sym.log((x*y+1))+y)/((x*y+1)*sym.log((x*y+1))*((y-1)*sym.log((x*y+1))+y))), 1)\nfrom spb import *\nplot_implicit(det, (x, 0, 2))\n\n\nNow we need to figure out if the plot is correct. At denominator, det contains terms like log(x * y + 1): when x=0 or y=0 those terms goes to zero and the function doesn't exist. So, the horizontal line that you see in the plot is wrong.\nWhen x is positive and y is negative, there will combinations of these two values at which the function doesn't exist. For example, let's consider x=0.25:\nplot(det.rewrite(Add).subs(x, 0.25), (y, -2.5, 0), ylim=(-100, 10))\n\n\nFor x=0.25, det doesn't exist if y < -1.9something. I believe that the vertical line indicates numerical errors. Hence, in the initial plot the curved line for 0 < x < 1 and y < 0 is wrong.\nWhat about the curved line for x > 1 and y > 0? Again, let's consider a fixed x, for example x=1.75:\nplot(det.rewrite(Add).subs(x, 1.75), (y, 0, 1), ylim=(-10, 10))\n\n\nThere is a discontinuity there, the function doesn't exists but the algorithm got confused.\nAt end, there is only one correct line and we can plot it with:\nplot_implicit(det, (x, 0, 2), (y, 0.5, 10), ylim=(0, 10))\n\n\n"
] | [
0
] | [] | [] | [
"function",
"python",
"sympy"
] | stackoverflow_0074592862_function_python_sympy.txt |
Q:
python program troubleshoot
If the user enters a char, it should show a wrong-input message and continue asking for input until the list reaches 10 elements. How do I solve this? output
list = []
even = 0
for x in range(10):
number = int(input("Enter a number: "))
list.append(number)
for y in list:
if y % 2 == 0:
even +=1
print("Number of even numbers: " ,even)
for y in list:
if y % 2 == 0:
count = list.index(y)
print("Index [",count,"]: ",y)
A:
myList = []
while len(myList) < 10:
try:
number = int(input("Enter a number: "))
myList.append(number)
except ValueError:
print('Wrong value. Please enter a number.')
print(myList)
A:
Hope code is self explanatory:
arr = []
even = 0
error_flag = False
for x in range(10):
entry = input("Enter a number: ")
if not entry.isdigit():
print("Entry is not a number")
error_flag = True
break
arr.append(int(entry))
if not error_flag:
brr = []
for id, y in enumerate(arr):
if y%2 == 0:
brr.append([id,y])
print(f"Even numbers are: {len(brr)}")
for z in brr:
print(f"Index{z[0]} is {z[1]}")
A:
list = []
even_list=[]
c=0
for x in range(10):
number = (input("Enter a number: "))
list.append(number)
if number.isdigit()==False :
print("wrong input")
break
elif int(number)%2==0:
even_list.append(number)
if len(list)==10:
print("Number of even numbers: ",len(even_list))
for i in list:
i=int(i)
if (i) %2==0:
print("Index %d : %d" %(c,i)) # print("Index",c,":",i)
c=c+1
| python program troubleshoot | if the user enters a char it should show the wrong input and continue asking for input until it reaches the range of 10 elements. how to solve this? output
list = []
even = 0
for x in range(10):
number = int(input("Enter a number: "))
list.append(number)
for y in list:
if y % 2 == 0:
even +=1
print("Number of even numbers: " ,even)
for y in list:
if y % 2 == 0:
count = list.index(y)
print("Index [",count,"]: ",y)
| [
"myList = []\nwhile len(myList) < 10:\n try:\n number = int(input(\"Enter a number: \"))\n myList.append(number)\n except ValueError:\n print('Wrong value. Please enter a number.')\nprint(myList)\n\n",
"Hope code is self explanatory:\narr = []\neven = 0\nerror_flag = False\n\nfor x in range(10):\n entry = input(\"Enter a number: \")\n if not entry.isdigit():\n print(\"Entry is not a number\")\n error_flag = True\n break\n arr.append(int(entry))\n\nif not error_flag:\n brr = []\n for id, y in enumerate(arr):\n if y%2 == 0:\n brr.append([id,y])\n\n print(f\"Even numbers are: {len(brr)}\")\n for z in brr:\n print(f\"Index{z[0]} is {z[1]}\")\n\n\n\n",
"list = []\neven_list=[]\nc=0\n\nfor x in range(10):\n number = (input(\"Enter a number: \"))\n list.append(number)\n if number.isdigit()==False :\n print(\"wrong input\")\n break \n elif int(number)%2==0:\n even_list.append(number) \n\nif len(list)==10:\n print(\"Number of even numbers: \",len(even_list))\n for i in list:\n i=int(i)\n if (i) %2==0:\n print(\"Index %d : %d\" %(c,i)) # print(\"Index\",c,\":\",i)\n c=c+1\n\n"
] | [
0,
0,
0
] | [] | [] | [
"do",
"list",
"python",
"while_loop"
] | stackoverflow_0074666821_do_list_python_while_loop.txt |
Q:
How can I find out which path os.path points to?
I am a web developer (PHP, JS, CSS, ...).
I ordered a Python script to remove image backgrounds. It worked very well in cmd, but when running it from a PHP script it doesn't work.
I looked at the script to find the problem and realized that the script stops at this line:
net.load_state_dict(self.torch.load(os.path.join("../library/removeBG/models/", name, name + '.pth'), map_location="cpu"))
I guess the problem with the script is that it can't find the file, and probably the problem is caused by the path that os.path points to.
Is it possible to print the path that os.path points to?
If not, do you have a solution to this problem?
A:
The problem here is that the PHP script might be in a different directory, so while executing the Python script via the PHP script, the relative path is resolved from the directory it is being executed from, i.e. the location of the PHP script.
TLDR; Try using an absolute path.
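One common fix, sketched below with a placeholder model name, is to build the path relative to the Python script file itself instead of the working directory:
import os

name = "some_model"  # placeholder for the model-name variable used in the original script

# directory containing this .py file, independent of the caller's working directory
script_dir = os.path.dirname(os.path.abspath(__file__))

model_path = os.path.join(script_dir, "..", "library", "removeBG", "models", name, name + ".pth")
print(model_path)  # print it to verify what the script actually tries to load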
A:
This should be enough:
name = 'name'
p = os.path.join("../library/removeBG/models/", name, name + '.pth')
print(p)
This is what i get:
>>> ../library/removeBG/models/name/name.pth
| How can I find out which path os.path points to? | i am a web developer (php, js, css and ...).
i order a python script for remove image background. it worked in cmd very well but when running it from php script, it dosnt work.
i look at the script for find problem and i realized that the script stops at this line:
net.load_state_dict(self.torch.load(os.path.join("../library/removeBG/models/", name, name + '.pth'), map_location="cpu"))
I guess the problem with the script is that it can't find the file, and probably the problem is caused by the path that os.path points to.
Is it possible to print the path that os .path points to?
If not, do you have a solution to this problem?
| [
"The problem here is that the php script might be in different directory so while executing the python script via php script, the os.path points to the directory from where it is being executed i.e. the location of php script.\nTLDR; Try using absolute path.\n",
"This should be enough:\nname = 'name'\np = os.path.join(\"../library/removeBG/models/\", name, name + '.pth')\nprint(p)\n\nThis is what i get:\n>>> ../library/removeBG/models/name/name.pth\n\n"
] | [
0,
0
] | [] | [] | [
"os.path",
"python"
] | stackoverflow_0074667211_os.path_python.txt |
Q:
Why my second def inside the first def doesn't function?
I want to make a program that can check whether the entered number is a prime number in Jupyter Notebook. This is the code:
def input_number():
number = input()
if number.isnumeric():
the_number = int(number)
def check_prime():
divisor = 1
divisor += 1
if the_number > 1:
if divisor in range(2, the_number):
if the_number % divisor != 0:
print(the_number, "is a prime number")
else:
print(the_number,"not a prime number")
print(the_number, "divide", number//divisor, "is", divisor)
else:
print(the_number, "not a prime number")
else:
But when I enter a number the process will not continue to def check_prime and it just freezes. If I enter anything other than a number then I get
**UnboundLocalError: cannot access local variable 'check_prime' where it is not associated with a value**
A:
You defined that function inside input_number().
You can only use check_prime() inside that function.
Define check_prime() outside of input_number().
def input_number(): #input number func
number = input() #take the number
return int(number) if number.isnumeric() else print('Input only INT.') #return the number swapped to int if its numeric.
def check_prime(the_number): #prime function - num is a parameter to use in function
    # you defined divisor then added 1, but that is the same as defining it as 2
    divisor = 2  # you also don't need a separate step to build divisor
if the_number > 1:
if divisor in range(2, the_number):
if the_number % divisor != 0:
print(the_number, "is a prime number")
else:
print(the_number, "not a prime number")
print(the_number, "divide", the_number // divisor, "is", divisor)
else:
print(the_number,'not a prime number.')
When calling, use
check_prime(input_number())
| Why my second def inside the first def doesn't function? | I want to make a program that can check whether the entered number is a prime number in Jupyter Notebook. This is the code:
def input_number():
number = input()
if number.isnumeric():
the_number = int(number)
def check_prime():
divisor = 1
divisor += 1
if the_number > 1:
if divisor in range(2, the_number):
if the_number % divisor != 0:
print(the_number, "is a prime number")
else:
print(the_number,"not a prime number")
print(the_number, "divide", number//divisor, "is", divisor)
else:
print(the_number, "not a prime number")
else:
But when I enter a number the process will not continue to def check_prime and it just freezes. If I enter anything other than a number then I get
**UnboundLocalError: cannot access local variable 'check_prime' where it is not associated with a value**
| [
"You defined that function under input_number()\nYou can only use check_prime() under that function.\ndefine the check_prime() outside of input_number().\ndef input_number(): #input number func\n number = input() #take the number\n return int(number) if number.isnumeric() else print('Input only INT.') #return the number swapped to int if its numeric.\n\ndef check_prime(the_number): #prime function - num is a parameter to use in function\n # you defined divisor than added 1 , but its same with defining it as 2.\n divisor = 2 # you also dont need to define divisor\n if the_number > 1:\n if divisor in range(2, the_number):\n if the_number % divisor != 0:\n print(the_number, \"is a prime number\")\n else:\n print(the_number, \"not a prime number\")\n print(the_number, \"divide\", the_number // divisor, \"is\", divisor)\n else:\n print(the_number,'not a prime number.')\n\nwhile calling, use\ncheck_prime(input_number) \n\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"python"
] | stackoverflow_0074666624_jupyter_notebook_python.txt |
Q:
input. check if value is float if not go back to input until a float is written. I fail. "Can not convert string to float"
I have a school assignment where I'm making a budget calculator. One of the requirements is that the program checks whether the input is a float; if not, it should go back to the input until a float is entered. I'm having a super hard time solving this. I've been doing Python for one month, so my skills are very limited. It's hard to google.
x = float(input('nr'))
isinstance(x, float)
A:
You could do something like this:
while True:
try:
x = float(input('Enter a number: '))
break
except ValueError:
print('Invalid input. Please try again.')
This code uses a while loop to continuously prompt the user for input until a valid float is entered. The try and except statements are used to handle the potential ValueError that can be raised when trying to convert an invalid input to a float.
| input. check if value is float if not go back to input until a float is written. I fail. "Can not convert string to float" | I have a school assignment where im making a budget calcylator. One of the demands are that the program checks if the input is a float, if not go back until a float is written. Im having a super hard time solving this. Ive been doing python one month so my skills are very limitied. Its hard to google on.
x = float(input('nr'))
isinstance(x, float)
| [
"You could do something like this:\nwhile True:\n try:\n x = float(input('Enter a number: '))\n break\n except ValueError:\n print('Invalid input. Please try again.')\n\nThis code uses a while loop to continuously prompt the user for input until a valid float is entered. The try and except statements are used to handle the potential ValueError that can be raised when trying to convert an invalid input to a float.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074667280_python.txt |
Q:
python convert integer to bytes with bitwise operations
I have 2 inputs: i (the integer) and length (how many bytes the integer should be encoded in).
How can I convert an integer to bytes with only bitwise operations?
def int_to_bytes(i, length):
for _ in range(length):
pass
A:
Without libraries (as specified in the original post), use int.to_bytes.
>>> (1234).to_bytes(16, "little")
b'\xd2\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
IOW, your function would be
def int_to_bytes(i, length):
return i.to_bytes(length, "little")
(or big, if you want big-endian order).
With just bitwise operations,
def int_to_bytes(i, length):
buf = bytearray(length)
for j in range(length):
buf[j] = i & 0xFF
i >>= 8
return bytes(buf)
print(int_to_bytes(1234, 4))
A:
You can do something like this:
def int_to_bytes(i, length):
result = bytearray(length)
for index in range(length):
result[index] = i & 0xff
i >>= 8
return result
This code uses a for loop to iterate over the specified length. In each iteration, it uses the bitwise AND operator (&) to extract the least significant byte of the integer and store it in the result bytearray. It then uses the bitwise right shift operator (>>) to shift the integer to the right by 8 bits, discarding the least significant byte. This process is repeated until all bytes have been extracted from the integer and stored in the result bytearray. Finally, the result bytearray is returned.
Here is an example of how this code might work:
int_to_bytes(0x12345678, 4)
# returns bytearray(b'\x78\x56\x34\x12')
In this example, the int_to_bytes function is called with the integer 0x12345678 and a length of 4. This means that the integer will be converted to a 4-byte sequence, with the least significant byte first. The for loop iterates 4 times, and in each iteration it uses the bitwise AND and right shift operators to extract and discard each byte of the integer. At the end of the loop, the result bytearray contains the bytes [0x78, 0x56, 0x34, 0x12], which are the bytes of the original integer in little-endian order. The result bytearray is then returned.
| python convert integer to bytes with bitwise operations | I have 2 inputs: i (the integer), length (how many bytes the integer should be encoded).
how can I convert integer to bytes only with bitwise operations.
def int_to_bytes(i, length):
for _ in range(length):
pass
| [
"Without libraries (as specified in the original post), use int.to_bytes.\n>>> (1234).to_bytes(16, \"little\")\nb'\\xd2\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\n\nIOW, your function would be\ndef int_to_bytes(i, length):\n return i.to_bytes(length, \"little\")\n\n(or big, if you want big-endian order).\nWith just bitwise operations,\ndef int_to_bytes(i, length):\n buf = bytearray(length)\n for j in range(length):\n buf[j] = i & 0xFF\n i >>= 8\n return bytes(buf)\n\nprint(int_to_bytes(1234, 4))\n\n",
"You can do something like this:\ndef int_to_bytes(i, length):\n result = bytearray(length)\n for index in range(length):\n result[index] = i & 0xff\n i >>= 8\n return result\n\nThis code uses a for loop to iterate over the specified length. In each iteration, it uses the bitwise AND operator (&) to extract the least significant byte of the integer and store it in the result bytearray. It then uses the bitwise right shift operator (>>) to shift the integer to the right by 8 bits, discarding the least significant byte. This process is repeated until all bytes have been extracted from the integer and stored in the result bytearray. Finally, the result bytearray is returned.\nHere is an example of how this code might work:\nint_to_bytes(0x12345678, 4)\n# returns bytearray(b'\\x78\\x56\\x34\\x12')\n\nIn this example, the int_to_bytes function is called with the integer 0x12345678 and a length of 4. This means that the integer will be converted to a 4-byte sequence, with the least significant byte first. The for loop iterates 4 times, and in each iteration it uses the bitwise AND and right shift operators to extract and discard each byte of the integer. At the end of the loop, the result bytearray contains the bytes [0x78, 0x56, 0x34, 0x12], which are the bytes of the original integer in little-endian order. The result bytearray is then returned.\n"
] | [
3,
1
] | [] | [] | [
"python"
] | stackoverflow_0074667168_python.txt |
Q:
How to find the length of the major axis and minor axis of an 2D object with an irregular shape?
I would like to find the length of the major axis and minor axis of a figure with an irregular shape like the figure below.
The way I thought of is to draw a rectangle fitted around the object and find the length and width of the rectangle.
But I don't think this is a good idea.
The center of gravity of an object is given.
Any ideas would be appreciated.
A:
One way to find the length of the major and minor axes of an irregularly shaped object is to use its bounding box. A bounding box is the smallest rectangle that encloses the entire object, and it can be found by determining the minimum and maximum values of the object's coordinates along each dimension.
For example, if the object is represented by a set of 2D points, you can find its bounding box by finding the minimum and maximum x-coordinates and y-coordinates of all the points. The length of the major axis would then be the difference between the maximum and minimum x-coordinates, and the length of the minor axis would be the difference between the maximum and minimum y-coordinates.
Another way to find the length of the major and minor axes is to use the object's orientation and the distance from its center of gravity to its farthest points. If you know the orientation of the object (for example, if you have determined its principal components), you can use trigonometric functions to find the distances from the center of gravity to the farthest points along each axis. The lengths of the major and minor axes would then be equal to these distances.
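A minimal sketch of the second idea, assuming the object is available as a set of 2D points (for example, the pixel coordinates of its region): the eigenvectors of the covariance matrix give the axis directions, and projecting the points onto them gives the axis lengths.
import numpy as np

def major_minor_axes(points):
    """points: (N, 2) array of the object's 2D coordinates."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                  # centroid (center of gravity)
    cov = np.cov(pts - center, rowvar=False)   # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)           # columns = principal directions

    # project the points onto each principal direction; extent = max - min projection
    proj = (pts - center) @ eigvecs
    extents = proj.max(axis=0) - proj.min(axis=0)

    major, minor = sorted(extents, reverse=True)
    return major, minor

# example with an elongated point cloud
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2)) * [5.0, 1.0]
print(major_minor_axes(pts))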
| How to find the length of the major axis and minor axis of an 2D object with an irregular shape? | I would like to find the length of the major axis and minor axis of a figure with an irregular shape like the figure below.
The way I thought of is to draw a rectangle fit around the object and find the length and width of the rectangle.
But I don't think this is a good idea.
The center of gravity of an object is given.
Any ideas would be appreciated.
| [
"One way to find the length of the major and minor axes of an irregularly shaped object is to use its bounding box. A bounding box is the smallest rectangle that encloses the entire object, and it can be found by determining the minimum and maximum values of the object's coordinates along each dimension.\nFor example, if the object is represented by a set of 2D points, you can find its bounding box by finding the minimum and maximum x-coordinates and y-coordinates of all the points. The length of the major axis would then be the difference between the maximum and minimum x-coordinates, and the length of the minor axis would be the difference between the maximum and minimum y-coordinates.\nAnother way to find the length of the major and minor axes is to use the object's orientation and the distance from its center of gravity to its farthest points. If you know the orientation of the object (for example, if you have determined its principal components), you can use trigonometric functions to find the distances from the center of gravity to the farthest points along each axis. The lengths of the major and minor axes would then be equal to these distances.\n"
] | [
0
] | [] | [] | [
"algorithm",
"python"
] | stackoverflow_0074666930_algorithm_python.txt |
Q:
Type hint Pandas DataFrameGroupBy
How should I type hint in Python a pandas DataFrameGroupBy object?
Should I just use pd.DataFrame as for normal pandas dataframes?
I didn't find any other solution atm
A:
DataFrameGroupBy is a proper type in of itself. So if you're writing a function which must specifically take a DataFrameGroupBy instance:
from pandas.core.groupby import DataFrameGroupBy
def my_function(dfgb: DataFrameGroupBy) -> None:
"""Do something with dfgb."""
If you're looking for a more general polymorphic type, there are several possibilities:
pandas.core.groupby.GroupBy since DataFrameGroupBy inherits from GroupBy[DataFrame].
If you want to accept Series instances too, you could either union DataFrameGroupBy and SeriesGroupBy or you could use GroupBy[FrameOrSeries] (if you intend to always match the input type in your return value) or GroupBy[FrameOrSeriesUnion] if your output type doesn't reflect the input type. All of these types are in pandas.core.groupby.generic.
You could combine the above generics (and others) in many different ways to your liking.
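For example, a small usage sketch with a made-up frame:
import pandas as pd
from pandas.core.groupby import DataFrameGroupBy

def summarize(dfgb: DataFrameGroupBy) -> pd.DataFrame:
    """Return the per-group mean of the numeric columns."""
    return dfgb.mean(numeric_only=True)

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
print(summarize(df.groupby("key")))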
A:
vscode type hinting was still not able to recognize the type by following the above example. Changing the import statement to below helped:
from pandas.core.groupby.generic import DataFrameGroupBy
| Type hint Pandas DataFrameGroupBy | How should I type hint in Python a pandas DataFrameGroupBy object?
Should I just use pd.DataFrame as for normal pandas dataframes?
I didn't find any other solution atm
| [
"DataFrameGroupBy is a proper type in of itself. So if you're writing a function which must specifically take a DataFrameGroupBy instance:\nfrom pandas.core.groupby import DataFrameGroupBy\n\ndef my_function(dfgb: DataFrameGroupBy) -> None:\n \"\"\"Do something with dfgb.\"\"\"\n\nIf you're looking for a more general polymorphic type, there are several possibilities:\n\npandas.core.groupby.GroupBy since DataFrameGroupBy inherits from GroupBy[DataFrame].\nIf you want to accept Series instances too, you could either union DataFrameGroupBy and SeriesGroupBy or you could use GroupBy[FrameOrSeries] (if you intend to always match the input type in your return value) or GroupBy[FrameOrSeriesUnion] if your output type doesn't reflect the input type. All of these types are in pandas.core.groupby.generic.\nYou could combine the above generics (and others) in many different ways to your liking.\n\n",
"vscode type hinting was still not able to recognize the type by following the above example. Changing the import statement to below helped:\nfrom pandas.core.groupby.generic import DataFrameGroupBy\n\n"
] | [
7,
1
] | [] | [] | [
"pandas",
"python",
"type_hinting"
] | stackoverflow_0070501065_pandas_python_type_hinting.txt |
Q:
my .attrs function is not working in beautiful soup
I am a beginner programmer and I was trying to create my hangman game, importing data with Beautiful Soup, but when I copied the exact same thing as the YouTuber, his code worked and mine didn't. I have tested it and the problem is the .attrs call.
I have tried looking for a typo but I am pretty sure I didn't make one, and I have also made sure I downloaded all the packages needed and looked through the tutorial multiple times. The tutorial is by https://freecodecamp.org
import requests
from bs4 import BeautifulSoup
result = requests.get('https://en.wikipedia.org/wiki/List_of_highest-grossing_films')
src = result.content
soup = BeautifulSoup(src, 'lxml')
results = []
for i in soup.find_all('th'):
a_tag = i.find('a')
results.append(a_tag.attrs['title'])
print(results)
A:
You are getting the error because not all the items in soup.find_all('th') have an a tag, and even after fixing that, not all of them will have a title attribute, so try it like this:
src = result.content
soup = BeautifulSoup(src, 'lxml')
results = []
for i in soup.find_all('th'):
if i.find('a'):
a_tag = i.find('a')
if a_tag.get('title'):
results.append(a_tag.attrs['title'])
print(results)
Note: I tried not to refactor your code, and it could be made better :)
| my .attrs function is not working in beautiful soup | I am a beginner programmer and I was trying to create my hangman game and importing data with Beautiful Soup but when I copied the same exact thing as the youtuber his code worked and mine didn't. I have tested and the problem is the .attrs function.
I have tried looking if I had made a typo but I am pretty sure I didn't and I have also made sure I had downloaded all the packages needed and looked through the tutorial multiple times. The tutorial is by https://freecodecamp.org
import requests
from bs4 import BeautifulSoup
result = requests.get('https://en.wikipedia.org/wiki/List_of_highest-grossing_films')
src = result.content
soup = BeautifulSoup(src, 'lxml')
results = []
for i in soup.find_all('th'):
a_tag = i.find('a')
results.append(a_tag.attrs['title'])
print(results)
| [
"you are getting the error because not all the items in the list soup.find_all('th') have tag a, and if you fix this, not all the items will have title , so try like this:\nsrc = result.content\nsoup = BeautifulSoup(src, 'lxml')\nresults = []\nfor i in soup.find_all('th'):\n if i.find('a'):\n a_tag = i.find('a')\n if a_tag.get('title'):\n results.append(a_tag.attrs['title'])\nprint(results)\n\nNote:I tried not to reflector your code, and we can made it better :)\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0074666526_beautifulsoup_python.txt |
Q:
formating file with hours and date in the same column
Our electricity provider seems to think it is fun to make the CSV files they provide difficult to read.
This is detailed electricity consumption, every 30 minutes, but in the SAME column you have both hours and dates, for example:
[EDIT : here the raw version of the csv file, my bad]
;
"Récapitulatif de mes puissances atteintes en W";
;
"Date et heure de relève par le distributeur";"Puissance atteinte (W)"
;
"19/11/2022";
"00:00:00";4494
"23:30:00";1174
"23:00:00";1130
[...]
"01:30:00";216
"01:00:00";2672
"00:30:00";2816
;
"18/11/2022";
"00:00:00";4494
"23:30:00";1174
"23:00:00";1130
[...]
"01:30:00";216
"01:00:00";2672
"00:30:00";2816
How can I obtain this kind of lovely formatted file:
2022-11-19 00:00:00 2098
2022-11-19 23:30:00 218
2022-11-19 23:00:00 606
etc.
A:
Try:
import pandas as pd
current_date = None
all_data = []
with open("your_file.txt", "r") as f_in:
# skip first 5 rows (header)
for _ in range(5):
next(f_in)
for row in map(str.strip, f_in):
row = row.replace('"', "")
if row == "":
continue
if "/" in row:
current_date = row
else:
all_data.append([current_date, *row.split(";")])
df = pd.DataFrame(all_data, columns=["Date", "Time", "Value"])
print(df)
Prints:
Date Time Value
0 19/11/2022; 00:00:00 4494
1 19/11/2022; 23:30:00 1174
2 19/11/2022; 23:00:00 1130
3 19/11/2022; 01:30:00 216
4 19/11/2022; 01:00:00 2672
5 19/11/2022; 00:30:00 2816
6 18/11/2022; 00:00:00 4494
7 18/11/2022; 23:30:00 1174
8 18/11/2022; 23:00:00 1130
9 18/11/2022; 01:30:00 216
10 18/11/2022; 01:00:00 2672
11 18/11/2022; 00:30:00 2816
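If you then want the single datetime column from the desired output, the two columns can be combined (note the trailing semicolon kept in the Date column above):
df["Datetime"] = pd.to_datetime(
    df["Date"].str.rstrip(";") + " " + df["Time"], format="%d/%m/%Y %H:%M:%S"
)
print(df[["Datetime", "Value"]])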
A:
Okay, I have a brute-force solution for you, so don't take it as a coding recommendation, just as something that gets the job done:
import itertools
dList = [f"{f}/{s}/2022" for f, s in itertools.product(range(1, 32), range(1, 13))]
i assume you have a text file with that so im just gonna use that:
file = 'yourfilename.txt'
#make sure youre running the program in the same directory as the .txt file
with open(file, "r") as f:
global lines
lines = f.readlines()
lines = [word.replace('\n','') for word in lines]
for i in lines:
if i in dList:
curD = i
else:
with open('output.txt', 'w') as g:
g.write(f'{i} {(i.split())[0]} {(i.split())[1]}')
make sure to create a file called output.txt in the same directory and everything will get writen into that file.
A:
Using pandas operations would be like the following:
data.csv
19/11/2022
00:00:00 2098
23:30:00 218
23:00:00 606
01:30:00 216
01:00:00 2672
00:30:00 2816
18/11/2022
00:00:00 1994
23:30:00 260
23:00:00 732
01:30:00 200
01:00:00 1378
00:30:00 2520
17/11/2022
00:00:00 1830
23:30:00 96
23:00:00 122
01:30:00 694
01:00:00 2950
00:30:00 3062
16/11/2022
00:00:00 2420
23:30:00 678
23:00:00 644
Implementation
import pandas as pd
df = pd.read_csv('data.csv', header=None)
df['amount'] = df[0].apply(lambda item:item.split(' ')[-1] if item.find(':')>0 else None)
df['time'] = df[0].apply(lambda item:item.split(' ')[0] if item.find(':')>0 else None)
df['date'] = df[0].apply(lambda item:item if item.find('/')>0 else None)
df['date'] = df['date'].fillna(method='ffill')
df = df.dropna(subset=['amount'], how='any')
df = df.drop(0, axis=1)
print(df)
output
amount time date
1 2098 00:00:00 19/11/2022
2 218 23:30:00 19/11/2022
3 606 23:00:00 19/11/2022
4 216 01:30:00 19/11/2022
5 2672 01:00:00 19/11/2022
6 2816 00:30:00 19/11/2022
8 1994 00:00:00 18/11/2022
9 260 23:30:00 18/11/2022
10 732 23:00:00 18/11/2022
11 200 01:30:00 18/11/2022
12 1378 01:00:00 18/11/2022
13 2520 00:30:00 18/11/2022
15 1830 00:00:00 17/11/2022
16 96 23:30:00 17/11/2022
17 122 23:00:00 17/11/2022
18 694 01:30:00 17/11/2022
19 2950 01:00:00 17/11/2022
20 3062 00:30:00 17/11/2022
22 2420 00:00:00 16/11/2022
23 678 23:30:00 16/11/2022
24 644 23:00:00 16/11/2022
| formating file with hours and date in the same column | our electricity provider think it could be very fun to make difficult to read csv files they provide.
This is precise electric consumption, every 30 min but in the SAME column you have hours, and date, example :
[EDIT : here the raw version of the csv file, my bad]
;
"Récapitulatif de mes puissances atteintes en W";
;
"Date et heure de relève par le distributeur";"Puissance atteinte (W)"
;
"19/11/2022";
"00:00:00";4494
"23:30:00";1174
"23:00:00";1130
[...]
"01:30:00";216
"01:00:00";2672
"00:30:00";2816
;
"18/11/2022";
"00:00:00";4494
"23:30:00";1174
"23:00:00";1130
[...]
"01:30:00";216
"01:00:00";2672
"00:30:00";2816
How damn can I obtain this kind of lovely formated file :
2022-11-19 00:00:00 2098
2022-11-19 23:30:00 218
2022-11-19 23:00:00 606
etc.
| [
"Try:\nimport pandas as pd\n\ncurrent_date = None\nall_data = []\nwith open(\"your_file.txt\", \"r\") as f_in:\n # skip first 5 rows (header)\n for _ in range(5):\n next(f_in)\n\n for row in map(str.strip, f_in):\n row = row.replace('\"', \"\")\n if row == \"\":\n continue\n if \"/\" in row:\n current_date = row\n else:\n all_data.append([current_date, *row.split(\";\")])\n\ndf = pd.DataFrame(all_data, columns=[\"Date\", \"Time\", \"Value\"])\nprint(df)\n\nPrints:\n Date Time Value\n0 19/11/2022; 00:00:00 4494\n1 19/11/2022; 23:30:00 1174\n2 19/11/2022; 23:00:00 1130\n3 19/11/2022; 01:30:00 216\n4 19/11/2022; 01:00:00 2672\n5 19/11/2022; 00:30:00 2816\n6 18/11/2022; 00:00:00 4494\n7 18/11/2022; 23:30:00 1174\n8 18/11/2022; 23:00:00 1130\n9 18/11/2022; 01:30:00 216\n10 18/11/2022; 01:00:00 2672\n11 18/11/2022; 00:30:00 2816\n\n",
"Okay I have an idiotic brutforce solution for you, so dont take that as coding recommondation but just something that gets the job done:\nimport itertools\ndList = [f\"{f}/{s}/2022\" for f, s in itertools.product(range(1, 32), range(1, 13))]\n\ni assume you have a text file with that so im just gonna use that:\nfile = 'yourfilename.txt'\n#make sure youre running the program in the same directory as the .txt file\nwith open(file, \"r\") as f:\n global lines\n lines = f.readlines()\nlines = [word.replace('\\n','') for word in lines]\nfor i in lines:\n if i in dList:\n curD = i\n else:\n with open('output.txt', 'w') as g:\n g.write(f'{i} {(i.split())[0]} {(i.split())[1]}')\n\nmake sure to create a file called output.txt in the same directory and everything will get writen into that file.\n",
"Using pandas operations would be like the following:\ndata.csv\n19/11/2022 \n00:00:00 2098\n23:30:00 218\n23:00:00 606\n01:30:00 216\n01:00:00 2672\n00:30:00 2816\n18/11/2022 \n00:00:00 1994\n23:30:00 260\n23:00:00 732\n01:30:00 200\n01:00:00 1378\n00:30:00 2520\n17/11/2022 \n00:00:00 1830\n23:30:00 96\n23:00:00 122\n01:30:00 694\n01:00:00 2950\n00:30:00 3062\n16/11/2022 \n00:00:00 2420\n23:30:00 678\n23:00:00 644\n\nImplementation\nimport pandas as pd\ndf = pd.read_csv('data.csv', header=None)\ndf['amount'] = df[0].apply(lambda item:item.split(' ')[-1] if item.find(':')>0 else None)\ndf['time'] = df[0].apply(lambda item:item.split(' ')[0] if item.find(':')>0 else None)\ndf['date'] = df[0].apply(lambda item:item if item.find('/')>0 else None)\ndf['date'] = df['date'].fillna(method='ffill')\ndf = df.dropna(subset=['amount'], how='any')\ndf = df.drop(0, axis=1)\nprint(df)\n\noutput\n amount time date\n1 2098 00:00:00 19/11/2022 \n2 218 23:30:00 19/11/2022 \n3 606 23:00:00 19/11/2022 \n4 216 01:30:00 19/11/2022 \n5 2672 01:00:00 19/11/2022 \n6 2816 00:30:00 19/11/2022 \n8 1994 00:00:00 18/11/2022 \n9 260 23:30:00 18/11/2022 \n10 732 23:00:00 18/11/2022 \n11 200 01:30:00 18/11/2022 \n12 1378 01:00:00 18/11/2022 \n13 2520 00:30:00 18/11/2022 \n15 1830 00:00:00 17/11/2022 \n16 96 23:30:00 17/11/2022 \n17 122 23:00:00 17/11/2022 \n18 694 01:30:00 17/11/2022 \n19 2950 01:00:00 17/11/2022 \n20 3062 00:30:00 17/11/2022 \n22 2420 00:00:00 16/11/2022 \n23 678 23:30:00 16/11/2022 \n24 644 23:00:00 16/11/2022 \n\n"
] | [
1,
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"parsing",
"python",
"reindex"
] | stackoverflow_0074667137_dataframe_pandas_parsing_python_reindex.txt |
Q:
How to run python coding in anaconda prompt using vba?
I am attempting to run Python code using VBA.
However, when running it using VBA, it was not successful
(I discovered that it is not running in the Anaconda Prompt).
The code is attached as follows. I appreciate the help.
Sub RunPythonScript()
Dim objShell As Object
Dim PythonExePath As String, PythonScriptPath As String
Set objShell = VBA.CreateObject("Wscript.Shell")
PythonExePath = """C:xxx.exe"""
PythonScriptPath = """C:xxx.py"""
objShell.Run PythonExePath & " " & PythonScriptPath
End Sub
Alternatively, I ran it manually in the Anaconda Prompt and the code works.
"C:xxx.exe" "C:xxx.py"
What I observed on screen was a black cmd window popping up and disappearing in a second. It did not work as expected. Is there anything I input incorrectly?
Sub RunPythonScript()
Dim pythonExePath As String, pythonScriptPath As String
pythonExePath = """C:\Users\xxx\Anaconda3\python.exe"""
pythonScriptPath = """C:\Users\xxx\xxx.py"""
Shell pythonExePath & " " & pythonScriptPath, vbNormalFocus
End Sub
A:
The code you provided looks like it is trying to run a Python script using the Wscript.Shell object in VBA, which is used to run external programs and scripts. However, this will not work for running a Python script in the Anaconda Prompt, as the Anaconda Prompt is a command-line interface (CLI) and not a script.
To run a Python script in the Anaconda Prompt using VBA, you will need to use the Shell function to run the python.exe executable in the Anaconda Prompt and pass your Python script as a command-line argument. Here is an example of how you could do this:
Sub RunPythonScript()
Dim pythonExePath As String, pythonScriptPath As String
' Replace "C:\Program Files\Anaconda3\python.exe" with the path to your Anaconda Python installation
pythonExePath = """C:\Program Files\Anaconda3\python.exe"""
' Replace "C:\scripts\myscript.py" with the path to your Python script
pythonScriptPath = """C:\scripts\myscript.py"""
Shell pythonExePath & " " & pythonScriptPath, vbNormalFocus
End Sub
This code will open the Anaconda Prompt and run the python.exe executable, passing the path to your Python script as a command-line argument. This will cause the Python script to be executed in the Anaconda Prompt.
edit;
or you can try this
Sub RunPythonScript()
Dim objShell As Object
Dim PythonExePath As String, PythonScriptPath As String
Set objShell = VBA.Interaction.CreateObject("Wscript.Shell")
PythonExePath = "C:xxx.exe"
PythonScriptPath = "C:xxx.py"
objShell.Exec PythonExePath & " " & PythonScriptPath
End Sub
A:
Try both of these and feedback with your results.
Public Sub PythonOutput()
Dim oShell As Object, oCmd As String
Dim oExec As Object, oOutput As Object
Dim arg As Variant
Dim s As String, sLine As String
Set oShell = CreateObject("WScript.Shell")
arg = "somevalue"
oCmd = "python ""C:\Users\ryans\from_vba.py""" ' & " " & arg
Set oExec = oShell.Exec(oCmd)
Set oOutput = oExec.StdOut
While Not oOutput.AtEndOfStream
sLine = oOutput.ReadLine
If sLine <> "" Then s = s & sLine & vbNewLine
Wend
Debug.Print s
Set oOutput = Nothing: Set oExec = Nothing
Set oShell = Nothing
End Sub
Sub RunPython()
Dim objShell As Object
Dim PythonExe, PythonScript As String
Set objShell = VBA.CreateObject("Wscript.Shell")
PythonExe = """C:\Users\ryans\AppData\Local\Programs\Python\Python38\python.exe"""
PythonScript = "C:\Users\ryans\from_vba.py"
objShell.Run PythonExe & PythonScript
End Sub
| How to run python coding in anaconda prompt using vba? | I am attempting to run python coding using vba.
However, when running using vba, it was not successful .
(i discovered that it is not running in anaconda prompt)
the code is attached as follow. appreciate the help.
Sub RunPythonScript()
Dim objShell As Object
Dim PythonExePath As String, PythonScriptPath As String
Set objShell = VBA.CreateObject("Wscript.Shell")
PythonExePath = """C:xxx.exe"""
PythonScriptPath = """C:xxx.py"""
objShell.Run PythonExePath & " " & PythonScriptPath
End Sub
Alternatively, I manually run in anaconda prompt and the code works.
"C:xxx.exe" "C:xxx.py"
What I observed on screen was the black cmd window pop out and disappeared in second. It did not work as expected. Is there anything I input incorrectly?
Sub RunPythonScript()
Dim pythonExePath As String, pythonScriptPath As String
pythonExePath = """C:\Users\xxx\Anaconda3\python.exe"""
pythonScriptPath = """C:\Users\xxx\xxx.py"""
Shell pythonExePath & " " & pythonScriptPath, vbNormalFocus
End Sub
| [
"The code you provided looks like it is trying to run a Python script using the Wscript.Shell object in VBA, which is used to run external programs and scripts. However, this will not work for running a Python script in the Anaconda Prompt, as the Anaconda Prompt is a command-line interface (CLI) and not a script.\nTo run a Python script in the Anaconda Prompt using VBA, you will need to use the Shell function to run the python.exe executable in the Anaconda Prompt and pass your Python script as a command-line argument. Here is an example of how you could do this:\nSub RunPythonScript()\nDim pythonExePath As String, pythonScriptPath As String\n\n' Replace \"C:\\Program Files\\Anaconda3\\python.exe\" with the path to your Anaconda Python installation\npythonExePath = \"\"\"C:\\Program Files\\Anaconda3\\python.exe\"\"\"\n\n' Replace \"C:\\scripts\\myscript.py\" with the path to your Python script\npythonScriptPath = \"\"\"C:\\scripts\\myscript.py\"\"\"\n\nShell pythonExePath & \" \" & pythonScriptPath, vbNormalFocus\nEnd Sub\n\nThis code will open the Anaconda Prompt and run the python.exe executable, passing the path to your Python script as a command-line argument. This will cause the Python script to be executed in the Anaconda Prompt.\nedit;\nor you can try this\nSub RunPythonScript()\nDim objShell As Object\nDim PythonExePath As String, PythonScriptPath As String\n\nSet objShell = VBA.Interaction.CreateObject(\"Wscript.Shell\")\n\nPythonExePath = \"C:xxx.exe\"\nPythonScriptPath = \"C:xxx.py\"\n\nobjShell.Exec PythonExePath & \" \" & PythonScriptPath\nEnd Sub\n\n",
"Try both of these and feedback with your results.\nPublic Sub PythonOutput()\n\n Dim oShell As Object, oCmd As String\n Dim oExec As Object, oOutput As Object\n Dim arg As Variant\n Dim s As String, sLine As String\n\n Set oShell = CreateObject(\"WScript.Shell\")\n arg = \"somevalue\"\n oCmd = \"python \"\"C:\\Users\\ryans\\from_vba.py\"\"\" ' & \" \" & arg\n\n Set oExec = oShell.Exec(oCmd)\n Set oOutput = oExec.StdOut\n\n While Not oOutput.AtEndOfStream\n sLine = oOutput.ReadLine\n If sLine <> \"\" Then s = s & sLine & vbNewLine\n Wend\n\n Debug.Print s\n\n Set oOutput = Nothing: Set oExec = Nothing\n Set oShell = Nothing\n\nEnd Sub\n\n\nSub RunPython()\n\nDim objShell As Object\nDim PythonExe, PythonScript As String\n \n Set objShell = VBA.CreateObject(\"Wscript.Shell\")\n\n PythonExe = \"\"\"C:\\Users\\ryans\\AppData\\Local\\Programs\\Python\\Python38\\python.exe\"\"\"\n PythonScript = \"C:\\Users\\ryans\\from_vba.py\"\n \n objShell.Run PythonExe & PythonScript\n \nEnd Sub\n\n"
] | [
0,
0
] | [] | [] | [
"anaconda",
"python",
"vba"
] | stackoverflow_0074662928_anaconda_python_vba.txt |
Q:
Why does my second python async (scraping) function (which uses results from the first async (scraping) function) return no result?
Summary of what the program should do:
Step 1 (sync): Determine exactly how many pages need to be scraped.
Step 2 (sync): create the links to the pages to be scraped in a for-loop.
Step 3 (async): Use the link list from step 2 to get the links to the desired detail pages from each of these pages.
Step 4 (async): Use the result from step 3 to extract the detail information for each hofladen. This information is stored in a list for each farm store and each of these lists is appended to a global list.
Where do I have the problem?
The transition from step 3 to step 4 does not seem to work properly.
Traceback (most recent call last):
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module>
asyncio.run(main())
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main
await asyncio.gather(*tasks_detail_infos)
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos
data = JsonLdExtractor().extract(body_d)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract
tree = parse_html(htmlstring, encoding=encoding)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html
return lxml.html.fromstring(html, parser=parser)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring
doc = document_fromstring(html, parser=parser, base_url=base_url, **kw)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring
raise etree.ParserError(
lxml.etree.ParserError: Document is empty
Process finished with exit code 1
What did I do to isolate the problem?
In a first attempt I rewrote the async function append_detail_infos so that it no longer tries to create a list and append the values but only prints data[0]["name"].
This resulted in the error message
Traceback (most recent call last):
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module>
asyncio.run(main())
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main
await asyncio.gather(*tasks_detail_infos)
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos
data = JsonLdExtractor().extract(body_d)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract
tree = parse_html(htmlstring, encoding=encoding)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html
return lxml.html.fromstring(html, parser=parser)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring
doc = document_fromstring(html, parser=parser, base_url=base_url, **kw)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring
raise etree.ParserError(
lxml.etree.ParserError: Document is empty
Process finished with exit code 1
In the next attempt, I exported the links from detail_links as .csv and visually checked them and opened some of them to see if they were valid. This was also the case.
The program code:
import asyncio
import time
import aiohttp
import requests
import re
from selectolax.parser import HTMLParser
from extruct.jsonld import JsonLdExtractor
import pandas as pd
BASE_URL = "https://hofladen.info"
FIRST_PAGE = 1
def get_last_page(url: str) -> int:
res = requests.get(url).text
html = HTMLParser(res)
last_page = int(re.findall("(\d+)", html.css("li.page-last > a")[0].attributes["href"])[0])
return last_page
def build_links_to_pages(start: int, ende: int) -> list:
lst = []
for i in range(start, ende + 1):
url = f"https://hofladen.info/regionale-produkte?page={i}"
lst.append(url)
return lst
async def scrape_detail_links(url: str):
async with aiohttp.ClientSession() as session:
async with session.get(url, allow_redirects=True) as resp:
body = await resp.text()
html = HTMLParser(body)
for node in html.css(".sp13"):
detail_link = BASE_URL + node.attributes["href"]
detail_links.append(detail_link)
async def append_detail_infos(data):
my_detail_lst = []
# print(data[0]["name"]) # name for debugging purpose
my_detail_lst.append(data[0]["name"]) # name
my_detail_lst.append(data[0]["address"]["streetAddress"]) # str
my_detail_lst.append(data[0]["address"]["postalCode"]) # plz
my_detail_lst.append(data[0]["address"]["addressLocality"]) # ort
my_detail_lst.append(data[0]["address"]["addressRegion"]) # bundesland
my_detail_lst.append(data[0]["address"]["addressCountry"]) # land
my_detail_lst.append(data[0]["geo"]["latitude"]) # breitengrad
my_detail_lst.append(data[0]["geo"]["longitude"]) # längengrad
detail_infos.append(my_detail_lst)
async def scrape_detail_infos(detail_link: str):
async with aiohttp.ClientSession() as session_detailinfos:
async with session_detailinfos.get(detail_link) as res_d:
body_d = await res_d.text()
data = JsonLdExtractor().extract(body_d)
await append_detail_infos(data)
async def main() -> None:
start_time = time.perf_counter()
# Beginn individueller code
# ----------
global detail_links, detail_infos
detail_links, detail_infos = [], []
tasks = []
tasks_detail_infos = []
# extrahiere die letzte zu iterierende Seite
last_page = get_last_page("https://hofladen.info/regionale-produkte")
# scrape detail links
links_to_pages = build_links_to_pages(FIRST_PAGE, last_page)
for link in links_to_pages:
task = asyncio.create_task(scrape_detail_links(link))
tasks.append(task)
print("Saving the output of extracted information.")
await asyncio.gather(*tasks)
pd.DataFrame(data=detail_links).to_csv("detail_links.csv")
# scrape detail infos
for detail_url in detail_links:
task_detail_infos = asyncio.create_task(scrape_detail_infos(detail_url))
tasks_detail_infos.append(task_detail_infos)
await asyncio.gather(*tasks_detail_infos)
# Ende individueller Code
# ------------
time_difference = time.perf_counter() - start_time
print(f"Scraping time: {time_difference} seconds.")
print(len(detail_links))
# print(detail_infos[])
asyncio.run(main())
A working solution to the problem:
added allow_redirects=True to async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d:
added return_exceptions=True to await asyncio.gather(*tasks_detail_infos, return_exceptions=True)
A:
A working solution to the problem:
added allow_redirects=True to async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d:
added return_exceptions=True to await asyncio.gather(*tasks_detail_infos, return_exceptions=True)
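Sketched out (names taken from the question, append_detail_infos assumed unchanged, and gather_detail_infos is just an illustrative wrapper around the gather call in main()), the two changes look like this:
import asyncio
import aiohttp
from extruct.jsonld import JsonLdExtractor

async def scrape_detail_infos(detail_link: str):
    async with aiohttp.ClientSession() as session_detailinfos:
        # per the question, explicitly allowing redirects was part of the fix
        async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d:
            body_d = await res_d.text()
            data = JsonLdExtractor().extract(body_d)
            await append_detail_infos(data)

async def gather_detail_infos(detail_links):
    tasks = [asyncio.create_task(scrape_detail_infos(url)) for url in detail_links]
    # return_exceptions=True keeps one failing page from aborting the whole gather
    await asyncio.gather(*tasks, return_exceptions=True)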
| Why does my second python async (scraping) function (which uses results from the first async (scraping) function) return no result? | Summary of what the program should do:
Step 1 (sync): Determine exactly how many pages need to be scraped.
Step 2 (sync): create the links to the pages to be scraped in a for-loop.
Step 3 (async): Use the link list from step 2 to get the links to the desired detail pages from each of these pages.
Step 4 (async): Use the result from step 3 to extract the detail information for each hofladen. This information is stored in a list for each farm store and each of these lists is appended to a global list.
Where do I have the problem?
The transition from step 3 to step 4 does not seem to work properly.
Traceback (most recent call last):
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module>
asyncio.run(main())
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main
await asyncio.gather(*tasks_detail_infos)
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos
data = JsonLdExtractor().extract(body_d)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract
tree = parse_html(htmlstring, encoding=encoding)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html
return lxml.html.fromstring(html, parser=parser)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring
doc = document_fromstring(html, parser=parser, base_url=base_url, **kw)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring
raise etree.ParserError(
lxml.etree.ParserError: Document is empty
Process finished with exit code 1
What did I do to isolate the problem?
In a first attempt I rewrote the async function append_detail_infos so that it no longer tries to create a list and append the values but only prints data[0]["name"].
This resulted in the error message
Traceback (most recent call last):
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module>
asyncio.run(main())
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main
await asyncio.gather(*tasks_detail_infos)
File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos
data = JsonLdExtractor().extract(body_d)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract
tree = parse_html(htmlstring, encoding=encoding)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html
return lxml.html.fromstring(html, parser=parser)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring
doc = document_fromstring(html, parser=parser, base_url=base_url, **kw)
File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring
raise etree.ParserError(
lxml.etree.ParserError: Document is empty
Process finished with exit code 1
In the next attempt, I exported the links from detail_links as a .csv, visually checked them, and opened some of them to see if they were valid, which they were.
The program code:
import asyncio
import time
import aiohttp
import requests
import re
from selectolax.parser import HTMLParser
from extruct.jsonld import JsonLdExtractor
import pandas as pd
BASE_URL = "https://hofladen.info"
FIRST_PAGE = 1
def get_last_page(url: str) -> int:
res = requests.get(url).text
html = HTMLParser(res)
last_page = int(re.findall("(\d+)", html.css("li.page-last > a")[0].attributes["href"])[0])
return last_page
def build_links_to_pages(start: int, ende: int) -> list:
lst = []
for i in range(start, ende + 1):
url = f"https://hofladen.info/regionale-produkte?page={i}"
lst.append(url)
return lst
async def scrape_detail_links(url: str):
async with aiohttp.ClientSession() as session:
async with session.get(url, allow_redirects=True) as resp:
body = await resp.text()
html = HTMLParser(body)
for node in html.css(".sp13"):
detail_link = BASE_URL + node.attributes["href"]
detail_links.append(detail_link)
async def append_detail_infos(data):
my_detail_lst = []
# print(data[0]["name"]) # name for debugging purpose
my_detail_lst.append(data[0]["name"]) # name
my_detail_lst.append(data[0]["address"]["streetAddress"]) # street
my_detail_lst.append(data[0]["address"]["postalCode"]) # postal code
my_detail_lst.append(data[0]["address"]["addressLocality"]) # city
my_detail_lst.append(data[0]["address"]["addressRegion"]) # state
my_detail_lst.append(data[0]["address"]["addressCountry"]) # country
my_detail_lst.append(data[0]["geo"]["latitude"]) # latitude
my_detail_lst.append(data[0]["geo"]["longitude"]) # longitude
detail_infos.append(my_detail_lst)
async def scrape_detail_infos(detail_link: str):
async with aiohttp.ClientSession() as session_detailinfos:
async with session_detailinfos.get(detail_link) as res_d:
body_d = await res_d.text()
data = JsonLdExtractor().extract(body_d)
await append_detail_infos(data)
async def main() -> None:
start_time = time.perf_counter()
# Start of custom code
# ----------
global detail_links, detail_infos
detail_links, detail_infos = [], []
tasks = []
tasks_detail_infos = []
# extract the last page to iterate over
last_page = get_last_page("https://hofladen.info/regionale-produkte")
# scrape detail links
links_to_pages = build_links_to_pages(FIRST_PAGE, last_page)
for link in links_to_pages:
task = asyncio.create_task(scrape_detail_links(link))
tasks.append(task)
print("Saving the output of extracted information.")
await asyncio.gather(*tasks)
pd.DataFrame(data=detail_links).to_csv("detail_links.csv")
# scrape detail infos
for detail_url in detail_links:
task_detail_infos = asyncio.create_task(scrape_detail_infos(detail_url))
tasks_detail_infos.append(task_detail_infos)
await asyncio.gather(*tasks_detail_infos)
# End of custom code
# ------------
time_difference = time.perf_counter() - start_time
print(f"Scraping time: {time_difference} seconds.")
print(len(detail_links))
# print(detail_infos[])
asyncio.run(main())
A working solution to the problem:
added allow_redirects=True to async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d:
added return_exceptions=True to await asyncio.gather(*tasks_detail_infos, return_exceptions=True)
| [
"A working solution to the problem:\nadded\npython allow_redirects=True to python async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d:\nadded python return_exceptions=True to python await asyncio.gather(*tasks_detail_infos, return_exceptions=True)\n"
] | [
0
] | [] | [] | [
"aiohttp",
"python",
"python_3.x",
"python_asyncio"
] | stackoverflow_0074642424_aiohttp_python_python_3.x_python_asyncio.txt |
Q:
dataframe group by for all columns in new dataframe
I want to create a new dataframe with the values grouped by each column header (dataset).
This is the dataset I'm working with.
I essentially want a new dataframe which sums the occurrences of 1 and 0 for each feature (chocolate, fruity, etc.).
I tried this code with the groupby and size functions:
`
chocolate = data.groupby(["chocolate"]).size()
bar = data.groupby(["bar"]).size()
hard = data.groupby(["hard"]).size()
display(chocolate,bar, hard)
`
but this only gives me the sum per feature.
This is the end result I want to achieve:
end result
A:
You could try the following:
res = (
data
.drop(columns="competitorname")
.melt().value_counts()
.unstack()
.fillna(0).astype("int").T
)
Eliminate the columns that aren't relevant (I've only seen competitorname, but there could be more).
.melt the dataframe. The result has 2 columns, one with the column names, and another with the resp. 0/1 values.
Now .value_counts gives you a series that essentially contains what you are looking for.
Then you just have to .unstack the first index level (column names) and transpose the dataframe.
Example:
data = pd.DataFrame({
"competitorname": ["A", "B", "C"],
"chocolate": [1, 0, 0], "bar": [1, 0, 1], "hard": [1, 1, 1]
})
competitorname chocolate bar hard
0 A 1 1 1
1 B 0 0 1
2 C 0 1 1
Result:
variable bar chocolate hard
value
0 1 2 0
1 2 1 3
Alternative with .pivot_table:
res = (
data
.drop(columns="competitorname")
.melt().value_counts().to_frame()
.pivot_table(index="value", columns="variable", fill_value=0)
.droplevel(0, axis=1)
)
PS: Please don't post images; provide a little example (like here) that encapsulates your problem.
 | dataframe group by for all columns in new dataframe | I want to create a new dataframe with the values grouped by each column header (dataset).
This is the dataset I'm working with.
I essentially want a new dataframe which sums the occurrences of 1 and 0 for each feature (chocolate, fruity, etc.).
I tried this code with the groupby and size functions:
`
chocolate = data.groupby(["chocolate"]).size()
bar = data.groupby(["bar"]).size()
hard = data.groupby(["hard"]).size()
display(chocolate,bar, hard)
`
but this only gives me the sum per feature.
This is the end result I want to achieve:
end result
| [
"You could try the following:\nres = (\n data\n .drop(columns=\"competitorname\")\n .melt().value_counts()\n .unstack()\n .fillna(0).astype(\"int\").T\n)\n\n\nEliminate the columns that aren't relevant (I've only seen competitorname, but there could be more).\n.melt the dataframe. The result has 2 columns, one with the column names, and another with the resp. 0/1 values.\nNow .value_counts gives you a series that essentially contains what you are looking for.\nThen you just have to .unstack the first index level (column names) and transpose the dataframe.\n\nExample:\ndata = pd.DataFrame({\n \"competitorname\": [\"A\", \"B\", \"C\"],\n \"chocolate\": [1, 0, 0], \"bar\": [1, 0, 1], \"hard\": [1, 1, 1]\n})\n\n competitorname chocolate bar hard\n0 A 1 1 1\n1 B 0 0 1\n2 C 0 1 1\n\nResult:\nvariable bar chocolate hard\nvalue \n0 1 2 0\n1 2 1 3\n\nAlternative with .pivot_table:\nres = (\n data\n .drop(columns=\"competitorname\")\n .melt().value_counts().to_frame()\n .pivot_table(index=\"value\", columns=\"variable\", fill_value=0)\n .droplevel(0, axis=1)\n)\n\nPS: Please don't post images, provide a litte example (like here) that encapsulates your problem.\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074665750_dataframe_pandas_python.txt |
Q:
What's the optimal implementation of a sliding window over a number's bits?
Given a number, e.g. 0xD5B8, what is the most efficient way in Python to take subsets of its bits over a sliding window using only native libraries?
A method might look like the following:
def window_bits(n,w, s):
'''
n: the number
w: the window size
s: step size
'''
# code
window_bits(0xD5B8, 4, 4) # returns [[0b1101],[0b0101],[0b1011],[0b1000]]
window_bits(0xD5B8, 2, 2) # returns [[0b11],[0b01],[0b01],[0b01],[0b10],[0b11],[0b10],[0b00]]
Some things to consider:
should strive to use minimal possible memory footprint
can only use inbuilt libraries
as fast as possible.
if len(bin(n)) % w != 0, then the last window should exist, with a size less than w
Some of the suggestions, like How to iterate over a list in chunks, amount to converting the int using bin and iterating over it as a slice. However, those questions do not prove optimality. I would think that there are other possible bitwise operations that are more optimal than running over the bin string as a slice (a generic data structure), either from a memory or a speed perspective. This question is about the MOST optimal, not about what gets the job done, and it can be considered from memory, speed, or both. Ideally, an answer gives good reasons why its representation is the most optimal.
So, if it is provably the most optimal to convert to bin(x) and then just manage the bits as a slice, then that's the optimal methodology. But this is NOT about an easy way to move a window around bits. Every op and bit counts in this question.
A:
The "naive" option would be to create a bits array - bin(n)[2:] - and then use the answers from How to iterate over a list in chunks.
But this is most likely not so efficient assuming we can use bit operations. Another option is to shift-and-mask the input according to the window and step size:
def window_bits(n, w, step_size):
offset = n.bit_length() - w # the initial shift to get the MSB window
mask = 2**w-1 # To get the actual window we need
while offset >= 0:
print(f"{(n >> offset)&mask:x}")
offset -= step_size # advance the window
And running window_bits(0xD5B8, 4, 4) will indeed print each nibble on a separate line.
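For reference, with the example value from the question this prints the four nibbles:
window_bits(0xD5B8, 4, 4)
# d
# 5
# b
# 8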
A:
This is not the full answer yet, as I need to do more research, but wanted to add it to the question.
Here's a modification of Tomerikoo's answer that handles the ends better.
This is the "blue" section of the graph below.
def window_bits(n, w, s):
offset = n.bit_length() - w
mask = 2**w-1
ret = []
while offset >= 0:
ret.append((n >> offset) & mask)
offset -= s # advance the window
if offset < 0: # close the end
mask = 2**(-offset)-1
ret.append((n >> mask) & mask)
return ret
This, along with the red chunker algorithm mentioned next, was benchmarked with pytest-benchmark.
The red is the chunker used over the following function:
def chunker(n, size, s=None):
seq = bin(n)
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
These were parameterized the same, with the following:
numbers = [2**n for n in range(10)]
window_size = [4, 8, 12]
step_size = window_size
A couple things that stood out to me:
The chunker has much more of an even execution time whereas the window_bit function executes with a lot more variance.
The chunker is just faster in general.
I'm looking into why this might be the case, as it's not yet clear to me whether something else is at play here. I would think that the bit-shifting ops would be faster, but maybe there are slicing optimizations happening that I'm not aware of.
 | What's the optimal implementation of a sliding window over a number's bits? | Given a number, e.g. 0xD5B8, what is the most efficient way in Python to take subsets of its bits over a sliding window using only native libraries?
A method might look like the following:
def window_bits(n,w, s):
'''
n: the number
w: the window size
s: step size
'''
# code
window_bits(0xD5B8, 4, 4) # returns [[0b1101],[0b0101],[0b1011],[0b1000]]
window_bits(0xD5B8, 2, 2) # returns [[0b11],[0b01],[0b01],[0b01],[0b10],[0b11],[0b10],[0b00]]
Some things to consider:
should strive to use minimal possible memory footprint
can only use inbuilt libraries
as fast as possible.
if len(bin(n)) % w != 0, then the last window should exist, with a size less than w
Some of the suggestions, like How to iterate over a list in chunks, amount to converting the int using bin and iterating over it as a slice. However, those questions do not prove optimality. I would think that there are other possible bitwise operations that are more optimal than running over the bin string as a slice (a generic data structure), either from a memory or a speed perspective. This question is about the MOST optimal, not about what gets the job done, and it can be considered from memory, speed, or both. Ideally, an answer gives good reasons why its representation is the most optimal.
So, if it is provably the most optimal to convert to bin(x) and then just manage the bits as a slice, then that's the optimal methodology. But this is NOT about an easy way to move a window around bits. Every op and bit counts in this question.
| [
"The \"naive\" option would be to create a bits array - bin(n)[2:] - and then use the answers from How to iterate over a list in chunks.\nBut this is most likely not so efficient assuming we can use bit operations. Another option is to shift-and-mask the input according to the window and step size:\ndef window_bits(n, w, step_size):\n offset = n.bit_length() - w # the initial shift to get the MSB window\n mask = 2**w-1 # To get the actual window we need\n while offset >= 0:\n print(f\"{(n >> offset)&mask:x}\")\n offset -= step_size # advance the window\n\nAnd running window_bits(0xD5B8, 4, 4) will indeed print each nibble on a separate line.\n",
"This is not the full answer yet, as I need to do more research, but wanted to add it to the question.\nHere's a modification of Tomerikoo's answer that handles the ends better.\nThis is the \"blue\" section of the graph below.\ndef window_bits(n, w, s):\n offset = n.bit_length() - w \n mask = 2**w-1 \n ret = []\n while offset >= 0:\n ret.append((n >> offset) & mask)\n offset -= s # advance the window\n if offset < 0: # close the end\n mask = 2**(-offset)-1\n ret.append((n >> mask) & mask)\n return ret\n\nThis, along with the red chunker algo mentioned next, was benchmarked over pytest benchamrk.\nThe red is the chunker used over the following function:\ndef chunker(n, size, s=None):\n seq = bin(n)\n return (seq[pos:pos + size] for pos in range(0, len(seq), size))\n\n\nThese were parameterized the same, with the following:\nnumbers = [2**n for n in range(10)]\nwindow_size = [4, 8, 12]\nstep_size = window_size\n\nA couple things that stood out to me:\n\nThe chunker has much more of an even execution time whereas the window_bit function executes with a lot more variance.\nThe chunker is just faster in general.\n\nI'm looking into why this might be the case, as it's not clear yet to me if there's something else at play here. I would think that the bit shifting ops would be faster, but maybe there's some optimizations with slicing that's happening that I'm not sure about.\n"
] | [
1,
0
] | [] | [] | [
"bit",
"bit_shift",
"python"
] | stackoverflow_0074641295_bit_bit_shift_python.txt |
Q:
Returning values from TextInputs in Kivy
Does anyone know how to return the string of a TextInput in a Kivy widget? The TextInput is created inside the kv file.
<OrderScreen>:
BoxLayout:
TextInput:
size_hint: (.2, None)
pos_hint: {"center_y":0.5}
height: 30
width: 100
hint_text: "Food"
multiline: False
id: input
A:
Yes, you can return the string of a TextInput widget in Kivy by using the text property of the TextInput widget. For example:
textinput = self.ids['my_textinput']
textinput_string = textinput.text
Here is an example of a Kivy TextInput widget:
TextInput:
id: my_textinput
multiline: False
font_size: 20
size_hint: .5, .2
pos_hint: {'center_x': .5, 'center_y': .5}
 | Returning values from TextInputs in Kivy | Does anyone know how to return the string of a TextInput in a Kivy widget? The TextInput is created inside the kv file.
<OrderScreen>:
BoxLayout:
TextInput:
size_hint: (.2, None)
pos_hint: {"center_y":0.5}
height: 30
width: 100
hint_text: "Food"
multiline: False
id: input
| [
"Yes, you can return the string of a TextInput widget in Kivy by using the text property of the TextInput widget. For example:\ntextinput = self.ids['my_textinput']\ntextinput_string = textinput.text\n\nHere is an example of a Kivy TextInput widget:\nTextInput:\n id: my_textinput\n multiline: False\n font_size: 20\n size_hint: .5, .2\n pos_hint: {'center_x': .5, 'center_y': .5}\n\n"
] | [
0
] | [] | [] | [
"kivy",
"python",
"textinput"
] | stackoverflow_0074667394_kivy_python_textinput.txt |
Q:
Get the week numbers between two dates with python
I'd like to find the most pythonic way to output a list of the week numbers between two dates.
For example:
input
start = datetime.date(2011, 12, 25)
end = datetime.date(2012, 1, 21)
output
find_weeks(start, end)
>> [201152, 201201, 201202, 201203]
I've been struggling with the datetime library, with little success.
A:
Something along the lines of (update: removed less-readable option):
import datetime
def find_weeks(start,end):
l = []
for i in range((end-start).days + 1):
d = (start+datetime.timedelta(days=i)).isocalendar()[:2] # e.g. (2011, 52)
yearweek = '{}{:02}'.format(*d) # e.g. "201152"
l.append(yearweek)
return sorted(set(l))
start = datetime.date(2011, 12, 25)
end = datetime.date(2012, 1, 21)
print(find_weeks(start,end)[1:]) # [1:] to exclude first week.
Returns
['201152', '201201', '201202', '201203']
To include the first week (201151) simply remove [1:] after function call
A:
.isocalendar() is your friend here - it returns a tuple of (year, week of year, day of week). We use that to reset the start date to the start of the week, and then add on a week each time until we pass the end date:
import datetime
def find_weeks(start_date, end_date):
subtract_days = start_date.isocalendar()[2] - 1
current_date = start_date + datetime.timedelta(days=7-subtract_days)
weeks_between = []
while current_date <= end_date:
weeks_between.append(
'{}{:02d}'.format(*current_date.isocalendar()[:2])
)
current_date += datetime.timedelta(days=7)
return weeks_between
start = datetime.date(2011, 12, 25)
end = datetime.date(2012, 1, 21)
print(find_weeks(start, end))
This prints
['201152', '201201', '201202', '201203']
A:
Using Pandas
import pandas as pd
dates=pd.date_range(start=start,end=end,freq='W')
date_index=dates.year.astype(str)+dates.weekofyear.astype(str).str.zfill(2)
date_index.tolist()
A:
I suggest you the following easy-to-read solution:
import datetime
start = datetime.date(2011, 12, 25)
end = datetime.date(2012, 1, 21)
def find_weeks(start, end):
l = []
while (start.isocalendar()[1] != end.isocalendar()[1]) or (start.year != end.year):
l.append(start.isocalendar()[1] + 100*start.year)
start += datetime.timedelta(7)
l.append(start.isocalendar()[1] + 100*start.year)
return (l[1:])
print(find_weeks(start, end))
>> [201252, 201201, 201202, 201203]
A:
I prefer the arrow style solution here (might need pip install arrow):
import arrow
start = arrow.get('2011-12-25')
end = arrow.get('2012-01-21')
weeks = list(arrow.Arrow.span_range('week', start, end))
result looks like this:
>> from pprint import pprint
>> pprint(weeks[1:])
[(<Arrow [2011-12-19T00:00:00+00:00]>,
<Arrow [2011-12-25T23:59:59.999999+00:00]>),
(<Arrow [2011-12-26T00:00:00+00:00]>,
<Arrow [2012-01-01T23:59:59.999999+00:00]>),
(<Arrow [2012-01-02T00:00:00+00:00]>,
<Arrow [2012-01-08T23:59:59.999999+00:00]>),
(<Arrow [2012-01-09T00:00:00+00:00]>,
<Arrow [2012-01-15T23:59:59.999999+00:00]>),
(<Arrow [2012-01-16T00:00:00+00:00]>,
<Arrow [2012-01-22T23:59:59.999999+00:00]>)]
from there you can change the output to match the year and week number.
| Get the week numbers between two dates with python | I'd like to find the most pythonic way to output a list of the week numbers between two dates.
For example:
input
start = datetime.date(2011, 12, 25)
end = datetime.date(2012, 1, 21)
output
find_weeks(start, end)
>> [201152, 201201, 201202, 201203]
I've been struggling with the datetime library, with little success.
| [
"Something in the lines of (update: removed less-readable option)\nimport datetime\n\ndef find_weeks(start,end):\n l = []\n for i in range((end-start).days + 1):\n d = (start+datetime.timedelta(days=i)).isocalendar()[:2] # e.g. (2011, 52)\n yearweek = '{}{:02}'.format(*d) # e.g. \"201152\"\n l.append(yearweek)\n return sorted(set(l))\n\nstart = datetime.date(2011, 12, 25) \nend = datetime.date(2012, 1, 21)\n\nprint(find_weeks(start,end)[1:]) # [1:] to exclude first week.\n\nReturns\n['201152', '201201', '201202', '201203']\n\nTo include the first week (201151) simply remove [1:] after function call\n",
".isocalendar() is your friend here - it returns a tuple of (year, week of year, day of week). We use that to reset the start date to the start of th eweek, and then add on a week each time until we pass the end date:\nimport datetime\n\n\ndef find_weeks(start_date, end_date):\n subtract_days = start_date.isocalendar()[2] - 1\n current_date = start_date + datetime.timedelta(days=7-subtract_days)\n weeks_between = []\n while current_date <= end_date:\n weeks_between.append(\n '{}{:02d}'.format(*current_date.isocalendar()[:2])\n )\n current_date += datetime.timedelta(days=7)\n return weeks_between\n\nstart = datetime.date(2011, 12, 25)\nend = datetime.date(2012, 1, 21)\n\nprint(find_weeks(start, end))\n\nThis prints\n['201152', '201201', '201202', '201203']\n\n",
"Using Pandas\nimport pandas as pd\n\ndates=pd.date_range(start=start,end=end,freq='W')\ndate_index=dates.year.astype(str)+dates.weekofyear.astype(str).str.zfill(2)\ndate_index.tolist()\n\n",
"I suggest you the following easy-to-read solution: \nimport datetime\n\nstart = datetime.date(2011, 12, 25) \nend = datetime.date(2012, 1, 21)\n\ndef find_weeks(start, end):\n l = []\n while (start.isocalendar()[1] != end.isocalendar()[1]) or (start.year != end.year):\n l.append(start.isocalendar()[1] + 100*start.year)\n start += datetime.timedelta(7)\n l.append(start.isocalendar()[1] + 100*start.year)\n return (l[1:])\n\n\nprint(find_weeks(start, end))\n\n>> [201252, 201201, 201202, 201203]\n\n",
"I prefer the arrow style solution here (might need pip install arrow):\nimport arrow\n\nstart = arrow.get('2011-12-25')\nend = arrow.get('2012-01-21')\nweeks = list(arrow.Arrow.span_range('week', start, end))\n\nresult looks like this:\n>> from pprint import pprint\n>> pprint(weeks[1:])\n[(<Arrow [2011-12-19T00:00:00+00:00]>,\n <Arrow [2011-12-25T23:59:59.999999+00:00]>),\n (<Arrow [2011-12-26T00:00:00+00:00]>,\n <Arrow [2012-01-01T23:59:59.999999+00:00]>),\n (<Arrow [2012-01-02T00:00:00+00:00]>,\n <Arrow [2012-01-08T23:59:59.999999+00:00]>),\n (<Arrow [2012-01-09T00:00:00+00:00]>,\n <Arrow [2012-01-15T23:59:59.999999+00:00]>),\n (<Arrow [2012-01-16T00:00:00+00:00]>,\n <Arrow [2012-01-22T23:59:59.999999+00:00]>)]\n\nfrom there you can change the output to match the year and week number.\n"
] | [
6,
3,
3,
0,
0
] | [] | [] | [
"datetime",
"python",
"rrule",
"timedelta"
] | stackoverflow_0048927466_datetime_python_rrule_timedelta.txt |
Q:
How to get n longest entries of DataFrame?
I'm trying to get the n longest entries of a dask DataFrame. I tried calling nlargest on a dask DataFrame with two columns like this:
import dask.dataframe as dd
df = dd.read_csv("opendns-random-domains.txt", header=None, names=['domain_name'])
df['domain_length'] = df.domain_name.map(len)
print(df.head())
print(df.dtypes)
top_3 = df.nlargest(3, 'domain_length')
print(top_3.head())
The file opendns-random-domains.txt contains just a long list of domain names. This is what the output of the above code looks like:
domain_name domain_length
0 webmagnat.ro 12
1 nickelfreesolutions.com 23
2 scheepvaarttelefoongids.nl 26
3 tursan.net 10
4 plannersanonymous.com 21
domain_name object
domain_length float64
dtype: object
Traceback (most recent call last):
File "nlargest_test.py", line 9, in <module>
print(top_3.head())
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 382, in head
result = result.compute()
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 86, in compute
return compute(self, **kwargs)[0]
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 179, in compute
results = get(dsk, keys, **kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/threaded.py", line 57, in get
**kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 484, in get_async
raise(remote_exception(res, tb))
dask.async.TypeError: Cannot use method 'nlargest' with dtype object
Traceback
---------
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 267, in execute_task
result = _execute_task(task, data)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 249, in _execute_task
return func(*args2)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 2040, in <lambda>
f = lambda df: df.nlargest(n, columns)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3355, in nlargest
return self._nsorted(columns, n, 'nlargest', keep)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3318, in _nsorted
ser = getattr(self[columns[0]], method)(n, keep=keep)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/util/decorators.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/series.py", line 1898, in nlargest
return algos.select_n(self, n=n, keep=keep, method='nlargest')
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/algorithms.py", line 559, in select_n
raise TypeError("Cannot use method %r with dtype %s" % (method, dtype))
I'm confused, because I'm calling nlargest on the column which is of type float64 but still get this error saying it cannot be called on dtype object. Also this works fine in pandas. How can I get the n longest entries from a DataFrame?
A:
I was helped by explicit type conversion:
df['column'].astype(str).astype(float).nlargest(5)
A:
I tried to reproduce your problem but things worked fine. Can I recommend that you produce a Minimal Complete Verifiable Example?
Pandas example
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})
In [3]: df['y'] = df.x.map(len)
In [4]: df
Out[4]:
x y
0 a 1
1 bb 2
2 ccc 3
3 dddd 4
In [5]: df.nlargest(3, 'y')
Out[5]:
x y
3 dddd 4
2 ccc 3
1 bb 2
Dask dataframe example
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})
In [3]: import dask.dataframe as dd
In [4]: ddf = dd.from_pandas(df, npartitions=2)
In [5]: ddf['y'] = ddf.x.map(len)
In [6]: ddf.nlargest(3, 'y').compute()
Out[6]:
x y
3 dddd 4
2 ccc 3
1 bb 2
Alternatively, perhaps this is just working now on the git master version?
A:
You only need to change the type of respective column to int or float using .astype().
For example, in your case:
top_3 = df['domain_length'].astype(float).nlargest(3)
A:
If you want to get the values with the most occurrences from a String type column you may use value_counts() with nlargest(n), where n is the number of elements you want to bring.
df['your_column'].value_counts().nlargest(3)
It will bring the top 3 occurrences from that column.
A:
This is how my first data frame looks.
This is how my new data frame looks after getting the top 5:
station_count.nlargest(5, 'count')
You have to apply the nlargest command to a column that has an int data type, not a string, so it can perform the numeric comparison.
Always pass the top n number first, followed by the corresponding column of int type.
| How to get n longest entries of DataFrame? | I'm trying to get the n longest entries of a dask DataFrame. I tried calling nlargest on a dask DataFrame with two columns like this:
import dask.dataframe as dd
df = dd.read_csv("opendns-random-domains.txt", header=None, names=['domain_name'])
df['domain_length'] = df.domain_name.map(len)
print(df.head())
print(df.dtypes)
top_3 = df.nlargest(3, 'domain_length')
print(top_3.head())
The file opendns-random-domains.txt contains just a long list of domain names. This is what the output of the above code looks like:
domain_name domain_length
0 webmagnat.ro 12
1 nickelfreesolutions.com 23
2 scheepvaarttelefoongids.nl 26
3 tursan.net 10
4 plannersanonymous.com 21
domain_name object
domain_length float64
dtype: object
Traceback (most recent call last):
File "nlargest_test.py", line 9, in <module>
print(top_3.head())
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 382, in head
result = result.compute()
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 86, in compute
return compute(self, **kwargs)[0]
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 179, in compute
results = get(dsk, keys, **kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/threaded.py", line 57, in get
**kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 484, in get_async
raise(remote_exception(res, tb))
dask.async.TypeError: Cannot use method 'nlargest' with dtype object
Traceback
---------
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 267, in execute_task
result = _execute_task(task, data)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 249, in _execute_task
return func(*args2)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 2040, in <lambda>
f = lambda df: df.nlargest(n, columns)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3355, in nlargest
return self._nsorted(columns, n, 'nlargest', keep)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3318, in _nsorted
ser = getattr(self[columns[0]], method)(n, keep=keep)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/util/decorators.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/series.py", line 1898, in nlargest
return algos.select_n(self, n=n, keep=keep, method='nlargest')
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/algorithms.py", line 559, in select_n
raise TypeError("Cannot use method %r with dtype %s" % (method, dtype))
I'm confused, because I'm calling nlargest on the column which is of type float64 but still get this error saying it cannot be called on dtype object. Also this works fine in pandas. How can I get the n longest entries from a DataFrame?
| [
"I was helped by explicit type conversion:\ndf['column'].astype(str).astype(float).nlargest(5)\n\n",
"I tried to reproduce your problem but things worked fine. Can I recommend that you produce a Minimal Complete Verifiable Example?\nPandas example\nIn [1]: import pandas as pd\n\nIn [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})\n\nIn [3]: df['y'] = df.x.map(len)\n\nIn [4]: df\nOut[4]: \n x y\n0 a 1\n1 bb 2\n2 ccc 3\n3 dddd 4\n\nIn [5]: df.nlargest(3, 'y')\nOut[5]: \n x y\n3 dddd 4\n2 ccc 3\n1 bb 2\n\nDask dataframe example\nIn [1]: import pandas as pd\n\nIn [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})\n\nIn [3]: import dask.dataframe as dd\n\nIn [4]: ddf = dd.from_pandas(df, npartitions=2)\n\nIn [5]: ddf['y'] = ddf.x.map(len)\n\nIn [6]: ddf.nlargest(3, 'y').compute()\nOut[6]: \n x y\n3 dddd 4\n2 ccc 3\n1 bb 2\n\nAlternatively, perhaps this is just working now on the git master version?\n",
"You only need to change the type of respective column to int or float using .astype().\nFor example, in your case:\ntop_3 = df['domain_length'].astype(float).nlargest(3)\n\n",
"If you want to get the values with the most occurrences from a String type column you may use value_counts() with nlargest(n), where n is the number of elements you want to bring.\ndf['your_column'].value_counts().nlargest(3)\n\nIt will bring the top 3 occurrences from that column.\n",
"This is how my first data frame look.\nThis is how my new data frame looks after getting top 5.\n'''\nstation_count.nlargest(5,'count')\n'''\nYou have to give (nlargest) command to a column who have int data type and not in string so it can calculate the count.\nAlways top n number followed by its corresponding column that is int type.\n"
] | [
3,
0,
0,
0,
0
] | [] | [] | [
"dask",
"python"
] | stackoverflow_0038978432_dask_python.txt |
Q:
Split torch tensor : max size and end of the sentence
I would like to split a tensor into several tensors with torch in Python.
The tensor is the tokenization of a long text.
First here is what I had done:
tensor = tensor([[ 3746, 3120, 1024, ..., 2655, 24051, 2015]]) #size 14714
result = tensor.split(510)
It works, but now I would like to refine this and make it so that it can't split in the middle of a sentence, only at the end of a sentence, i.e. at the dot '.' (token 1012). Of course the resulting tensors will not all be the same size, but they will have to respect a maximum size (510 for example).
Thanks for your help
A:
I tried out a solution; it's not straightforward, but it does the trick.
Also, you might want to install the more_itertools library; I used it to do the split.
from transformers import BertTokenizerFast
import typer
import torch
from pathlib import Path
from typing import List
from more_itertools import split_after
def open_txt(txt_path:Path) -> List[str]:
with open(txt_path, 'r') as txt_file:
return [txt.replace('\n', '') for txt in txt_file.readlines()]
def pad_token(input_ids, pad_length=510):
split_input_ids = list(split_after(input_ids, lambda x: x == 1012))
# Pad to 510
new_input_ids = []
for ids in split_input_ids:
ids += [0] * (pad_length - len(ids))
new_input_ids.append(ids)
return new_input_ids
def main(
text_path:Path=typer.Option('sent.txt')
):
tokenizer:BertTokenizerFast = BertTokenizerFast.from_pretrained('bert-base-uncased')
sentence = open_txt(text_path)
sentence = ''.join(sentence)
features = tokenizer(
sentence, padding='max_length'
)
input_ids = features['input_ids']
new_input_ids = pad_token(input_ids, pad_length=600)
# print(tokenizer.decode(new_input_ids[0]))
# convert to torch
new_input_ids = torch.tensor(new_input_ids)
# features['input_ids'] = new_input_ids
print(new_input_ids[0])
if __name__ == '__main__':
typer.run(main)
A:
Not sure if there's a built-in function in PyTorch to do what you asked, which involves several steps:
Count how many sentences there are; assign this number to n_sents.
Compute the sentences' lengths and the indices where they start in tensor. Let 1-D tensors length and start store the lengths and the indices respectively. In other words, length[i] and start[i] are the length and the start index of the i-th sentence respectively, i.e. tensor[start[i]] is the first token of the i-th sentence.
Create a result tensor result = torch.full((n_sents, max_len), pad_value) where max_len = max(length).
Assign result[i, :length[i]] = tensor[start[i] : start[i]+length[i]] for all i in range(n_sents).
A side comment: detecting sentence endings by recognizing periods doesn't always work, e.g. "I went to dr. Smith yesterday." is one sentence but has two periods.
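As a rough illustration only (not part of the original answer), the four steps above could be sketched as follows. It assumes a 1-D token tensor, uses 1012 as the period id and 0 as pad_value, and ignores both the 510 max-size constraint and any tokens after the last period:
import torch

def split_at_periods(tokens, dot_id=1012, pad_value=0):
    flat = tokens.flatten()
    ends = torch.nonzero(flat == dot_id).flatten()          # index of each '.' token
    n_sents = ends.numel()                                   # step 1: number of sentences
    starts = torch.cat([torch.tensor([0]), ends[:-1] + 1])   # step 2: start index of each sentence
    lengths = ends - starts + 1                              # step 2: length of each sentence
    result = torch.full((n_sents, int(lengths.max())), pad_value, dtype=flat.dtype)  # step 3
    for i in range(n_sents):                                 # step 4: copy each sentence, rest stays padded
        result[i, :int(lengths[i])] = flat[starts[i]:ends[i] + 1]
    return result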
 | Split torch tensor : max size and end of the sentence | I would like to split a tensor into several tensors with torch in Python.
The tensor is the tokenization of a long text.
First here is what I had done:
tensor = tensor([[ 3746, 3120, 1024, ..., 2655, 24051, 2015]]) #size 14714
result = tensor.split(510)
It works, but now I would like to refine this and make it so that it can't split in the middle of a sentence, only at the end of a sentence, i.e. at the dot '.' (token 1012). Of course the resulting tensors will not all be the same size, but they will have to respect a maximum size (510 for example).
Thanks for your help
| [
"i tried it out a solution but its not straightforward but does the trick\noo and you might want to install this library more_itertools, used this to do the split\nfrom transformers import BertTokenizerFast\nimport typer\nimport torch\n\nfrom pathlib import Path\nfrom typing import List\nfrom more_itertools import split_after\n\ndef open_txt(txt_path:Path) -> List[str]:\n with open(txt_path, 'r') as txt_file:\n return [txt.replace('\\n', '') for txt in txt_file.readlines()]\n \ndef pad_token(input_ids, pad_length=510):\n split_input_ids = list(split_after(input_ids, lambda x: x == 1012))\n \n # Pad to 510\n new_input_ids = []\n for ids in split_input_ids:\n ids += [0] * (pad_length - len(ids))\n new_input_ids.append(ids)\n \n return new_input_ids\n\ndef main(\n text_path:Path=typer.Option('sent.txt')\n):\n tokenizer:BertTokenizerFast = BertTokenizerFast.from_pretrained('bert-base-uncased')\n \n sentence = open_txt(text_path)\n sentence = ''.join(sentence)\n \n features = tokenizer(\n sentence, padding='max_length'\n )\n \n input_ids = features['input_ids']\n \n new_input_ids = pad_token(input_ids, pad_length=600)\n # print(tokenizer.decode(new_input_ids[0]))\n # convert to torch\n new_input_ids = torch.tensor(new_input_ids)\n # features['input_ids'] = new_input_ids\n \n print(new_input_ids[0])\n\nif __name__ == '__main__':\n typer.run(main)\n\n\n",
"Not sure if there's a built-in function in PyTorch to do what you asked, which involves several steps:\n\nCount how many sentences there are; assign this number to n_sents.\nCompute the sentences' lengths and the indices where they start in tensor. Let 1-D tensors length and start store the lengths and the indices respectively. In other words, length[i] and start[i] are the length and the start index of the i-th sentence respectively, i.e. tensor[start[i]] is the first token of the i-th sentence.\nCreate a result tensor result = torch.full((n_sents, max_len), pad_value) where max_len = max(length).\nAssign result[i, :length[i]] = tensor[start[i] : start[i]+length[i]] for all i in range(n_sents).\n\nA side comment: detecting sentence ending by recognizing periods doesn't always work e.g., I went to dr. Smith yesterday. is one sentence but has two periods.\n"
] | [
0,
0
] | [] | [] | [
"nlp",
"python",
"pytorch",
"tensor",
"torch"
] | stackoverflow_0074488479_nlp_python_pytorch_tensor_torch.txt |
Q:
How to interact with a turtle when it is invisible?
I have been creating a game with turtle and I wanted to make the background change when a certain area is clicked. So I used a turtle and the onclick() method, but then realized that it did not look good with the background, so I tried to use the hideturtle() method to hide it. But when I hid the turtle, the clicking function did not work.
This is something like my code:
t = turtle.Turtle()
t.hideturtle()
def my_function(x, y):
    print("this function would change the bg but that doesn't matter right now")
t.onclick(my_function, btn=1, add=None)
As you can see, if the hideturtle() is not there, when the turtle is clicked the function runs. But when the hideturtle() is called the turtle doesn't respond to clicks.
A:
I passed your question to ChatGPT; this is its answer :) :
It sounds like you're running into a problem where the turtle becomes
unresponsive to clicks after you hide it. This is likely because the
turtle's clickable area is also hidden when you hide the turtle.
One solution to this problem would be to create a separate turtle that
is used only for clicking, and keep it visible at all times. You could
do this by creating a new turtle, setting its shape to "blank", and
then using the onclick() method to register your function. This way,
the turtle will be invisible but still respond to clicks.
Here is an example of how you could do this:
import turtle
# Create a new turtle for clicking
click_turtle = turtle.Turtle()
# Set the shape to "blank" to make it invisible
click_turtle.shape("blank")
# Register the function to run when the turtle is clicked
click_turtle.onclick(my_function, btn=1, add=None)
# Hide the original turtle
t.hideturtle()
By using this approach, you can hide the original turtle and still
have a visible area that responds to clicks.
 | How to interact with a turtle when it is invisible? | I have been creating a game with turtle and I wanted to make the background change when a certain area is clicked. So I used a turtle and the onclick() method, but then realized that it did not look good with the background, so I tried to use the hideturtle() method to hide it. But when I hid the turtle, the clicking function did not work.
This is something like my code:
t = turtle.Turtle()
t.hideturtle()
def my_function(x, y):
    print("this function would change the bg but that doesn't matter right now")
t.onclick(my_function, btn=1, add=None)
As you can see, if the hideturtle() is not there, when the turtle is clicked the function runs. But when the hideturtle() is called the turtle doesn't respond to clicks.
| [
"Passed your question to ChatGpt, that's his answer :) :\n\nIt sounds like you're running into a problem where the turtle becomes\nunresponsive to clicks after you hide it. This is likely because the\nturtle's clickable area is also hidden when you hide the turtle.\nOne solution to this problem would be to create a separate turtle that\nis used only for clicking, and keep it visible at all times. You could\ndo this by creating a new turtle, setting its shape to \"blank\", and\nthen using the onclick() method to register your function. This way,\nthe turtle will be invisible but still respond to clicks.\nHere is an example of how you could do this:\nimport turtle\n\n# Create a new turtle for clicking\nclick_turtle = turtle.Turtle()\n\n# Set the shape to \"blank\" to make it invisible\nclick_turtle.shape(\"blank\")\n\n# Register the function to run when the turtle is clicked\nclick_turtle.onclick(my_function, btn=1, add=None)\n\n# Hide the original turtle\nt.hideturtle()\n\nBy using this approach, you can hide the original turtle and still\nhave a visible area that responds to clicks.\n\n"
] | [
0
] | [] | [] | [
"python",
"python_turtle",
"turtle_graphics"
] | stackoverflow_0074667472_python_python_turtle_turtle_graphics.txt |
Q:
how to convert 5 digits number to date in python
I would like to convert 44562, an int64 (5-digit number), to a date format like this: 1/1/2022.
Out[5]:
0 44562
1 44562
2 44563
3 44563
4 44564
Name: Date, dtype: int64
I try with
df['Date'].apply(lambda x: (datetime.utcfromtimestamp(0) + timedelta(int(x))).strftime("%m-%d-%Y"))
but the output date is not correct. Please help me fix the issue.
Out[13]:
0 01-03-2092
1 01-03-2092
2 01-04-2092
3 01-04-2092
4 01-05-2092
Name: Date2, dtype: object
A:
You very nearly had it:
df['Date'].apply(lambda x: (datetime(1899, 12, 30) + timedelta(days=int(x))).strftime("%m/%d/%Y"))
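For context (assuming these values are Excel-style serial day counts): 1899-12-30 is the conventional epoch that absorbs Excel's historical 1900 leap-year quirk, and a quick check shows the sample value maps to the expected date:
from datetime import datetime, timedelta

print((datetime(1899, 12, 30) + timedelta(days=44562)).strftime("%m/%d/%Y"))
# 01/01/2022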
 | how to convert 5 digits number to date in python | I would like to convert 44562, an int64 (5-digit number), to a date format like this: 1/1/2022.
Out[5]:
0 44562
1 44562
2 44563
3 44563
4 44564
Name: Date, dtype: int64
I try with
df['Date'].apply(lambda x: (datetime.utcfromtimestamp(0) + timedelta(int(x))).strftime("%m-%d-%Y"))
but the output date is not correct. Please help me fix the issue.
Out[13]:
0 01-03-2092
1 01-03-2092
2 01-04-2092
3 01-04-2092
4 01-05-2092
Name: Date2, dtype: object
| [
"You very nearly had it:\ndf['Date'].apply(lambda x: (datetime(1899, 12, 30) + timedelta(days=int(x))).strftime(\"%m/%d/%Y\"))\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074667378_python.txt |
Q:
Solving large-scale nonlinear system using exact Newton's method in SciPy
I am trying to solve a large-scale nonlinear system using the exact Newton method in SciPy. In my application, the Jacobian is easy to assemble (and factorize) as a sparse matrix.
It seems that all methods available in scipy.optimize.root approximate the Jacobian in one way or another, and I can't find a way to use Newton's method using the API that is discussed in SciPy's documentation.
Nonetheless, using the internal API, I have managed to use Newton's method with the following code:
from scipy.optimize.nonlin import nonlin_solve
x, info = nonlin_solve(f, x0, jac, line_search=False)
where f(x) is the residual and jac(x) is a callable that returns the Jacobian at x as a sparse matrix.
However, I am not sure whether this function is meant to be used outside SciPy, or whether it is subject to change without notice.
Would this be the recommended approach?
A:
It is meant to be used.
Scipy's private functions that are not meant to be used from the outside start with a _.
This was confirmed by the scipy team in an issue I raised recently: cf https://github.com/scipy/scipy/issues/17510
| Solving large-scale nonlinear system using exact Newton's method in SciPy | I am trying to solve a large-scale nonlinear system using the exact Newton method in SciPy. In my application, the Jacobian is easy to assemble (and factorize) as a sparse matrix.
It seems that all methods available in scipy.optimize.root approximate the Jacobian in one way or another, and I can't find a way to use Newton's method using the API that is discussed in SciPy's documentation.
Nonetheless, using the internal API, I have managed to use Newton's method with the following code:
from scipy.optimize.nonlin import nonlin_solve
x, info = nonlin_solve(f, x0, jac, line_search=False)
where f(x) is the residual and jac(x) is a callable that returns the Jacobian at x as a sparse matrix.
However, I am not sure whether this function is meant to be used outside SciPy, or whether it is subject to change without notice.
Would this be the recommended approach?
| [
"It is meant to be used.\nScipy's private functions that are not meant to be used from the outside start with a _.\nThis was confirmed by the scipy's team in an issue I raised recently: cf https://github.com/scipy/scipy/issues/17510\n"
] | [
0
] | [] | [] | [
"optimization",
"python",
"scipy"
] | stackoverflow_0068297903_optimization_python_scipy.txt |
Q:
python issue while importing a module from a file
Below is my main_call.py file:
from flask import Flask, jsonify, request
from test_invoke.invoke import end_invoke
from config import config
app = Flask(__name__)
@app.route("/get/posts", methods=["GET"])
def load_data():
res = "True"
# setting a Host url
host_url = config()["url"]
# getting request parameter and validating it
generate_schedule= end_invoke(host_url)
if generate_schedule == 200:
return jsonify({"status_code": 200, "message": "success"})
elif generate_schedule == 400:
return jsonify(
{"error": "Invalid ", "status_code": 400}
)
if __name__ == "__main__":
app.run(debug=True)
invoke.py
import requests
import json
import urllib
from urllib import request, parse
from config import config
from flask import request
def end_invoke(schedule_url):
headers = {
"Content-Type":"application/json",
}
schedule_data = requests.get(schedule_url, headers=headers)
if not schedule_data.status_code // 100 == 2:
error = schedule_data.json()["error"]
print(error)
return 400
else:
success = schedule_data.json()
return 200
tree structure
test_invoke
├── __init__.py
├── __pycache__
│ ├── config.cpython-38.pyc
│ └── invoke.cpython-38.pyc
├── config.py
├── env.yaml
├── invoke.py
└── main_call.py
However, when I run it, I get a ModuleNotFoundError:
python3 main_call.py
Traceback (most recent call last):
File "main_call.py", line 3, in <module>
from test_invoke.invoke import end_invoke
ModuleNotFoundError: No module named 'test_invoke'
A:
Python looks for packages and modules in its Python path. It searches (in that order):
the current directory (which may not be the path of the current Python module...)
the content of the PYTHONPATH environment variable
various (implementation and system dependant) system paths
As test_invoke is indeed a package, nothing is a priori bad in using it at the root for its modules provided it is accessible from the Python path.
But IMHO, it is always a bad idea to directly start a python module that resides inside a package. Better to make the package accessible and then use relative imports inside the package:
rename main_call.py to __main__.py
replace the offending import line with from .invoke import end_invoke
start the package as python -m test_invoke either for the directory containing test_invoke or after adding that directory to the PYTHONPATH environment variable
That way, the import will work even if you start your program from a different current directory.
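To make those steps concrete, a minimal sketch of the resulting layout (using the file and package names from the question) would be:
# test_invoke/__main__.py  (renamed from main_call.py)
from .invoke import end_invoke   # relative import inside the package
from .config import config       # the config import needs the same treatment

# then, from the directory that contains test_invoke/:
#   python -m test_invoke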
A:
You are trying to import a file that is available in the current directory.
So, please replace the line
from test_invoke.invoke import end_invoke with from invoke import end_invoke
 | python issue while importing a module from a file | Below is my main_call.py file:
from flask import Flask, jsonify, request
from test_invoke.invoke import end_invoke
from config import config
app = Flask(__name__)
@app.route("/get/posts", methods=["GET"])
def load_data():
res = "True"
# setting a Host url
host_url = config()["url"]
# getting request parameter and validating it
generate_schedule= end_invoke(host_url)
if generate_schedule == 200:
return jsonify({"status_code": 200, "message": "success"})
elif generate_schedule == 400:
return jsonify(
{"error": "Invalid ", "status_code": 400}
)
if __name__ == "__main__":
app.run(debug=True)
invoke.py
import requests
import json
import urllib
from urllib import request, parse
from config import config
from flask import request
def end_invoke(schedule_url):
headers = {
"Content-Type":"application/json",
}
schedule_data = requests.get(schedule_url, headers=headers)
if not schedule_data.status_code // 100 == 2:
error = schedule_data.json()["error"]
print(error)
return 400
else:
success = schedule_data.json()
return 200
tree structure
test_invoke
├── __init__.py
├── __pycache__
│ ├── config.cpython-38.pyc
│ └── invoke.cpython-38.pyc
├── config.py
├── env.yaml
├── invoke.py
└── main_call.py
However, when I run it, I get a ModuleNotFoundError:
python3 main_call.py
Traceback (most recent call last):
File "main_call.py", line 3, in <module>
from test_invoke.invoke import end_invoke
ModuleNotFoundError: No module named 'test_invoke'
| [
"Python looks for packages and modules in its Python path. It searches (in that order):\n\nthe current directory (which may not be the path of the current Python module...)\nthe content of the PYTHONPATH environment variable\nvarious (implementation and system dependant) system paths\n\nAs test_invoke is indeed a package, nothing is a priori bad in using it at the root for its modules provided it is accessible from the Python path.\nBut IMHO, it is always a bad idea to directly start a python module that resides inside a package. Better to make the package accessible and then use relative imports inside the package:\n\nrename main_call.py to __main__.py\nreplace the offending import line with from .invoke import end_invoke\nstart the package as python -m test_invoke either for the directory containing test_invoke or after adding that directory to the PYTHONPATH environment variable\n\nThat way, the import will work even if you start your program from a different current directory.\n",
"You are trying to import file available in the current directory.\nSo, please replace line\nfrom test_invoke.invoke import end_invoke with from invoke import end_invoke\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0074667350_python.txt |
Q:
Fix dates to correct format as days and months interchanged in certain rows
I have a dataset with a date column, and days and months get interchanged in certain rows after importing the dataset. Can someone please help me find a fix for this?
Correct data:
| First Name | Last Name | Date | Start time | Duration | DetectedArtifactPercentage | Average HR (bpm) | Average RespR (times/min) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Athlete | X | 02-02-2022 | 06:59:18 | 95 | 9 | 110 | 19.48 |
| Athlete | X | 02-09-2022 | 06:49:47 | 143 | 6 | 79 | 13.52 |
| Athlete | X | 02-09-2022 | 18:25:23 | 125 | 6 | 114 | 19.85 |
| Athlete | X | 03-09-2022 | 08:31:22 | 110 | 5 | 105 | 17.57 |
| Athlete | X | 03-09-2022 | 18:37:20 | 152 | 5 | 98 | 15.61 |
| Athlete | X | 04-09-2022 | 09:00:34 | 228 | 9 | 132 | 23.08 |
Interchanged dates after importing the dataset:
First Name Last Name Date Start time Duration ...
0 Athlete X 2022-02-02 06:59:18 95
1 Athlete X 2022-02-09 06:49:47 143
2 Athlete X 2022-02-09 18:25:23 125
3 Athlete X 2022-03-09 08:31:22 110
4 Athlete X 2022-03-09 18:37:20 152
I am not able to fix this. Pls help.
A:
I assume you're reading data from an Excel file, right? And in Excel the cells are represented as text, because otherwise the dates would have been read automatically without a problem. You should have something like this:
print(df.Date)
Output:
0 02-02-2022
1 02-09-2022
2 02-09-2022
3 03-09-2022
4 03-09-2022
5 04-09-2022
Name: Date, dtype: object
Cast it with the formatting and you'll be fine:
print(pd.to_datetime(df.Date, format='%d-%m-%Y'))
Output:
0 2022-02-02
1 2022-09-02
2 2022-09-02
3 2022-09-03
4 2022-09-03
5 2022-09-04
Name: Date, dtype: datetime64[ns]
In case it reads as datetime64[ns] in the first place, you may also swap day and month:
import datetime
df.Date.apply(lambda x: datetime.datetime.strftime(x, '%Y-%d-%m'))
Though this should be considered duct tape, as sooner or later you will come across a month value going beyond 12.
Your situation might also happen if you have some exotic datetime format on your PC. To make sure the date is converted the right way, beyond how it is printed, you may try:
print(df.Date.dt.day)
Output:
0 2
1 2
2 2
3 3
4 3
5 4
Name: Date, dtype: int64
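For completeness, here is a minimal end-to-end sketch (assuming the data comes from a hypothetical file named data.xlsx with the columns shown above) that parses the Date column as day-month-year right after reading:
import pandas as pd

# Hypothetical file name; replace it with your actual source file.
df = pd.read_excel("data.xlsx")

# Parse the text dates explicitly as day-month-year so pandas never
# guesses month-first.
df["Date"] = pd.to_datetime(df["Date"], format="%d-%m-%Y")

# Sanity check: the day component should match the original text.
print(df["Date"].dt.day)
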
| Fix dates to correct format as days and months interchanged in certain rows | I have a dataset with a date column, and the days and months get interchanged in certain rows after importing the dataset. Can someone please help me find a fix for this?
Correct data:
| First Name | Last Name | Date | Start time | Duration | DetectedArtifactPercentage | Average HR (bpm) | Average RespR (times/min) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Athlete | X | 02-02-2022 | 06:59:18 | 95 | 9 | 110 | 19.48 |
| Athlete | X | 02-09-2022 | 06:49:47 | 143 | 6 | 79 | 13.52 |
| Athlete | X | 02-09-2022 | 18:25:23 | 125 | 6 | 114 | 19.85 |
| Athlete | X | 03-09-2022 | 08:31:22 | 110 | 5 | 105 | 17.57 |
| Athlete | X | 03-09-2022 | 18:37:20 | 152 | 5 | 98 | 15.61 |
| Athlete | X | 04-09-2022 | 09:00:34 | 228 | 9 | 132 | 23.08 |
Interchanged dates after importing the dataset:
First Name Last Name Date Start time Duration ...
0 Athlete X 2022-02-02 06:59:18 95
1 Athlete X 2022-02-09 06:49:47 143
2 Athlete X 2022-02-09 18:25:23 125
3 Athlete X 2022-03-09 08:31:22 110
4 Athlete X 2022-03-09 18:37:20 152
I am not able to fix this. Please help.
| [
"I assume you're reading data from an excel file, right? And in excel the cells are represented by text, because otherwise it would have been read automatically without a problem. You should have something like this:\nprint(df.Date)\n\nOutput:\n0 02-02-2022\n1 02-09-2022\n2 02-09-2022\n3 03-09-2022\n4 03-09-2022\n5 04-09-2022\nName: Date, dtype: object\n\nCast it with the formating and you'll be fine:\nprint(pd.to_datetime(df.Date, format='%d-%m-%Y'))\n\nOutput:\n0 2022-02-02\n1 2022-09-02\n2 2022-09-02\n3 2022-09-03\n4 2022-09-03\n5 2022-09-04\nName: Date, dtype: datetime64[ns]\n\nIn case it reads as datetime64[ns] in the first place, you may also swap day and month:\nimport datetime\ndf.Date.apply(lambda x: datetime.datetime.strftime(x, '%Y-%d-%m'))\n\nThough it should be considered as a duct tape as sooner or later you come across month going beyond 12.\nYour situation might also happen if you have some exotic date time format on you PC. To make sure that date is converted the right way besides how it is printed, you may try:\nprint(df.Date.dt.day)\n\nOutput:\n0 2\n1 2\n2 2\n3 3\n4 3\n5 4\nName: Date, dtype: int64\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074667320_dataframe_pandas_python.txt |
Q:
Don't understand this ConfigParser.InterpolationSyntaxError
So I have tried to write a small config file for my script, which should specify an IP address, a port, and a URL that is built from the former two via interpolation. My config.ini looks like this:
[Client]
recv_url : http://%(recv_host):%(recv_port)/rpm_list/api/
recv_host = 172.28.128.5
recv_port = 5000
column_list = Name,Version,Build_Date,Host,Release,Architecture,Install_Date,Group,Size,License,Signature,Source_RPM,Build_Host,Relocations,Packager,Vendor,URL,Summary
In my script I parse this config file as follows:
config = SafeConfigParser()
config.read('config.ini')
column_list = config.get('Client', 'column_list').split(',')
URL = config.get('Client', 'recv_url')
If I run my script, this results in:
Traceback (most recent call last):
File "server_side_agent.py", line 56, in <module>
URL = config.get('Client', 'recv_url')
File "/usr/lib64/python2.7/ConfigParser.py", line 623, in get
return self._interpolate(section, option, value, d)
File "/usr/lib64/python2.7/ConfigParser.py", line 691, in _interpolate
self._interpolate_some(option, L, rawval, section, vars, 1)
File "/usr/lib64/python2.7/ConfigParser.py", line 716, in _interpolate_some
"bad interpolation variable reference %r" % rest)
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
I have tried debugging, which resulted in giving me one more line of error code:
...
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
Exception AttributeError: "'NoneType' object has no attribute 'path'" in <function _remove at 0x7fc4d32c46e0> ignored
Here I am stuck. I don't know where this _remove function is supposed to be... I tried searching for what the message is supposed to tell me, but quite frankly I have no idea. So...
Is there something wrong with my code?
What does '< function _remove at ... >' mean?
A:
There was indeed a mistake in my config.ini file. I did not regard the s at the end of %(...)s as a necessary syntax element. It is the string conversion specifier from Python's %-style formatting, and ConfigParser interpolation requires it on every reference.
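For illustration, here is a minimal sketch of the corrected file, verified with Python 3's configparser (the same %(name)s rule applies to the legacy SafeConfigParser used in the question):
import configparser

# Every interpolation reference needs the trailing "s": %(name)s.
fixed_ini = """
[Client]
recv_host = 172.28.128.5
recv_port = 5000
recv_url : http://%(recv_host)s:%(recv_port)s/rpm_list/api/
"""

config = configparser.ConfigParser()
config.read_string(fixed_ini)
print(config.get("Client", "recv_url"))

Output:
http://172.28.128.5:5000/rpm_list/api/
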
A:
My .ini file for starting the Python Pyramid server had a similar problem.
And to use the variable from the .env file, I needed to add the following: %%(VARIEBLE_FOR_EXAMPLE)s
But I got other problems, and I solved them with this: How can I use a system environment variable inside a pyramid ini file?
| Don't understand this ConfigParser.InterpolationSyntaxError | So I have tried to write a small config file for my script, which should specify an IP address, a port, and a URL that is built from the former two via interpolation. My config.ini looks like this:
[Client]
recv_url : http://%(recv_host):%(recv_port)/rpm_list/api/
recv_host = 172.28.128.5
recv_port = 5000
column_list = Name,Version,Build_Date,Host,Release,Architecture,Install_Date,Group,Size,License,Signature,Source_RPM,Build_Host,Relocations,Packager,Vendor,URL,Summary
In my script I parse this config file as follows:
config = SafeConfigParser()
config.read('config.ini')
column_list = config.get('Client', 'column_list').split(',')
URL = config.get('Client', 'recv_url')
If I run my script, this results in:
Traceback (most recent call last):
File "server_side_agent.py", line 56, in <module>
URL = config.get('Client', 'recv_url')
File "/usr/lib64/python2.7/ConfigParser.py", line 623, in get
return self._interpolate(section, option, value, d)
File "/usr/lib64/python2.7/ConfigParser.py", line 691, in _interpolate
self._interpolate_some(option, L, rawval, section, vars, 1)
File "/usr/lib64/python2.7/ConfigParser.py", line 716, in _interpolate_some
"bad interpolation variable reference %r" % rest)
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
I have tried debugging, which resulted in giving me one more line of error code:
...
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
Exception AttributeError: "'NoneType' object has no attribute 'path'" in <function _remove at 0x7fc4d32c46e0> ignored
Here I am stuck. I don't know where this _remove function is supposed to be... I tried searching for what the message is supposed to tell me, but quite frankly I have no idea. So...
Is there something wrong with my code?
What does '< function _remove at ... >' mean?
| [
"There was indeed a mistake in my config.ini file. I did not regard the s at the end of %(...)s as a necessary syntax element. I suppose it refers to \"string\" but I couldn't really confirm this.\n",
"My .ini file for starting the Python Pyramid server had a similar problem.\nAnd to use the variable from the .env file, I needed to add the following: %%(VARIEBLE_FOR_EXAMPLE)s\nBut I got other problems, and I solved them with this: How can I use a system environment variable inside a pyramid ini file?\n"
] | [
16,
0
] | [] | [] | [
"configparser",
"python",
"string_interpolation"
] | stackoverflow_0044156665_configparser_python_string_interpolation.txt |