Dataset schema (column name, type, observed length range):
    content             string  (85 to 101k characters)
    title               string  (0 to 150 characters)
    question            string  (15 to 48k characters)
    answers             sequence
    answers_scores      sequence
    non_answers         sequence
    non_answers_scores  sequence
    tags                sequence
    name                string  (35 to 137 characters)
Q: Python/Plotly: px bar customize hover Having this dataframe: df_grafico2 = pd.DataFrame(data = { "Usos" : ['Total','BK','BI','CyL','PyA','BC','VA','Resto','Total','BK','BI','CyL','PyA','BC','VA','Resto'], "Periodo" : ['Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*'], "Dolares" : [5247,869,2227,393,991,606,104,57,6074,996,2334,601,1231,676,202,33] }) I've tried this plot: plot_impo_usos = px.histogram(df_grafico2[df_grafico2.Usos != "Total"], x = "Usos", y = "Dolares",color="Periodo", barmode="group", template="none", hover_data =["Periodo", "Dolares"], ) plot_impo_usos.update_yaxes(tickformat = ",",title_text='En millones de USD') plot_impo_usos.update_layout(separators=",.",font_family='georgia', title_text = "Importación por usos económicos. Octubre de 2022 y octubre de 2021", legend=dict( yanchor="top", orientation = "h", y=1.07, xanchor="left", x=0.3)) But the hover changes automatically into "sum of Dolares", and it won't be possible to get the "Dolares" name back, even if I try this: labels={"Usos":"Uso","sum of Dólares": "Dólares"} The best outcome would be a hover template with: "Periodo", "Uso" and "Dolares" (with $ before). I've tried this, but it won't work either: plot_impo_usos.update_traces(hovertemplate='Periodo: %{color} <br>Uso: %{x} <br>Dolares: $%{y}') Help is much appreciated! A: The easiest way to do hover text is to use fig.data (in your case, plot_impo_usos.data) to get the graph configuration data, so it is easy to customize it. So copy the hover template that is set up for the two histograms and edit it. Being able to customize it with the configuration information gives you more freedom of expression. import plotly.express as px plot_impo_usos = px.histogram(df_grafico2[df_grafico2.Usos != "Total"], x = "Usos", y = "Dolares", color="Periodo", barmode="group", template="none", hover_data =["Periodo", "Dolares"], ) plot_impo_usos.data[0].hovertemplate = 'Periodo: Octubre 2021*<br>Usos: %{x}<br>Dolares: $%{y}<extra></extra>' plot_impo_usos.data[1].hovertemplate = 'Periodo: Octubre 2022*<br>Usos: %{x}<br>Dolares: $%{y}<extra></extra>' plot_impo_usos.update_yaxes(tickformat = ",", title_text='En millones de USD') plot_impo_usos.update_layout(separators=",.", font_family='georgia', title_text = "Importación por usos económicos. Octubre de 2022 y octubre de 2021", legend=dict( yanchor="top", orientation = "h", y=1.07, xanchor="left", x=0.3 ) ) plot_impo_usos.show() A: You were very close, in hovertemplate you just need to use %{fullData.name} instead of %{color} : plot_impo_usos.update_traces( hovertemplate='Periodo: %{fullData.name}<br>Uso: %{x}<br>Dolares: %{y:$,.2f}<extra></extra>' ) Nb. When using hovertemplate, there is this "secondary box" that appears next to the hover box, which is (quite often) annoying : Anything contained in tag <extra> is displayed in the secondary box, for example "{fullData.name}". To hide the secondary box completely, use an empty tag <extra></extra>. Note also that fullData.name is the name of the (hovered) trace, which, when using px.histogram(), is set automatically according to the value of color.
Python/Plotly: px bar customize hover
Having this dataframe: df_grafico2 = pd.DataFrame(data = { "Usos" : ['Total','BK','BI','CyL','PyA','BC','VA','Resto','Total','BK','BI','CyL','PyA','BC','VA','Resto'], "Periodo" : ['Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2021*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*','Octubre 2022*'], "Dolares" : [5247,869,2227,393,991,606,104,57,6074,996,2334,601,1231,676,202,33] }) I've tried this plot: plot_impo_usos = px.histogram(df_grafico2[df_grafico2.Usos != "Total"], x = "Usos", y = "Dolares",color="Periodo", barmode="group", template="none", hover_data =["Periodo", "Dolares"], ) plot_impo_usos.update_yaxes(tickformat = ",",title_text='En millones de USD') plot_impo_usos.update_layout(separators=",.",font_family='georgia', title_text = "Importación por usos económicos. Octubre de 2022 y octubre de 2021", legend=dict( yanchor="top", orientation = "h", y=1.07, xanchor="left", x=0.3)) But the hover changes automatically into "sum of Dolares", and it won't be possible to get the "Dolares" name back, even if I try this: labels={"Usos":"Uso","sum of Dólares": "Dólares"} The best outcome would be a hover template with: "Periodo", "Uso" and "Dolares" (with $ before). I've tried this, but it won't work either: plot_impo_usos.update_traces(hovertemplate='Periodo: %{color} <br>Uso: %{x} <br>Dolares: $%{y}') Help is much appreciated!
[ "The easiest way to do hover text is to use fig.data (in your case, plot_impo_usos.data). to get the graph configuration data, so it is easy to customize it. So copy the hover template that is set up for the two listograms and edit it. Being able to customize it with the configuration information gives you more freedom of expression.\nimport plotly.express as px\n\nplot_impo_usos = px.histogram(df_grafico2[df_grafico2.Usos != \"Total\"],\n x = \"Usos\",\n y = \"Dolares\",\n color=\"Periodo\",\n barmode=\"group\",\n template=\"none\",\n hover_data =[\"Periodo\", \"Dolares\"],\n )\n\nplot_impo_usos.data[0].hovertemplate = 'Periodo: Octubre 2021*<br>Usos: %{x}<br>Dolares: $%{y}<extra></extra>'\nplot_impo_usos.data[1].hovertemplate = 'Periodo: Octubre 2022*<br>Usos: %{x}<br>Dolares: $%{y}<extra></extra>'\n\nplot_impo_usos.update_yaxes(tickformat = \",\",\n title_text='En millones de USD')\nplot_impo_usos.update_layout(separators=\",.\",\n font_family='georgia',\n title_text = \"Importación por usos económicos. Octubre de 2022 y octubre de 2021\",\n legend=dict(\n yanchor=\"top\",\n orientation = \"h\",\n y=1.07,\n xanchor=\"left\",\n x=0.3\n )\n )\n\nplot_impo_usos.show()\n\n\n", "You were very close, in hovertemplate you just need to use %{fullData.name} instead of %{color} :\nplot_impo_usos.update_traces(\n hovertemplate='Periodo: %{fullData.name}<br>Uso: %{x}<br>Dolares: %{y:$,.2f}<extra></extra>'\n)\n\nNb. When using hovertemplate, there is this \"secondary box\" that appears next to the hover box, which is (quite often) annoying :\n\nAnything contained in tag <extra> is displayed in the secondary box,\nfor example \"{fullData.name}\". To hide the secondary\nbox completely, use an empty tag <extra></extra>.\n\nNote also that fullData.name is the name of the (hovered) trace, which, when using px.histogram(), is set automatically according to the value of color.\n" ]
[ 1, 0 ]
[]
[]
[ "plotly", "plotly_express", "python" ]
stackoverflow_0074658732_plotly_plotly_express_python.txt
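A minimal runnable sketch of the %{fullData.name} approach from the second answer above, reduced to a four-row frame so it can be pasted and run as-is (the data values and the $,.0f number format are illustrative choices, not from the original post):

import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "Usos": ["BK", "BI", "BK", "BI"],
    "Periodo": ["Octubre 2021*", "Octubre 2021*", "Octubre 2022*", "Octubre 2022*"],
    "Dolares": [869, 2227, 996, 2334],
})
fig = px.histogram(df, x="Usos", y="Dolares", color="Periodo", barmode="group")

# fullData.name is the trace name, which plotly express sets from the
# `color` column, so the hover shows Periodo without one template per trace.
fig.update_traces(
    hovertemplate="Periodo: %{fullData.name}<br>"
                  "Uso: %{x}<br>"
                  "Dolares: %{y:$,.0f}"  # d3-format: dollar sign plus thousands separator
                  "<extra></extra>"      # empty <extra> tag hides the secondary hover box
)
fig.show()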
Q: Python Pandas Converting Dataframe to Tidy Format dt = {'ID': [1, 1, 1, 1, 2, 2, 2, 2], 'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'], 'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'], 'Fall': [15, 12, 22, 10, 12, 16, 13, 23], 'Spring': [16, 13, 22, 14, 13, 14, 11, 20], 'Winter': [19, 27, 24, 20, 25, 21, 29, 26]} mydt = pd.DataFrame(dt, columns = ['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter']) So I have the above dataset. How can I convert the above dataset so that it looks like the following? Please let me know. A: You can try with set_index with stack + unstack out = (df.set_index(['ID','Test','Year']). stack().unstack(level=1). add_suffix('_Score').reset_index()) out Out[271]: Test ID Year level_2 Math_Score Writing_Score 0 1 2008 Fall 15 22 1 1 2008 Spring 16 22 2 1 2008 Winter 19 24 3 1 2009 Fall 12 10 4 1 2009 Spring 13 14 5 1 2009 Winter 27 20 6 2 2008 Fall 12 13 7 2 2008 Spring 13 11 8 2 2008 Winter 25 29 9 2 2009 Fall 16 23 10 2 2009 Spring 14 20 11 2 2009 Winter 21 26 A: Here is another solution: import pandas as pd data = {'ID': [1, 1, 1, 1, 2, 2, 2, 2], 'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'], 'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'], 'Fall': [15, 12, 22, 10, 12, 16, 13, 23], 'Spring': [16, 13, 22, 14, 13, 14, 11, 20], 'Winter': [19, 27, 24, 20, 25, 21, 29, 26]} df_data = pd.DataFrame(data, columns=['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter']) df = df_data.melt(id_vars=['ID', 'Year', 'Test'], var_name='Quarter', value_name='Score') df = df.pivot(index=['ID', 'Year', 'Quarter'], columns=['Test'], values=['Score']) df.columns = df.columns.droplevel(level=0) df = df.add_suffix('_Score').reset_index(drop=False) A: This requires two pivoting operations using tidypandas: import pandas as pd from tidypandas.tidy_accessor import tp data = {'ID': [1, 1, 1, 1, 2, 2, 2, 2], 'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'], 'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'], 'Fall': [15, 12, 22, 10, 12, 16, 13, 23], 'Spring': [16, 13, 22, 14, 13, 14, 11, 20], 'Winter': [19, 27, 24, 20, 25, 21, 29, 26]} df_data = pd.DataFrame(data, columns=['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter']) >>> (df_data.tp.pivot_longer(cols = ['Fall', 'Spring', 'Winter'], ... names_to='quarter' ... ) ... .tp.pivot_wider(id_cols = ['ID', 'quarter', 'Year'], ... names_from = 'Test', ... values_from = 'value', ... names_prefix = 'score_' ... ) ... ) Year quarter ID score_Math score_Writing 0 2008 Fall 1 15 22 1 2009 Fall 1 12 10 2 2008 Spring 1 16 22 3 2009 Spring 1 13 14 4 2008 Winter 1 19 24 5 2009 Winter 1 27 20 6 2008 Fall 2 12 13 7 2009 Fall 2 16 23 8 2008 Spring 2 13 11 9 2009 Spring 2 14 20 10 2008 Winter 2 25 29 11 2009 Winter 2 21 26
Python Pandas Converting Dataframe to Tidy Format
dt = {'ID': [1, 1, 1, 1, 2, 2, 2, 2], 'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'], 'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'], 'Fall': [15, 12, 22, 10, 12, 16, 13, 23], 'Spring': [16, 13, 22, 14, 13, 14, 11, 20], 'Winter': [19, 27, 24, 20, 25, 21, 29, 26]} mydt = pd.DataFrame(dt, columns = ['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter']) So I have the above dataset. How can I convert the above dataset so that it looks like the following? Please let me know.
[ "You can try with set_index with stack + unstack\nout = (df.set_index(['ID','Test','Year']).\n stack().unstack(level=1).\n add_suffix('_Score').reset_index())\nout\nOut[271]: \nTest ID Year level_2 Math_Score Writing_Score\n0 1 2008 Fall 15 22\n1 1 2008 Spring 16 22\n2 1 2008 Winter 19 24\n3 1 2009 Fall 12 10\n4 1 2009 Spring 13 14\n5 1 2009 Winter 27 20\n6 2 2008 Fall 12 13\n7 2 2008 Spring 13 11\n8 2 2008 Winter 25 29\n9 2 2009 Fall 16 23\n10 2 2009 Spring 14 20\n11 2 2009 Winter 21 26\n\n", "Here is another solution:\nimport pandas as pd\n\ndata = {'ID': [1, 1, 1, 1, 2, 2, 2, 2],\n 'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'],\n 'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'],\n 'Fall': [15, 12, 22, 10, 12, 16, 13, 23],\n 'Spring': [16, 13, 22, 14, 13, 14, 11, 20],\n 'Winter': [19, 27, 24, 20, 25, 21, 29, 26]}\ndf_data = pd.DataFrame(data, columns=['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter'])\n\ndf = df_data.melt(id_vars=['ID', 'Year', 'Test'], var_name='Quarter', value_name='Score')\ndf = df.pivot(index=['ID', 'Year', 'Quarter'], columns=['Test'], values=['Score'])\ndf.columns = df.columns.droplevel(level=0)\ndf = df.add_suffix('_Score').reset_index(drop=False)\n\n", "This requires two pivoting operations using tidypandas:\nimport pandas as pd\nfrom tidypandas.tidy_accessor import tp\n\ndata = {'ID': [1, 1, 1, 1, 2, 2, 2, 2],\n 'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'],\n 'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'],\n 'Fall': [15, 12, 22, 10, 12, 16, 13, 23],\n 'Spring': [16, 13, 22, 14, 13, 14, 11, 20],\n 'Winter': [19, 27, 24, 20, 25, 21, 29, 26]}\ndf_data = pd.DataFrame(data, columns=['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter'])\n\n>>> (df_data.tp.pivot_longer(cols = ['Fall', 'Spring', 'Winter'],\n... names_to='quarter'\n... )\n... .tp.pivot_wider(id_cols = ['ID', 'quarter', 'Year'],\n... names_from = 'Test',\n... values_from = 'value',\n... names_prefix = 'score_'\n... )\n... )\n Year quarter ID score_Math score_Writing\n0 2008 Fall 1 15 22\n1 2009 Fall 1 12 10\n2 2008 Spring 1 16 22\n3 2009 Spring 1 13 14\n4 2008 Winter 1 19 24\n5 2009 Winter 1 27 20\n6 2008 Fall 2 12 13\n7 2009 Fall 2 16 23\n8 2008 Spring 2 13 11\n9 2009 Spring 2 14 20\n10 2008 Winter 2 25 29\n11 2009 Winter 2 21 26\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0072933196_pandas_python.txt
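A compact variant of the melt/pivot answer above, written as a single pipeline; a sketch assuming pandas 1.1 or newer (needed for the list-valued pivot index):

import pandas as pd

data = {"ID": [1, 1, 1, 1, 2, 2, 2, 2],
        "Test": ["Math", "Math", "Writing", "Writing", "Math", "Math", "Writing", "Writing"],
        "Year": ["2008", "2009", "2008", "2009", "2008", "2009", "2008", "2009"],
        "Fall": [15, 12, 22, 10, 12, 16, 13, 23],
        "Spring": [16, 13, 22, 14, 13, 14, 11, 20],
        "Winter": [19, 27, 24, 20, 25, 21, 29, 26]}
df = pd.DataFrame(data)

tidy = (df.melt(id_vars=["ID", "Test", "Year"],    # wide -> long on the quarters
                var_name="Quarter", value_name="Score")
          .pivot(index=["ID", "Year", "Quarter"],  # long -> wide on Test
                 columns="Test", values="Score")
          .add_suffix("_Score")
          .reset_index())
print(tidy)  # one row per (ID, Year, Quarter) with Math_Score / Writing_Score columns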
Q: `pip install` Gives Error on Some Packages Some packages give errors when I try to install them using pip install. This is the error when I try to install chatterbot, but some other packages give this error as well: pip install chatterbot Collecting chatterbot Using cached ChatterBot-1.0.5-py2.py3-none-any.whl (67 kB) Collecting pint>=0.8.1 Downloading Pint-0.19.2.tar.gz (292 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 292.0/292.0 kB 1.6 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting pyyaml<5.2,>=5.1 Using cached PyYAML-5.1.2.tar.gz (265 kB) Preparing metadata (setup.py) ... done Collecting spacy<2.2,>=2.1 Using cached spacy-2.1.9.tar.gz (30.7 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [35 lines of output] Collecting setuptools Using cached setuptools-65.0.1-py3-none-any.whl (1.2 MB) Collecting wheel<0.33.0,>0.32.0 Using cached wheel-0.32.3-py2.py3-none-any.whl (21 kB) Collecting Cython Using cached Cython-0.29.32-py2.py3-none-any.whl (986 kB) Collecting cymem<2.1.0,>=2.0.2 Using cached cymem-2.0.6-cp310-cp310-win_amd64.whl (36 kB) Collecting preshed<2.1.0,>=2.0.1 Using cached preshed-2.0.1.tar.gz (113 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'error' error: subprocess-exited-with-error python setup.py egg_info did not run successfully. exit code: 1 [6 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "C:\Users\oguls\AppData\Local\Temp\pip-install-qce7tdof\preshed_546a51fe26c74852ab50db073ad57f1f\setup.py", line 9, in <module> from distutils import ccompiler, msvccompiler ImportError: cannot import name 'msvccompiler' from 'distutils' (C:\Users\oguls\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\__init__.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I don't specifically know which packages cause this error, a lot of them install without any problems. I have tried updating pip, changing environment variables and other possible solutions I've found on the internet, but nothing seems to work. Edit: The package I am trying to install supports my Python version. A: The real error in your case is: ImportError: cannot import name 'msvccompiler' from 'distutils' It occurred because setuptools has broken distutils in version 65.0.0 (and has already fixed it in version 65.0.2). According to your log, the error occurred in your global setuptools installation (see the path in error message), so you need to update it with the following command: pip install -U setuptools Those packages, however, may still not get installed or not work properly as the module causing this error doesn't support compiler versions needed for currently supported versions of Python. A: Same thing happened with me, it was basically pip's version problem. Try upgrading pip to latest version --22.3.1 and downgrade the python version from latest version --3.10.00 to 3.9.13... pip --version check for pip's version pip install notebook --upgrade -command to update pip to latest version This worked for me
`pip install` Gives Error on Some Packages
Some packages give errors when I try to install them using pip install. This is the error when I try to install chatterbot, but some other packages give this error as well: pip install chatterbot Collecting chatterbot Using cached ChatterBot-1.0.5-py2.py3-none-any.whl (67 kB) Collecting pint>=0.8.1 Downloading Pint-0.19.2.tar.gz (292 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 292.0/292.0 kB 1.6 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting pyyaml<5.2,>=5.1 Using cached PyYAML-5.1.2.tar.gz (265 kB) Preparing metadata (setup.py) ... done Collecting spacy<2.2,>=2.1 Using cached spacy-2.1.9.tar.gz (30.7 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [35 lines of output] Collecting setuptools Using cached setuptools-65.0.1-py3-none-any.whl (1.2 MB) Collecting wheel<0.33.0,>0.32.0 Using cached wheel-0.32.3-py2.py3-none-any.whl (21 kB) Collecting Cython Using cached Cython-0.29.32-py2.py3-none-any.whl (986 kB) Collecting cymem<2.1.0,>=2.0.2 Using cached cymem-2.0.6-cp310-cp310-win_amd64.whl (36 kB) Collecting preshed<2.1.0,>=2.0.1 Using cached preshed-2.0.1.tar.gz (113 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'error' error: subprocess-exited-with-error python setup.py egg_info did not run successfully. exit code: 1 [6 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "C:\Users\oguls\AppData\Local\Temp\pip-install-qce7tdof\preshed_546a51fe26c74852ab50db073ad57f1f\setup.py", line 9, in <module> from distutils import ccompiler, msvccompiler ImportError: cannot import name 'msvccompiler' from 'distutils' (C:\Users\oguls\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\__init__.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I don't specifically know which packages cause this error, a lot of them install without any problems. I have tried updating pip, changing environment variables and other possible solutions I've found on the internet, but nothing seems to work. Edit: The package I am trying to install supports my Python version.
[ "The real error in your case is:\nImportError: cannot import name 'msvccompiler' from 'distutils'\n\nIt occured because setuptools has broken distutils in version 65.0.0 (and has already fixed it in version 65.0.2). According to your log, the error occured in your global setuptools installation (see the path in error message), so you need to update it with the following command:\npip install -U setuptools\n\nThose packages, however, may still not get installed or not work properly as the module causing this error doesn't support compiler versions needed for currently supported versions of Python.\n", "Same thing happened with me, it was basically pip's version problem.\nTry upgrading pip to latest version --22.3.1 and downgrade the python version from latest version --3.10.00 to 3.9.13...\npip --version check for pip's version\npip install notebook --upgrade -command to update pip to latest version\nThis worked for me\n" ]
[ 1, 0 ]
[]
[]
[ "dependencies", "pip", "python", "setup.py", "setuptools" ]
stackoverflow_0073378545_dependencies_pip_python_setup.py_setuptools.txt
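A small diagnostic sketch for the root cause named in the first answer (the distutils shim that setuptools 65.0.0/65.0.1 shipped broken); the version numbers come from the answer, the check itself is a generic assumption about how to reproduce the import:

from importlib.metadata import version

print("setuptools", version("setuptools"))
try:
    # part of the same import path that the failing preshed setup.py uses
    from distutils import ccompiler  # noqa: F401
    print("distutils shim imports cleanly")
except ImportError as exc:
    print("broken distutils shim:", exc)
    print("fix: pip install -U setuptools")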
Q: Finding element by the second class in Selenium Using inspect element, I have one element with two states: <span class="c-form-control-feedback c-form-control-feedback-error" title="" data-original-title="that username is already taken"></span> <span class="c-form-control-feedback c-form-control-feedback-error" title=""></span> Sometimes the element has the first form, sometimes the last form. I need to find the element when it's in the first state, so I need a way to driver.find_element by the data-original-title attribute. Is that possible? A: You can retrieve them by checking whether it contains the data-original-title attribute. Selenium: driver.find_elements(by=By.XPATH, value="//*[contains(@data-original-title, '')]") Beautifulsoup: soup.find_all("span", attrs={"data-original-title": True}) Output: [<span class="c-form-control-feedback c-form-control-feedback-error" data-original-title="that username is already taken" title=""></span>]
Finding element by the second class in Selenium
Using inspect element, I have one element with two states: <span class="c-form-control-feedback c-form-control-feedback-error" title="" data-original-title="that username is already taken"></span> <span class="c-form-control-feedback c-form-control-feedback-error" title=""></span> Sometimes the element has the first form, sometimes the last form. I need to find the element when it's in the first state, so I need a way to driver.find_element by the data-original-title attribute. Is that possible?
[ "You can retrieve them by checking whether contains the data-original-title attribute.\nSelenium:\ndriver.find_elements(by=By.XPATH, value=\"//*[contains(@data-original-title, '')]\")\n\nBeautifulsoup:\nsoup.find_all(\"span\", attrs={\"data-original-title\": True})\n\n\nOutput:\n[<span class=\"c-form-control-feedback c-form-control-feedback-error\" data-original-title=\"that username is already taken\" title=\"\"></span>]\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074667563_python_selenium.txt
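A fuller sketch of the XPath idea from the answer above, with an explicit wait and a string-length guard so the locator only matches once the error state (a non-empty data-original-title) appears; the URL and the 10-second timeout are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/signup")  # placeholder URL

locator = (By.XPATH,
           "//span[contains(@class, 'c-form-control-feedback-error') "
           "and string-length(@data-original-title) > 0]")
feedback = WebDriverWait(driver, 10).until(EC.presence_of_element_located(locator))
print(feedback.get_attribute("data-original-title"))  # "that username is already taken"
driver.quit()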
Q: My dataframe is not changed when I run for loop (comparing two dataframes) I have two datasets, one with over 100,000 rows and 300 columns and the other with 200 rows and 6 columns. I'm comparing these two datasets and updating df1 from df2 using a for loop. Here is the sample dataset df1: KEY MAIN_METHOD DRUG_ETCDTL 0 100944 1 unknown 1 67488 20 unknown 2 101476 20 unknown 3 102549 1 sleepingpill_plunitrazeparm 4 103227 1 some drug df2: 5. 방법/수단 Unnamed: 4 0 100944 sleepingpill_unknown 1 100984 others_green material 2 101476 others_anorexia 3 102549 sleepingpill_plunitrazeparm 4 103227 sleepingpill_pentobarbytal and here is the code that I tried: for i in range(0,4): index_key = df2['5. 방법/수단'][i] index_rawdata = df1.loc[df1['KEY']==index_key,'DRUG_ETCDTL'].index[0] method1 = df1['DRUG_ETCDTL'][index_rawdata] method2 = df1['METHOD_ETCDTL'][index_rawdata] # split df2 mainmethod = df2['Unnamed: 4'].str.split('_',expland=False) mainmethod[i][0] = mainmethod[i][0].replace('sleepingpill','1').replace('others','20') # change the type so we can compare it with df1 mainmethod[i][0] = int(mainmethod[i][0]) if (mainmethod[i][1] == 1) & (df1['MAIN_METHOD'][index_rawdata] ==1 ): method1 = mainmethod[i][1] elif (mainmethod[i][1] == 20) & df1['MAIN_METHOD'][index_rawdata] == 20): method2 = mainmethodp[i][1] so df1 should be changed, but when I print df1 it is not changed. The desired output is: KEY MAIN_METHOD DRUG_ETCDTL 0 100944 1 unknown 1 67488 20 unknown 2 101476 20 anorexia 3 102549 1 plunitrazeparm 4 103227 1 pentobarbytal NOTE: I approached this for loop method since I didn't want to manipulate df2 A: This solution avoids for loops and instead uses a temporary data frame to perform the task. The strings in the Unnamed: 4 column are split using the str.split() function provided by Pandas. The MAIN_METHOD information is transformed using a mapping. The df1 data frame is conditionally updated using numpy.where() before the temporay data frame is deleted. EDIT: The code has been modified to convert the temporary data frame column series to a numpy array using .values to avoid the error: ValueError: Can only compare identically-labeled Series objects Modified np.where() conditions: df1['DRUG_ETCDTL'] = np.where(((df1['KEY']==tmp_df['KEY'].values) & (df1['MAIN_METHOD']==tmp_df['MAIN_METHOD'].values)), tmp_df['DRUG_ETCDTL'], df1['DRUG_ETCDTL']) An alternative solution to avoiding the error would be to use .equals() instead of == when performing the comparison. df1['DRUG_ETCDTL'] = np.where(((df1['KEY'].equals(tmp_df['KEY'])) & (df1['MAIN_METHOD'].equals(tmp_df['MAIN_METHOD']))), tmp_df['DRUG_ETCDTL'], df1['DRUG_ETCDTL']) Complete solution: import pandas as pd import numpy as np df1 = pd.DataFrame({ 'KEY': [100944, 67488, 101476, 102549, 103227], 'MAIN_METHOD': [1, 20, 20, 1, 1], 'DRUG_ETCDTL': ['unknown', 'unknown', 'unknown', 'sleepingpill_plunitrazeparm', 'some drug'] }, index=np.arange(11,16)) df2 = pd.DataFrame({ '5. 방법/수단': [100944, 100984, 101476, 102549, 103227], 'Unnamed: 4': ['sleepingpill_unknown', 'others_green material', 'others_anorexia', 'sleepingpill_plunitrazeparm', 'sleepingpill_pentobarbytal'] }) # make a temporary copy of 'df2' tmp_df = df2[['5. 방법/수단', 'Unnamed: 4']].copy() # rename columns tmp_df.columns = ['KEY', 'METHOD_DRUG'] # split the string to get 'METHOD' and 'DRUG_ETCDTL' information tmp_df[['METHOD', 'DRUG_ETCDTL']] = tmp_df['METHOD_DRUG'].str.split('_', expand=True) # use a mapping to create 'MAIN_METHOD' column method_map = { 'sleepingpill': 1, 'others': 20 } tmp_df['MAIN_METHOD'] = tmp_df['METHOD'].map(method_map) # drop unwanted columns (This step is optional) tmp_df.drop(['METHOD_DRUG', 'METHOD'], inplace=True, axis=1) # update 'df1' df1['DRUG_ETCDTL'] = np.where(((df1['KEY']==tmp_df['KEY'].values) & (df1['MAIN_METHOD']==tmp_df['MAIN_METHOD'].values)), tmp_df['DRUG_ETCDTL'], df1['DRUG_ETCDTL']) # delete temporary copy of 'df2' del tmp_df
My dataframe is not changed when I run for loop (comparing two dataframes)
I have two datasets, one with over 100,000 rows and 300 columns and the other with 200 rows and 6 columns. I'm comparing these two datasets and updating df1 from df2 using a for loop. Here is the sample dataset df1: KEY MAIN_METHOD DRUG_ETCDTL 0 100944 1 unknown 1 67488 20 unknown 2 101476 20 unknown 3 102549 1 sleepingpill_plunitrazeparm 4 103227 1 some drug df2: 5. 방법/수단 Unnamed: 4 0 100944 sleepingpill_unknown 1 100984 others_green material 2 101476 others_anorexia 3 102549 sleepingpill_plunitrazeparm 4 103227 sleepingpill_pentobarbytal and here is the code that I tried: for i in range(0,4): index_key = df2['5. 방법/수단'][i] index_rawdata = df1.loc[df1['KEY']==index_key,'DRUG_ETCDTL'].index[0] method1 = df1['DRUG_ETCDTL'][index_rawdata] method2 = df1['METHOD_ETCDTL'][index_rawdata] # split df2 mainmethod = df2['Unnamed: 4'].str.split('_',expland=False) mainmethod[i][0] = mainmethod[i][0].replace('sleepingpill','1').replace('others','20') # change the type so we can compare it with df1 mainmethod[i][0] = int(mainmethod[i][0]) if (mainmethod[i][1] == 1) & (df1['MAIN_METHOD'][index_rawdata] ==1 ): method1 = mainmethod[i][1] elif (mainmethod[i][1] == 20) & df1['MAIN_METHOD'][index_rawdata] == 20): method2 = mainmethodp[i][1] so df1 should be changed, but when I print df1 it is not changed. The desired output is: KEY MAIN_METHOD DRUG_ETCDTL 0 100944 1 unknown 1 67488 20 unknown 2 101476 20 anorexia 3 102549 1 plunitrazeparm 4 103227 1 pentobarbytal NOTE: I approached this for loop method since I didn't want to manipulate df2
[ "This solution avoids for loops and instead uses a temporary data frame to perform the task. The strings in the Unnamed: 4 column are split using the str.split() function provided by Pandas. The MAIN_METHOD information is transformed using a mapping. The df1 data frame is conditionally updated using numpy.where() before the temporay data frame is deleted.\nEDIT: The code has been modified to convert the temporary data frame column series to a numpy array using .values to avoid the error:\nValueError: Can only compare identically-labeled Series objects\n\nModified np.where() conditions:\ndf1['DRUG_ETCDTL'] = np.where(((df1['KEY']==tmp_df['KEY'].values) & \n (df1['MAIN_METHOD']==tmp_df['MAIN_METHOD'].values)),\n tmp_df['DRUG_ETCDTL'],\n df1['DRUG_ETCDTL'])\n\nAn alternative solution to avoiding the error would be to use .equals() instead of == when performing the comparison.\ndf1['DRUG_ETCDTL'] = np.where(((df1['KEY'].equals(tmp_df['KEY'])) & \n (df1['MAIN_METHOD'].equals(tmp_df['MAIN_METHOD']))),\n tmp_df['DRUG_ETCDTL'],\n df1['DRUG_ETCDTL'])\n\nComplete solution:\nimport pandas as pd\nimport numpy as np\n\ndf1 = pd.DataFrame({ \n 'KEY': [100944, 67488, 101476, 102549, 103227],\n 'MAIN_METHOD': [1, 20, 20, 1, 1],\n 'DRUG_ETCDTL': ['unknown', 'unknown', 'unknown', 'sleepingpill_plunitrazeparm', 'some drug']\n}, index=np.arange(11,16))\n\ndf2 = pd.DataFrame({\n '5. 방법/수단': [100944, 100984, 101476, 102549, 103227],\n 'Unnamed: 4': ['sleepingpill_unknown', 'others_green material', 'others_anorexia', 'sleepingpill_plunitrazeparm', 'sleepingpill_pentobarbytal']\n})\n\n# make a temporary copy of 'df2'\ntmp_df = df2[['5. 방법/수단', 'Unnamed: 4']].copy()\n# rename columns\ntmp_df.columns = ['KEY', 'METHOD_DRUG']\n# split the string to get 'METHOD' and 'DRUG_ETCDTL' information\ntmp_df[['METHOD', 'DRUG_ETCDTL']] = tmp_df['METHOD_DRUG'].str.split('_', expand=True)\n# use a mapping to create 'MAIN_METHOD' column\nmethod_map = { 'sleepingpill': 1, 'others': 20 }\ntmp_df['MAIN_METHOD'] = tmp_df['METHOD'].map(method_map)\n# drop unwanted columns (This step is optional)\ntmp_df.drop(['METHOD_DRUG', 'METHOD'], inplace=True, axis=1)\n# update 'df1'\ndf1['DRUG_ETCDTL'] = np.where(((df1['KEY']==tmp_df['KEY'].values) & \n (df1['MAIN_METHOD']==tmp_df['MAIN_METHOD'].values)),\n tmp_df['DRUG_ETCDTL'],\n df1['DRUG_ETCDTL'])\n# delete temporary copy of 'df2'\ndel tmp_df\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074666370_dataframe_pandas_python.txt
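The np.where() answer above relies on df1 and tmp_df lining up row by row, which fails as soon as the frames differ in length or order (in the sample, KEY 67488 exists only in df1). The following sketch is a key-based merge as one more robust variant; the column names come from the question, everything else is an illustrative assumption:

import pandas as pd

df1 = pd.DataFrame({"KEY": [100944, 67488, 101476, 102549, 103227],
                    "MAIN_METHOD": [1, 20, 20, 1, 1],
                    "DRUG_ETCDTL": ["unknown", "unknown", "unknown",
                                    "sleepingpill_plunitrazeparm", "some drug"]})
df2 = pd.DataFrame({"5. 방법/수단": [100944, 100984, 101476, 102549, 103227],
                    "Unnamed: 4": ["sleepingpill_unknown", "others_green material",
                                   "others_anorexia", "sleepingpill_plunitrazeparm",
                                   "sleepingpill_pentobarbytal"]})

upd = df2.rename(columns={"5. 방법/수단": "KEY"})
upd[["METHOD", "DRUG_NEW"]] = upd["Unnamed: 4"].str.split("_", expand=True)
upd["MAIN_METHOD"] = upd["METHOD"].map({"sleepingpill": 1, "others": 20})

out = df1.merge(upd[["KEY", "MAIN_METHOD", "DRUG_NEW"]],
                on=["KEY", "MAIN_METHOD"], how="left")
out["DRUG_ETCDTL"] = out["DRUG_NEW"].fillna(out["DRUG_ETCDTL"])  # keep old value if no match
print(out.drop(columns="DRUG_NEW"))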
Q: Free alternative to HEROKU? I have a simple script that fetches the tweets of a specific account on Twitter and publishes them on Facebook using Python, the facebook-sdk and BeautifulSoup libraries. I used to host the script on Heroku, but now everything on Heroku costs money. I want a free host where I can run my script, thank you. I tried PythonAnywhere, but in the free version I can't use the Requests library. A: Have you ever considered using "GitHub Actions" or "Google Firebase Cloud Functions"? Those sound like good candidates for your use case.
Free alternative to HEROKU?
I have a simple script that fetches the tweets of a specific account on Twitter and publishes them on Facebook using Python, the facebook-sdk and BeautifulSoup libraries. I used to host the script on Heroku, but now everything on Heroku costs money. I want a free host where I can run my script, thank you. I tried PythonAnywhere, but in the free version I can't use the Requests library.
[ "Have you ever considered using \"GitHub-Actions\", or \"Google Firebase-Cloud Functions\" ? Those sound like good candidates for your demand.\n" ]
[ 0 ]
[]
[]
[ "heroku", "python", "pythonanywhere" ]
stackoverflow_0074667292_heroku_python_pythonanywhere.txt
Q: How to save file with a number such as 2 so it isn't the same as first file saved import qrcode import time import tkinter as tk import os import shutil from sys import exit # GUI with tkinter root = tk.Tk() root.title('Window') root.geometry("400x400+50+50") root.iconbitmap('QRCODE-GENERATOR.ico') root.configure(bg="grey") lbl_1 = tk.Label(root, text="Qrcode generator", font="1") entry_1 = tk.Entry(root) lbl_1.pack() entry_1.pack(side=tk.RIGHT) tk.mainloop() # GUI end if not entry_1: exit() data = entry_1 # Qr code setup qr = qrcode.QRCode( version=1, box_size=5, border=5 ) # Adding the data to the system qr.add_data(data) # qr customizing qr.make(fit=True) img = qr.make_image( fill_color= 'black', back_color= 'white' ) time.sleep(2) # saving qr img.save('output.png') # absolute path src_path = r"D:\Python\QRcode generator\output.png" dst_path = r"D:\Users" shutil.move(src_path, dst_path) I'm getting the error that the file already exists, so what I want is to add a number to the QR code filename every time someone saves it, so it doesn't throw the error; Python and shutil just get confused when saving a file with the same name twice. If you don't really get what I'm saying then just tell me to make some edits, I'll make it simpler. Note: I might not be able to respond when you answer A: Based on what I understand, you want to add something at the end of the file name to prevent throwing an error. import time FILE_NAME = f"output-{time.time()}.png" img.save(FILE_NAME) A: Try this: Put all your code into a while true loop. Declare a variable "num" and assign it to the integer 0. MAKE SURE THIS IS OUTSIDE THE WHILE TRUE LOOP! Change your code so that this part: src_path = r"D:\Python\QRcode generator\output.png" dst_path = r"D:\Users" looks like this: src_path = r"D:\Python\QRcode generator\output" + str(num) + ".png" dst_path = r"D:\Users" num += 1 (note) If you close the program it will reset
How to save file with a number such as 2 so it isn't the same as first file saved
import qrcode import time import tkinter as tk import os import shutil from sys import exit # GUI with tkinter root = tk.Tk() root.title('Window') root.geometry("400x400+50+50") root.iconbitmap('QRCODE-GENERATOR.ico') root.configure(bg="grey") lbl_1 = tk.Label(root, text="Qrcode generator", font="1") entry_1 = tk.Entry(root) lbl_1.pack() entry_1.pack(side=tk.RIGHT) tk.mainloop() # GUI end if not entry_1: exit() data = entry_1 # Qr code setup qr = qrcode.QRCode( version=1, box_size=5, border=5 ) # Adding the data to the system qr.add_data(data) # qr customizing qr.make(fit=True) img = qr.make_image( fill_color= 'black', back_color= 'white' ) time.sleep(2) # saving qr img.save('output.png') # absolute path src_path = r"D:\Python\QRcode generator\output.png" dst_path = r"D:\Users" shutil.move(src_path, dst_path) I'm getting the error that the file already exists, so what I want is to add a number to the QR code filename every time someone saves it, so it doesn't throw the error; Python and shutil just get confused when saving a file with the same name twice. If you don't really get what I'm saying then just tell me to make some edits, I'll make it simpler. Note: I might not be able to respond when you answer
[ "Based on what i understand you want to add something at the end of the name file to prevent throwing an error.\nimport time\nFILE_NAME = f\"output-{time.time()}.png\"\nimg.save(FILE_NAME)\n\n", "Try this:\n\nPut all your code into a while true loop.\n\nDeclare a variable \"num\" and assign it to the integer 0. MAKE SURE THIS IS OUTSIDE THE WHILE TRUE LOOP!\n\nChange your code so that this part:\nsrc_path = r\"D:\\Python\\QRcode generator\\output.png\"\ndst_path = r\"D:\\Users\"\nlooks like this:\nsrc_path = r\"D:\\Python\\QRcode generator\\output\" + str(num) + \".png\"\ndst_path = r\"D:\\Users\"\n\n\nnum += 1\n(note) If you close the program it will reset\n" ]
[ 0, 0 ]
[]
[]
[ "file", "python", "python_3.x", "tkinter" ]
stackoverflow_0074667518_file_python_python_3.x_tkinter.txt
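Both answers above build the number into the file name; a sketch that instead scans the destination folder for the first free name, so the numbering also survives program restarts (the directory layout below is the one assumed in the question):

from pathlib import Path

def next_free_path(directory, stem="output", suffix=".png"):
    """Return output.png, output1.png, output2.png, ... choosing the first
    name that does not already exist in `directory`."""
    directory = Path(directory)
    candidate = directory / f"{stem}{suffix}"
    n = 1
    while candidate.exists():
        candidate = directory / f"{stem}{n}{suffix}"
        n += 1
    return candidate

dst = next_free_path(r"D:\Users")
# img.save(dst)  # saving directly to the destination also removes the shutil.move step
print(dst)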
Q: How to set a column for checkbox value in MySQL in Python? I'm a beginner in Python and MySQL. Now I want to build a small project: users will enter a name and email on the website, and there will be a checkbox to confirm that they have read the policy. If the box is checked, the database should save "true"; if not, save "false". Could someone help me with the code? Thanks. users = Table('allUsers', meta, Column('id', Integer, primary_key = True), Column('name', String(225)), Column('email', String(225)), Column('ifReadPolicy', String(225)), ) A: The following code can be used to set a column for checkbox value in MySQL in Python: from mysql.connector import connect db_connection = connect( host='localhost', user='username', password='password', database='my_database' ) # Create a cursor object cursor = db_connection.cursor() # Create a table cursor.execute("CREATE TABLE my_table (id INT, checkbox_value BOOLEAN)") # Insert data into the table sql = "INSERT INTO my_table (id, checkbox_value) VALUES (%s, %s)" values = (1, 0) cursor.execute(sql, values) # Commit the changes to the database db_connection.commit()
How to set a column for checkbox value in MySQL in Python?
I'm a beginner in Python and MySQL. Now I want to build a small project: users will enter a name and email on the website, and there will be a checkbox to confirm that they have read the policy. If the box is checked, the database should save "true"; if not, save "false". Could someone help me with the code? Thanks. users = Table('allUsers', meta, Column('id', Integer, primary_key = True), Column('name', String(225)), Column('email', String(225)), Column('ifReadPolicy', String(225)), )
[ "The following code can be used to set a column for checkbox value in MySQL in Python:\nfrom mysql.connector import connect\n\ndb_connection = connect(\n host='localhost',\n user='username',\n password='password',\n database='my_database'\n)\n\n# Create a cursor object\ncursor = db_connection.cursor()\n\n# Create a table\ncursor.execute(\"CREATE TABLE my_table (id INT, checkbox_value BOOLEAN)\")\n\n# Insert data into the table\nsql = \"INSERT INTO my_table (id, checkbox_value) VALUES (%s, %s)\"\nvalues = (1, 0)\ncursor.execute(sql, values)\n\n# Commit the changes to the database\ndb_connection.commit()\n\n" ]
[ 0 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0074665816_mysql_python.txt
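The question's Table/Column code looks like SQLAlchemy, whose Boolean type maps to MySQL's TINYINT(1) and avoids storing "true"/"false" strings. A self-contained sketch of that route; the SQLite URL is used only so the snippet runs anywhere (swap in something like "mysql+pymysql://user:pw@host/db" for MySQL), and the inserted values are illustrative:

from sqlalchemy import (Boolean, Column, Integer, MetaData, String, Table,
                        create_engine, insert)

engine = create_engine("sqlite://")  # in-memory stand-in for the MySQL URL
meta = MetaData()
users = Table("allUsers", meta,
              Column("id", Integer, primary_key=True),
              Column("name", String(225)),
              Column("email", String(225)),
              Column("ifReadPolicy", Boolean))  # stores the checkbox state
meta.create_all(engine)

checkbox_checked = True  # e.g. the value read from the form's checkbox
with engine.begin() as conn:
    conn.execute(insert(users).values(name="Ada", email="ada@example.com",
                                      ifReadPolicy=checkbox_checked))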
Q: TCLab issues with python 3.10 (and python 3.9) OS: macOS 11.7.1 (Big Sur) A few months ago I purchased a TCLab kit and at the time did some very rudimentary tests where the device worked as expected. Recently I decided that I wanted to work on some of the APMonitor lessons and connected the TCLab to my computer expecting that it would work as it had done in the past. Sadly, that is not the case. I would like help in correcting the issues identified and getting the TCLab to work again. Originally, I had been using python 3.9. Since then python 3.10 came out and I installed it. Using the following script from APMonitor as my test, $ cat show_T1.py import tclab with tclab.TCLab() as lab: print(lab.T1) I got the errors documented below: $ python --version Python 3.10.8 $ python show_T1.py Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 1, in <module> import tclab File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/__init__.py", line 2, in <module> from .historian import Historian, Plotter File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/historian.py", line 6, in <module> from collections import Iterable ImportError: cannot import name 'Iterable' from 'collections' (/usr/local/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) I was able to find the cause of this problem here: Stack Overflow It is an issue where Iterable was moved to collections.abc from collections. When I change the script to: $ cat show_T1.py import collections.abc collections.Iterable = collections.abc.Iterable collections.Mapping = collections.abc.Mapping collections.MutableSet = collections.abc.MutableSet collections.MutableMapping = collections.abc.MutableMapping import tclab with tclab.TCLab() as lab: print(lab.T1) the import error goes away. However, I now get new errors: $ python show_T1.py TCLab version 0.4.9 Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 64, in __init__ self.connect(baud=115200) File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 114, in connect self.sp = serial.Serial(port=self.port, baudrate=baud, timeout=2) AttributeError: module 'serial' has no attribute 'Serial' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 70, in __init__ self.sp.close() AttributeError: 'TCLab' object has no attribute 'sp' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 7, in <module> with tclab.TCLab() as lab: File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 77, in __init__ raise RuntimeError('Failed to Connect.') RuntimeError: Failed to Connect. Sadly, I get almost the same error as above if I revert back to python 3.9: (python 3.9 does not have the Iterable problem, so I reverted back to the original script): $ python --version Python 3.9.15 $ python show_T1.py TCLab version 0.4.9 Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 64, in __init__ self.connect(baud=115200) File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 114, in connect self.sp = serial.Serial(port=self.port, baudrate=baud, timeout=2) AttributeError: module 'serial' has no attribute 'Serial' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 70, in __init__ self.sp.close() AttributeError: 'TCLab' object has no attribute 'sp' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 2, in <module> with tclab.TCLab() as lab: File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 77, in __init__ raise RuntimeError('Failed to Connect.') RuntimeError: Failed to Connect. I know that at least I have connectivity to the device, because when I unplug the USB cable I get an error message that says, correctly, that no arduino is connected: $ python show_T1.py TCLab version 0.4.9 --- Serial Ports --- /dev/cu.Bluetooth-Incoming-Port n/a n/a Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 2, in <module> with tclab.TCLab() as lab: File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 61, in __init__ raise RuntimeError('No Arduino device found.') RuntimeError: No Arduino device found. Below are the python modules I have installed under python 3.10 and 3.9: python 3.10: $ pip list Package Version ------------------ --------- blessed 1.19.1 bpython 0.23 certifi 2022.9.24 charset-normalizer 2.1.1 contourpy 1.0.6 curtsies 0.4.1 cwcwidth 0.1.8 cycler 0.11.0 fonttools 4.38.0 future 0.18.2 greenlet 2.0.1 idna 3.4 iso8601 1.1.0 kiwisolver 1.4.4 matplotlib 3.6.2 numpy 1.23.5 packaging 21.3 Pillow 9.3.0 pip 22.3.1 Pygments 2.13.0 pyparsing 3.0.9 pyserial 3.5 python-dateutil 2.8.2 pyxdg 0.28 PyYAML 6.0 requests 2.28.1 scipy 1.9.3 serial 0.0.97 setuptools 65.4.1 six 1.16.0 tclab 0.4.9 urllib3 1.26.13 wcwidth 0.2.5 python 3.9: $ pip list Package Version ------------------ --------- blessed 1.19.1 bpython 0.23 certifi 2022.9.24 charset-normalizer 2.1.1 contourpy 1.0.6 curtsies 0.4.1 cwcwidth 0.1.8 cycler 0.11.0 docopt 0.6.2 fonttools 4.38.0 future 0.18.2 greenlet 2.0.1 idna 3.4 iso8601 1.1.0 kiwisolver 1.4.4 matplotlib 3.6.2 numpy 1.23.5 packaging 21.3 Pillow 9.3.0 pip 22.3.1 pipreqs 0.4.11 Pygments 2.13.0 pyparsing 3.0.9 pyserial 3.5 python-dateutil 2.8.2 pyxdg 0.28 PyYAML 6.0 requests 2.28.1 scipy 1.9.3 serial 0.0.97 setuptools 65.4.1 six 1.16.0 tclab 0.4.9 urllib3 1.26.13 wcwidth 0.2.5 yarg 0.1.9 NOTE: I have sent this issue to [email protected] A: Serial Connection Issue This error AttributeError: module 'serial' has no attribute 'Serial' suggests that the package serial or a local file name serial.py has a conflict with pyserial. Rename your file to something else besides serial.py and/or uninstall the serial package (not needed for TCLab). Your pyserial package is the latest version. pip uninstall serial The error occurs when there is a local file named serial.py and we import from the pyserial module. Additional common TCLab help issues are posted to the TCLab setup and troubleshooting page. Serial Port Permission If the serial uninstall doesn't fix the problem of allowing a serial connection, one other thing to check is the USB port permission. On Linux, discover the USB port name with ls /dev/tty* Set the permission for that USB connection with the correct name. sudo chmod a+rw /dev/ttyACM0 Python 3.10 Compatibility You correctly found the issue with installing the latest version of TCLab for Python 3.10 compatibility. The module developer is still working on the next version of the TCLab package. Until that point, you can either edit the historian.py file (path is in the error message) with a text editor and change from collections import Iterable to from collections.abc import Iterable or install the new package from GitHub: pip install --upgrade https://github.com/jckantor/TCLab/archive/master.zip This will be resolved with the next release of TCLab on PyPI.org. The current version is 0.4.9 that does not include Python 3.10 compatibility because of the Iterable package change.
TCLab issues with python 3.10 (and python 3.9)
OS: macOS 11.7.1 (Big Sur) A few months ago I purchased a TCLab kit and at the time did some very rudimentary tests where the device worked as expected. Recently I decided that I wanted to work on some of the APMonitor lessons and connected the TCLab to my computer expecting that it would work as it had done in the past. Sadly, that is not the case. I would like help in correcting the issues identified and getting the TCLab to work again. Originally, I had been using python 3.9. Since then python 3.10 came out and I installed it. Using the following script from APMonitor as my test, $ cat show_T1.py import tclab with tclab.TCLab() as lab: print(lab.T1) I got the errors documented below: $ python --version Python 3.10.8 $ python show_T1.py Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 1, in <module> import tclab File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/__init__.py", line 2, in <module> from .historian import Historian, Plotter File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/historian.py", line 6, in <module> from collections import Iterable ImportError: cannot import name 'Iterable' from 'collections' (/usr/local/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) I was able to find the cause of this problem here: Stack Overflow It is an issue where Iterable was moved to collections.abc from collections. When I change the script to: $ cat show_T1.py import collections.abc collections.Iterable = collections.abc.Iterable collections.Mapping = collections.abc.Mapping collections.MutableSet = collections.abc.MutableSet collections.MutableMapping = collections.abc.MutableMapping import tclab with tclab.TCLab() as lab: print(lab.T1) the import error goes away. However, I now get new errors: $ python show_T1.py TCLab version 0.4.9 Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 64, in __init__ self.connect(baud=115200) File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 114, in connect self.sp = serial.Serial(port=self.port, baudrate=baud, timeout=2) AttributeError: module 'serial' has no attribute 'Serial' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 70, in __init__ self.sp.close() AttributeError: 'TCLab' object has no attribute 'sp' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 7, in <module> with tclab.TCLab() as lab: File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv3.10/lib/python3.10/site-packages/tclab/tclab.py", line 77, in __init__ raise RuntimeError('Failed to Connect.') RuntimeError: Failed to Connect. Sadly, I get almost the same error as above if I revert back to python 3.9: (python 3.9 does not have the Iterable problem, so I reverted back to the original script): $ python --version Python 3.9.15 $ python show_T1.py TCLab version 0.4.9 Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 64, in __init__ self.connect(baud=115200) File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 114, in connect self.sp = serial.Serial(port=self.port, baudrate=baud, timeout=2) AttributeError: module 'serial' has no attribute 'Serial' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 70, in __init__ self.sp.close() AttributeError: 'TCLab' object has no attribute 'sp' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 2, in <module> with tclab.TCLab() as lab: File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 77, in __init__ raise RuntimeError('Failed to Connect.') RuntimeError: Failed to Connect. I know that at least I have connectivity to the device, because when I unplug the USB cable I get an error message that says, correctly, that no arduino is connected: $ python show_T1.py TCLab version 0.4.9 --- Serial Ports --- /dev/cu.Bluetooth-Incoming-Port n/a n/a Traceback (most recent call last): File "/Users/USER/TClab/arduino/0_Test_Device/Python/show_T1.py", line 2, in <module> with tclab.TCLab() as lab: File "/Users/USER/TClab/arduino/0_Test_Device/Python/venv/lib/python3.9/site-packages/tclab/tclab.py", line 61, in __init__ raise RuntimeError('No Arduino device found.') RuntimeError: No Arduino device found. Below are the python modules I have installed under python 3.10 and 3.9: python 3.10: $ pip list Package Version ------------------ --------- blessed 1.19.1 bpython 0.23 certifi 2022.9.24 charset-normalizer 2.1.1 contourpy 1.0.6 curtsies 0.4.1 cwcwidth 0.1.8 cycler 0.11.0 fonttools 4.38.0 future 0.18.2 greenlet 2.0.1 idna 3.4 iso8601 1.1.0 kiwisolver 1.4.4 matplotlib 3.6.2 numpy 1.23.5 packaging 21.3 Pillow 9.3.0 pip 22.3.1 Pygments 2.13.0 pyparsing 3.0.9 pyserial 3.5 python-dateutil 2.8.2 pyxdg 0.28 PyYAML 6.0 requests 2.28.1 scipy 1.9.3 serial 0.0.97 setuptools 65.4.1 six 1.16.0 tclab 0.4.9 urllib3 1.26.13 wcwidth 0.2.5 python 3.9: $ pip list Package Version ------------------ --------- blessed 1.19.1 bpython 0.23 certifi 2022.9.24 charset-normalizer 2.1.1 contourpy 1.0.6 curtsies 0.4.1 cwcwidth 0.1.8 cycler 0.11.0 docopt 0.6.2 fonttools 4.38.0 future 0.18.2 greenlet 2.0.1 idna 3.4 iso8601 1.1.0 kiwisolver 1.4.4 matplotlib 3.6.2 numpy 1.23.5 packaging 21.3 Pillow 9.3.0 pip 22.3.1 pipreqs 0.4.11 Pygments 2.13.0 pyparsing 3.0.9 pyserial 3.5 python-dateutil 2.8.2 pyxdg 0.28 PyYAML 6.0 requests 2.28.1 scipy 1.9.3 serial 0.0.97 setuptools 65.4.1 six 1.16.0 tclab 0.4.9 urllib3 1.26.13 wcwidth 0.2.5 yarg 0.1.9 NOTE: I have sent this issue to [email protected]
[ "Serial Connection Issue\nThis error AttributeError: module 'serial' has no attribute 'Serial' suggests that the package serial or a local file name serial.py has a conflict with pyserial. Rename your file to something else besides serial.py and/or uninstall the serial package (not needed for TCLab). Your pyserial package is the latest version.\npip uninstall serial\n\nThe error occurs when there is a local file named serial.py and we import from the pyserial module. Additional common TCLab help issues are posted to the TCLab setup and troubleshooting page.\n\nSerial Port Permission\nIf the serial uninstall doesn't fix the problem of allowing a serial connection, one other thing to check is the USB port permission. On Linux, discover the USB port name with ls /dev/tty* Set the permission for that USB connection with the correct name.\nsudo chmod a+rw /dev/ttyACM0\n\nPython 3.10 Compatibility\nYou correctly found the issue with installing the latest version of TCLab for Python 3.10 compatibility. The module developer is still working on the next version of the TCLab package. Until that point, you can either edit the historian.py file (path is in the error message) with a text editor and change from collections import Iterable to from collections.abc import Iterable or install the new package from GitHub:\npip install --upgrade https://github.com/jckantor/TCLab/archive/master.zip\n\nThis will be resolved with the next release of TCLab on PyPI.org. The current version is 0.4.9 that does not include Python 3.10 compatibility because of the Iterable package change.\n" ]
[ 1 ]
[]
[]
[ "iterable", "python" ]
stackoverflow_0074663465_iterable_python.txt
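A three-line diagnostic sketch for the shadowing problem described in the accepted answer; it shows which module `import serial` actually resolves to before uninstalling anything:

import serial

print("loaded from:", serial.__file__)  # a local serial.py or the 'serial'
                                        # PyPI package here means a conflict
print("has Serial:", hasattr(serial, "Serial"))  # pyserial exposes serial.Serial
# If this prints False: pip uninstall serial, then pip install pyserial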
Q: Why doesn't type() work in if statements in Python? user_input = int(input('Enter input: ')) if type(user_input) == "<class 'int'>": print('This is a integer.') The code above outputs nothing to the console. I am just confused because it is very simple and looks like it should work. I've tried removing the int() in the input line, which also outputs nothing. I understand this because user_input turns into a string, but I do not understand why it outputs nothing when user_input is defined as an integer. A: Fix That is because type(user_input) returns a type, not a string, don't confuse yourself with what you see printed and the real thing. When you print something you only see a representation of the thing. Only if it's a string you can copy and compare it directly print(type(type(user_input))) # <class 'type'> So you well understand, this is how it would work using type if str(type(user_input)) == "<class 'int'>": print('This is a integer.') if type(user_input) == int: print('This is a integer.') if type(user_input) is int: print('This is a integer.') Improve The preferred way should be if isinstance(user_input, int): print('This is a integer.') A: It's because you're comparing it to the wrong thing. If you did "type(user_input) == int" your program should work as expected. A: You can use the isinstance method as mentioned above or just directly compare to the int like: user_input = int(input('Enter input: ')) if type(user_input) is int: print('This is a integer.')
Why doesn't type() work in if statements in Python?
user_input = int(input('Enter input: ')) if type(user_input) == "<class 'int'>": print('This is a integer.') The code above outputs nothing to the console. I am just confused because it is very simple and looks like it should work. I've tried removing the int() in the input line, which also outputs nothing. I understand this because user_input turns into a string, but I do not understand why it outputs nothing when user_input is defined as an integer.
[ "Fix\nThat is because type(user_input) returns a type, not a string, don't confuse yourself with what you see printed and the real thing. When you print something you only see a representation of the thing. Only if it's a string you can copy and compare it directly\nprint(type(type(user_input))) # <class 'type'>\n\nSo you well understand, this is how it would work using type\nif str(type(user_input)) == \"<class 'int'>\":\n print('This is a integer.')\n\nif type(user_input) == int:\n print('This is a integer.')\n\nif type(user_input) is int:\n print('This is a integer.')\n\nImprove\nThe prefered way should be\nif isinstance(user_input, int):\n print('This is a integer.')\n\n", "Its because you're comparing it to the wrong thing. if you did \"type(user_input) == int\" your program should work as expected.\n", "You can use the isinstance method as mentioned above or just directly compare to the int like: \nuser_input = int(input('Enter input: '))\n\nif type(user_input) is int:\n print('This is a integer.')\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "conditional_statements", "if_statement", "input", "python", "types" ]
stackoverflow_0074667623_conditional_statements_if_statement_input_python_types.txt
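One caveat to the isinstance() recommendation above, shown in a short sketch: bool is a subclass of int in Python, so isinstance() accepts booleans where an exact type() comparison does not.

user_input = 5
print(type(user_input) is int)      # True: exact type match
print(isinstance(user_input, int))  # True: instance check
print(type(True) is int)            # False: True is a bool
print(isinstance(True, int))        # True: bool subclasses int

If booleans must be rejected, the exact type() comparison from the answers is the right tool; otherwise isinstance() remains the idiomatic choice.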
Q: why 'set' function doesn't work in Jupyter when I write set('hello') in a cell, it raises the error 'tuple' object is not callable
why 'set' function doesn't work in Jupyter
when I write set('hello') in a cell, it raises the error 'tuple' object is not callable
[]
[]
[ "To set an env variable in a jupyter notebook, just use a % magic commands, either %env or %set_env , e.g., %env VAR = VALUE or %env VAR VALUE .\n" ]
[ -1 ]
[ "jupyter_notebook", "python", "set" ]
stackoverflow_0074667529_jupyter_notebook_python_set.txt
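None of the posted answers address the actual error. 'tuple' object is not callable means the name set no longer refers to the builtin; most likely an earlier cell rebound it to a tuple. A minimal reproduction and fix, assuming that is the cause:

set = (1, 2, 3)       # an earlier cell accidentally shadows the builtin
# set('hello')        # would raise TypeError: 'tuple' object is not callable
del set               # remove the shadowing name to restore the builtin
print(set('hello'))   # {'h', 'e', 'l', 'o'} (order may vary)

Restarting the Jupyter kernel has the same effect, since it discards the shadowing assignment.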
Q: Input 0 of layer "conv2d_5" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 2) I am trying to use CNN on multivariate time series instead the most common usage on images. The number of features is between 90 and 120, depending on which I need to consider and experiment with. This is my code scaler = StandardScaler() X_train_s = scaler.fit_transform(X_train) X_test_s = scaler.transform(X_test) X_train_s = X_train_s.reshape((X_train_s.shape[0], X_train_s.shape[1],1)) X_test_s = X_test_s.reshape((X_test_s.shape[0], X_test_s.shape[1],1)) batch_size = 1024 length = 120 n_features = X_train_s.shape[1] generator = TimeseriesGenerator(X_train_s, pd.DataFrame.to_numpy(Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT']]), length=length, batch_size=batch_size) validation_generator = TimeseriesGenerator(X_test_s, pd.DataFrame.to_numpy(Y_test[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT']]), length=length, batch_size=batch_size) early_stop = EarlyStopping(monitor = 'val_accuracy', mode = 'max', verbose = 1, patience = 20) CNN_model = Sequential() model.add( Conv2D( filters=64, kernel_size=(1, 5), strides=1, activation="relu", padding="valid", input_shape=(length, n_features, 1), use_bias=True, ) ) model.add(MaxPooling2D(pool_size=(1, 2))) model.add( Conv2D( filters=64, kernel_size=(1, 5), strides=1, activation="relu", padding="valid", use_bias=True, ) ) [... code continuation ...] In other words, I take the features as one dimension and a certain number of rows as the other dimension. But I get this error "ValueError: Input 0 of layer "conv2d_5" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 2)" that is referred to the first CNN layer. A: Data loading I have made a simple class that demonstrates a reasonable approach to doing so. Mind you, I am not that familiar with TensorFlow, mainly using PyTorch, so the code might not be optimized. You are probably best at defining a custom generator if one can't be used for this. After reading the comments, I noticed that you don't want to compute all ahead of time all the values; this would do so because we are only keeping the underlying data in self.data and creating new tensors based on this. import tensorflow as tf import numpy as np v = np.array([[12055., 11430., 10966., 12055., 11430., 10966.], [11430., 10966., 10725., 11430., 10966., 10725.], [10966., 10725., 10672.,10966., 10725., 10672.]]) q = tf.constant(v) class MyData(): def __init__(self, data, windows_size): self.data = data self.windows_size = windows_size self._dataset = tf.data.Dataset.from_generator(self._generator, output_types=tf.float32, output_shapes=(self.windows_size, self.data.shape[1])) def _generator(self): for i in range(self.data.shape[0] - self.windows_size + 1): yield self.data[i:i+self.windows_size] def __len__(self): return self.data.shape[0] - self.windows_size + 1 def get_dataset(self): return self._dataset # Example usage: test = MyData(q, 2) it = iter(test.get_dataset()) for data in it: print(data.shape) This produces tensors that have a windows_size for the first dimension. The code was made to work with [N, DATA] -> [W, DATA], where N is for the time_series, and W is for the reduced window size; I added part of the example code from the previous link. Model design Multiple design decisions can be made for the model design. Firstly, you can treat it as an embedding problem (Embedding layer). Then you can reshape it to use with your 2D convolutions. 
The second approach is to reshape the data into something resembling 2D images directly. Note that the second approach will be bad if the sequence length changes between different examples. You cannot batchify the training without modifying the network (adding extra layers to process images depending on the size is not relatively straightforward). Lastly, there already exist tutorials that do such things with features of time series data, shown below: def basic_conv2D(n_filters=10, fsize=5, window_size=5, n_features=2): new_model = keras.Sequential() new_model.add(tf.keras.layers.Conv2D(n_filters, (1,fsize), padding=”same”, activation=”relu”, input_shape=(window_size, n_features, 1))) new_model.add(tf.keras.layers.Flatten()) new_model.add(tf.keras.layers.Dense(1000, activation=’relu’)) new_model.add(tf.keras.layers.Dense(100)) new_model.add(tf.keras.layers.Dense(1)) new_model.compile(optimizer=”adam”, loss=”mean_squared_error”) return new_model m2 = basic_conv2D(n_filters=24, fsize=2, window_size=window_size, n_features=data_train_wide.shape[2]) m2.summary()
Input 0 of layer "conv2d_5" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 2)
I am trying to use CNN on multivariate time series instead of the most common usage on images. The number of features is between 90 and 120, depending on which I need to consider and experiment with. This is my code scaler = StandardScaler() X_train_s = scaler.fit_transform(X_train) X_test_s = scaler.transform(X_test) X_train_s = X_train_s.reshape((X_train_s.shape[0], X_train_s.shape[1],1)) X_test_s = X_test_s.reshape((X_test_s.shape[0], X_test_s.shape[1],1)) batch_size = 1024 length = 120 n_features = X_train_s.shape[1] generator = TimeseriesGenerator(X_train_s, pd.DataFrame.to_numpy(Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT']]), length=length, batch_size=batch_size) validation_generator = TimeseriesGenerator(X_test_s, pd.DataFrame.to_numpy(Y_test[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT']]), length=length, batch_size=batch_size) early_stop = EarlyStopping(monitor = 'val_accuracy', mode = 'max', verbose = 1, patience = 20) CNN_model = Sequential() model.add( Conv2D( filters=64, kernel_size=(1, 5), strides=1, activation="relu", padding="valid", input_shape=(length, n_features, 1), use_bias=True, ) ) model.add(MaxPooling2D(pool_size=(1, 2))) model.add( Conv2D( filters=64, kernel_size=(1, 5), strides=1, activation="relu", padding="valid", use_bias=True, ) ) [... code continuation ...] In other words, I take the features as one dimension and a certain number of rows as the other dimension. But I get this error "ValueError: Input 0 of layer "conv2d_5" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 2)" which refers to the first CNN layer.
[ "Data loading\nI have made a simple class that demonstrates a reasonable approach to doing so. Mind you, I am not that familiar with TensorFlow, mainly using PyTorch, so the code might not be optimized.\nYou are probably best at defining a custom generator if one can't be used for this. After reading the comments, I noticed that you don't want to compute all ahead of time all the values; this would do so because we are only keeping the underlying data in self.data and creating new tensors based on this.\nimport tensorflow as tf\nimport numpy as np\nv = np.array([[12055., 11430., 10966., 12055., 11430., 10966.], \n [11430., 10966., 10725., 11430., 10966., 10725.],\n [10966., 10725., 10672.,10966., 10725., 10672.]])\nq = tf.constant(v)\nclass MyData():\n\n def __init__(self, data, windows_size):\n self.data = data\n self.windows_size = windows_size\n self._dataset = tf.data.Dataset.from_generator(self._generator,\n output_types=tf.float32,\n output_shapes=(self.windows_size, self.data.shape[1]))\n\n def _generator(self):\n for i in range(self.data.shape[0] - self.windows_size + 1):\n yield self.data[i:i+self.windows_size]\n \n def __len__(self):\n return self.data.shape[0] - self.windows_size + 1\n\n def get_dataset(self):\n return self._dataset\n# Example usage:\ntest = MyData(q, 2)\nit = iter(test.get_dataset())\n\nfor data in it:\n print(data.shape)\n\nThis produces tensors that have a windows_size for the first dimension. The code was made to work with [N, DATA] -> [W, DATA], where N is for the time_series, and W is for the reduced window size; I added part of the example code from the previous link.\nModel design\nMultiple design decisions can be made for the model design.\nFirstly, you can treat it as an embedding problem (Embedding layer). Then you can reshape it to use with your 2D convolutions.\nThe second approach is to reshape the data into something resembling 2D images directly. Note that the second approach will be bad if the sequence length changes between different examples. You cannot batchify the training without modifying the network (adding extra layers to process images depending on the size is not relatively straightforward).\nLastly, there already exist tutorials that do such things with features of time series data, shown below:\ndef basic_conv2D(n_filters=10, fsize=5, window_size=5, n_features=2):\n new_model = keras.Sequential()\n new_model.add(tf.keras.layers.Conv2D(n_filters, (1,fsize), padding=”same”, activation=”relu”, input_shape=(window_size, n_features, 1)))\n new_model.add(tf.keras.layers.Flatten())\n new_model.add(tf.keras.layers.Dense(1000, activation=’relu’))\n new_model.add(tf.keras.layers.Dense(100))\n new_model.add(tf.keras.layers.Dense(1))\n new_model.compile(optimizer=”adam”, loss=”mean_squared_error”) \n return new_model\nm2 = basic_conv2D(n_filters=24, fsize=2, window_size=window_size, n_features=data_train_wide.shape[2])\nm2.summary()\n\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "python" ]
stackoverflow_0074590804_conv_neural_network_python.txt
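For reference, the ValueError in the question says Conv2D received 2-D input where it needs a 4-D tensor of shape (batch, height, width, channels). A minimal numpy sketch of the reshape that satisfies that contract, with hypothetical sizes standing in for the question's windowed data:

import numpy as np

x = np.random.rand(32, 120, 90)   # (batch, length, n_features), hypothetical sizes
x4d = x[..., np.newaxis]          # -> (32, 120, 90, 1): adds the trailing channels axis
print(x4d.shape)                  # matches input_shape=(120, 90, 1) plus the batch dimension

The reported shape (None, 2) suggests the generator may have been yielding the two-column label array into the model input, so checking what the generator emits is also worthwhile.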
Q: SqlAlchemy AsyncSession transaction When using an async session as a context manager, what happens is that if an exception is raised, I get a warning that I want to get rid of. Here's how I use the session: async with session.begin(): retailer: model.Retailer = (await session.scalars(select(model.Retailer).filter(model.Retailer.name=="default"))).first() await session.execute(insert(model.Contact).values(mock_contact(retailer.uuid))) raise RuntimeError() and the warning that I get is: RuntimeWarning: coroutine 'Transaction.rollback' was never awaited I'm not sure what I'm supposed to do, and the twist here should be a little tricky because I surfed the net for any possible solution and none worked A: The warning message you are seeing, RuntimeWarning: coroutine 'Transaction.rollback' was never awaited, is indicating that you are using an async context manager (async with session.begin()) but you are not awaiting the rollback of the transaction if an exception is raised. In your code, you are using an async context manager to manage a database transaction. This means that the transaction will be automatically committed when the context manager exits normally, but it will be rolled back if an exception is raised. However, because you are not awaiting the rollback of the transaction, the Transaction.rollback coroutine is never actually executed and the warning message is displayed. To fix this issue, you can simply add an await statement to the Transaction.rollback coroutine. Here's an example of how you could do this: async with session.begin() as txn: retailer: model.Retailer = (await session.scalars(select(model.Retailer).filter(model.Retailer.name=="default"))).first() await session.execute(insert(model.Contact).values(mock_contact(retailer.uuid))) raise RuntimeError() # await the rollback of the transaction await txn.rollback()
SqlAlchemy AsyncSession transaction
When using an async session as a context manager, what happens is that if an exception is raised, I get a warning that I want to get rid of. Here's how I use the session: async with session.begin(): retailer: model.Retailer = (await session.scalars(select(model.Retailer).filter(model.Retailer.name=="default"))).first() await session.execute(insert(model.Contact).values(mock_contact(retailer.uuid))) raise RuntimeError() and the warning that I get is: RuntimeWarning: coroutine 'Transaction.rollback' was never awaited I'm not sure what I'm supposed to do, and the twist here should be a little tricky because I surfed the net for any possible solution and none worked
[ "The warning message you are seeing, RuntimeWarning: coroutine 'Transaction.rollback' was never awaited, is indicating that you are using an async context manager (async with session.begin()) but you are not awaiting the rollback of the transaction if an exception is raised.\nIn your code, you are using an async context manager to manage a database transaction. This means that the transaction will be automatically committed when the context manager exits normally, but it will be rolled back if an exception is raised. However, because you are not awaiting the rollback of the transaction, the Transaction.rollback coroutine is never actually executed and the warning message is displayed.\nTo fix this issue, you can simply add an await statement to the Transaction.rollback coroutine. Here's an example of how you could do this:\nasync with session.begin() as txn:\n retailer: model.Retailer = (await session.scalars(select(model.Retailer).filter(model.Retailer.name==\"default\"))).first()\n await session.execute(insert(model.Contact).values(mock_contact(retailer.uuid)))\n raise RuntimeError()\n # await the rollback of the transaction\n await txn.rollback()\n\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0074667608_python_sqlalchemy.txt
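Note that the await txn.rollback() line in the answer above is unreachable, since it sits after the raise. In SQLAlchemy's asyncio API the context manager itself awaits the rollback when an exception escapes the block, so the usual pattern needs no explicit call; a minimal sketch, assuming session is an AsyncSession:

# `async with session.begin():` commits on normal exit and awaits a
# rollback automatically when an exception propagates out of the block.
async def do_work(session):
    try:
        async with session.begin():
            ...  # selects and inserts go here
            raise RuntimeError()
    except RuntimeError:
        pass  # the transaction has already been rolled back here

The RuntimeWarning in the question typically appears when a coroutine such as rollback() is produced but never awaited, for example when sync-style transaction calls are mixed into the async session.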
Q: Remove empty strings from a list of strings on each row in a pandas dataframe I have a pandas dataframe and one of the columns contains a list of strings e.g: ['', 'Hello', 'The house is warm', '', 'What time is it'] The strings are different for each row of the dataframe but all lists on each row contain empty strings. How can I remove these? The column is called 'Description'. I have tried the following methods: df['Description'] = df['Description', [i for i in df['Description'] if i]] while("" in df['Description']): df['Description'].remove("") df['Description'] = [list(filter(None, sublist)) for sublist in df['Description']] But none work. Thank you in advance! A: Create a new list and append only the strings that are not empty. Use eval() if the values are string representations of lists. df['Description'] = df['Description'].apply(lambda x: [item for item in eval(x) if item != ''])
Remove empty strings from a list of strings on each row in a pandas dataframe
I have a pandas dataframe and one of the columns contains a list of strings e.g: ['', 'Hello', 'The house is warm', '', 'What time is it'] The strings are different for each row of the dataframe but all lists on each row contain empty strings. How can I remove these? The column is called 'Description'. I have tried the following methods: df['Description'] = df['Description', [i for i in df['Description'] if i]] while("" in df['Description']): df['Description'].remove("") df['Description'] = [list(filter(None, sublist)) for sublist in df['Description']] But none work. Thank you in advance!
[ "create new list and append only string that is not empty\nuse eval() if they are string representation of list\ndf['Description'] = df['Description'].apply(lambda x: [item for item in eval(x) if item != ''])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "list", "pandas", "python", "string" ]
stackoverflow_0074667700_dataframe_list_pandas_python_string.txt
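A safer variant of the accepted approach: ast.literal_eval avoids the arbitrary-code-execution risk of eval(), and an isinstance() check lets the same line handle columns that hold real lists rather than their string representations. A small self-contained sketch:

import ast
import pandas as pd

df = pd.DataFrame({"Description": [['', 'Hello', ''], "['', 'What time is it']"]})
df["Description"] = df["Description"].apply(
    lambda x: [s for s in (ast.literal_eval(x) if isinstance(x, str) else x) if s]
)
print(df["Description"].tolist())  # [['Hello'], ['What time is it']]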
Q: Dynamically add attributes to instance and use attribute like @Property with get and set I'm learning Python here, so please forgive the silly questions. I encountered an issue with adding attributes to a class instance. I have a dictionary of people with name, age and strength, e.g. { "Mary": {"age":25, "strength": 80}, "John": {"age": 40, "strength": 70}, ... } and a class that will take the list of people as constructor input and add them as its own attributes, and when that attribute is called, it will return the age, e.g.: group = Person({dictionary of person}) # call person name as attribute, get back age first_person = group.Mary # return 25 here group.John # return 40 here However, each attribute will also need to maintain its behavior as a dict/object group.Mary["strength"] # return 80 here I tried __get__() but it seems to work only on class variables, which is not the case here since I need to create multiple group instances of class Person and they don't share variables. Also tried setattr() but it will keep each attribute as a dict and therefore cannot be called directly like group.Mary to get the age. May I know if there is any way in Python to implement this requirement? A: I don't think this is possible. The expression group.Mary["strength"] essentially consists of 2 steps: retrieving the attribute named "Mary" from the object group, and; calling the method __getitem__ on the retrieved attribute with argument "strength". However, note that in your example you require Step 1 (group.Mary) to return 25, which is an integer. Unfortunately, integers can't also be a mapping (objects that implement __getitem__). A: I added to the class a method that creates the properties dynamically; it can also be refactored as a global function. Each attribute is private and its access is ruled by the descriptors. Notice that private attributes created dynamically require a bit more care; see this for details and references. class Person: @classmethod def property_factory(cls, name): # dynamical descriptors, -> private name mangling! p = property(fget=lambda self: getattr(self, f'_{cls.__name__}__{name}'), fset=lambda self, v: setattr(self, f'_{cls.__name__}__{name}', v)) setattr(cls, name, p) def __init__(self, **p): # dynamically add properties for name in p.keys(): self.property_factory(name) # initialization descriptors for name, d in p.items(): setattr(self, name, d) data = {"Mary": {"age":25, "strength": 80}, "John": {"age": 40, "strength": 70}} p = Person(**data) # check property print(type(p).Mary) #<property object at 0x7f5e5d469e00> print(type(p.Mary)) #<class 'dict'> # descriptors in action p.Mary['age'] += 666 p.John['strength'] -= 666 print(p.John) #{'age': 40, 'strength': -596} print(p.Mary) #{'age': 691, 'strength': 80}
Dynamically add attributes to instance and use attribute like @Property with get and set
I'm learning Python here, so please forgive the silly questions. I encountered an issue with adding attributes to a class instance. I have a dictionary of people with name, age and strength, e.g. { "Mary": {"age":25, "strength": 80}, "John": {"age": 40, "strength": 70}, ... } and a class that will take the list of people as constructor input and add them as its own attributes, and when that attribute is called, it will return the age, e.g.: group = Person({dictionary of person}) # call person name as attribute, get back age first_person = group.Mary # return 25 here group.John # return 40 here However, each attribute will also need to maintain its behavior as a dict/object group.Mary["strength"] # return 80 here I tried __get__() but it seems to work only on class variables, which is not the case here since I need to create multiple group instances of class Person and they don't share variables. Also tried setattr() but it will keep each attribute as a dict and therefore cannot be called directly like group.Mary to get the age. May I know if there is any way in Python to implement this requirement?
[ "I don't think this is possible. The expression group.Mary[\"strength\"] essentially consists of 2 steps:\n\nretrieving the attribute named \"Mary\" from the object group, and;\ncalling the method __getitem__ on the retrieved attribute with argument \"strength\".\n\nHowever, note that in your example you require Step 1 (group.Mary) to return 25, which is an integer. Unfortunately, integers can't also be a mapping (objects that implement __getitem__).\n", "Added to class a method to create dynamically the properties but, for example, it can also be refactor as global function.\nEach attribute is private and its access is ruled by the descriptors. Notice the private attributes created dynamically required a bit more of care, see this for details and references.\nclass Person:\n @classmethod\n def property_factory(cls, name):\n # dynamical descriptors, -> private name mangling!\n p = property(fget=lambda self: getattr(self, f'_{cls.__name__}__{name}'), \n fset=lambda self, v: setattr(self, f'_{cls.__name__}__{name}', v))\n \n setattr(cls, name, p)\n\n def __init__(self, **p):\n # dynamically add properties\n for name in p.keys():\n self.property_factory(name)\n\n # initialization descriptors\n for name, d in p.items():\n setattr(self, name, d)\n\n\ndata = {\"Mary\": {\"age\":25, \"strength\": 80}, \"John\": {\"age\": 40, \"strength\": 70}}\n\np = Person(**data)\n\n# check property\nprint(type(p).Mary)\n#<property object at 0x7f5e5d469e00>\nprint(type(p.Mary))\n#<class 'dict'>\n\n# descriptors in action\np.Mary['age'] += 666\np.John['strength'] -= 666\nprint(p.John)\n#{'age': 40, 'strength': -596}\nprint(p.Mary)\n#{'age': 691, 'strength': 80}\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074665577_python.txt
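A lighter alternative to the property factory above, assuming the real goal is attribute-style access to the per-person dicts (so group.Mary returns the dict and group.Mary['strength'] works): override __getattr__, which is called only when normal attribute lookup fails. As the first answer explains, group.Mary cannot simultaneously be the integer age and support ['strength'], so this sketch returns the dict:

class Group:
    def __init__(self, people):
        self._people = dict(people)

    def __getattr__(self, name):
        # invoked only for names not found through normal lookup
        try:
            return self._people[name]
        except KeyError:
            raise AttributeError(name) from None

group = Group({"Mary": {"age": 25, "strength": 80}, "John": {"age": 40, "strength": 70}})
print(group.Mary["age"])       # 25
print(group.Mary["strength"])  # 80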
Q: Loading older sklearn models with new sklearn package I have upgraded my python version from 3.6.5 to 3.10.6 and scikit-learn version from 0.20.3 to 1.1.3. I am getting the following error when I am trying to load my older models built on older sklearn version using the new sklearn version: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/deepakahire/codebase/venv_3_10_6/lib/python3.10/site-packages/joblib/numpy_pickle.py", line 658, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/deepakahire/codebase/venv_3_10_6/lib/python3.10/site-packages/joblib/numpy_pickle.py", line 577, in _unpickle obj = unpickler.load() File "/home/deepakahire/.pyenv/versions/3.10.6/lib/python3.10/pickle.py", line 1213, in load dispatch[key[0]](self) File "/home/deepakahire/.pyenv/versions/3.10.6/lib/python3.10/pickle.py", line 1529, in load_global klass = self.find_class(module, name) File "/home/deepakahire/.pyenv/versions/3.10.6/lib/python3.10/pickle.py", line 1580, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.linear_model.logistic' I am using joblib's load functionality to load the model. I did not upgrade the joblib package. A: This is the problem which I faced during a production release. Complete details and the solution to this issue are discussed at - https://www.kaggle.com/code/adeepak7/load-old-sklearn-models-with-new-sklearn-package
Loading older sklearn models with new sklearn package
I have upgraded my python version from 3.6.5 to 3.10.6 and scikit-learn version from 0.20.3 to 1.1.3. I am getting the following error when I am trying to load my older models built on older sklearn version using the new sklearn version: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/deepakahire/codebase/venv_3_10_6/lib/python3.10/site-packages/joblib/numpy_pickle.py", line 658, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/deepakahire/codebase/venv_3_10_6/lib/python3.10/site-packages/joblib/numpy_pickle.py", line 577, in _unpickle obj = unpickler.load() File "/home/deepakahire/.pyenv/versions/3.10.6/lib/python3.10/pickle.py", line 1213, in load dispatch[key[0]](self) File "/home/deepakahire/.pyenv/versions/3.10.6/lib/python3.10/pickle.py", line 1529, in load_global klass = self.find_class(module, name) File "/home/deepakahire/.pyenv/versions/3.10.6/lib/python3.10/pickle.py", line 1580, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.linear_model.logistic' I am using joblib's load functionality to load the model. I did not upgrade the joblib package.
[ "This is the problem which I faced during a production release.\nComplete details and the solution to this issue are discussed at -\nhttps://www.kaggle.com/code/adeepak7/load-old-sklearn-models-with-new-sklearn-package\n" ]
[ 0 ]
[]
[]
[ "model_management", "python", "python_3.x", "scikit_learn" ]
stackoverflow_0074667759_model_management_python_python_3.x_scikit_learn.txt
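Since the linked answer may rot, one widely used workaround for this particular ModuleNotFoundError is to alias the removed module path before unpickling. This is a sketch, assuming the pickled estimator is a LogisticRegression and that the private module sklearn.linear_model._logistic exists in the installed version; even then the unpickled model may misbehave across such a large version jump, so re-training or re-exporting remains the robust fix:

import sys
import sklearn.linear_model._logistic as _logistic  # home of the class since scikit-learn 0.22

# Let pickle resolve the old import path against the new module.
sys.modules["sklearn.linear_model.logistic"] = _logistic

from joblib import load
# model = load("old_model.joblib")  # hypothetical filename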
Q: Return a list of weekdays, starting with given weekday My task is to define a function weekdays(weekday) that returns a list of weekdays, starting with the given weekday. It should work like this: >>> weekdays('Wednesday') ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday'] So far I've come up with this one: def weekdays(weekday): days = ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday') result = "" for day in days: if day == weekday: result += day return result But this prints the input day only: >>> weekdays("Sunday") 'Sunday' What am I doing wrong? A: The reason your code is only returning one day name is because weekday will never match more than one string in the days tuple and therefore won't add any of the days of the week that follow it (nor wrap around to those before it). Even if it did somehow, it would still return them all as one long string because you're initializing result to an empty string, not an empty list. Here's a solution that uses the datetime module to create a list of all the weekday names starting with "Monday" in the current locale's language. This list is then used to create another list of names in the desired order which is returned. It does the ordering by finding the index of designated day in the original list and then splicing together two slices of it relative to that index to form the result. As an optimization it also caches the locale's day names so if it's ever called again with the same current locale (a likely scenario), it won't need to recreate this private list. import datetime import locale def weekdays(weekday): current_locale = locale.getlocale() if current_locale not in weekdays._days_cache: # Add day names from a reference date, Monday 2001-Jan-1 to cache. weekdays._days_cache[current_locale] = [ datetime.date(2001, 1, i).strftime('%A') for i in range(1, 8)] days = weekdays._days_cache[current_locale] index = days.index(weekday) return days[index:] + days[:index] weekdays._days_cache = {} # initialize cache print(weekdays('Wednesday')) # ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday'] Besides not needing to hard-code days names in the function, another advantage to using the datetime module is that code utilizing it will automatically work in other languages. This can be illustrated by changing the locale and then calling the function with a day name in the corresponding language. For example, although France is not my default locale, I can set it to be the current one for testing purposes as shown below. Note: According to this Capitalization of day names article, the names of the days of the week are not capitalized in French like they are in my default English locale, but that is taken into account automatically, too, which means the weekday name passed to it must be in the language of the current locale and is also case-sensitive. Of course you could modify the function to ignore the lettercase of the input argument, if desired. # set or change locale locale.setlocale(locale.LC_ALL, 'french_france') print(weekdays('mercredi')) # use French equivalent of 'Wednesday' # ['mercredi', 'jeudi', 'vendredi', 'samedi', 'dimanche', 'lundi', 'mardi'] A: A far quicker approach would be to keep in mind that the weekdays cycle. As such, we just need to get the first day we want to include in the list, and add the remaining 6 elements to the end. Or in other words, we get the weekday list starting from the starting day, append another full week, and return only the first 7 elements (for the full week). days = ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday') def weekdays ( weekday ): index = days.index( weekday ) return list( days[index:] + days )[:7] >>> weekdays( 'Wednesday' ) ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday'] A: def weekdays(day): days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] i=days.index(day) # get the index of the selected day d1=days[i:] # get the list from and including this index d1.extend(days[:i]) # append the list from the beginning to this index return d1 And if you want to test that it works: def test_weekdays(): days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] for day in days: print weekdays(day) A: Hmm, you are currently only searching for the given weekday and setting it as the result :) You can use the slice ability in python list to do this: result = days[days.index(weekday):] + days[:days.index(weekday)] A: Here's more like what you want: def weekdays(weekday): days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] index = days.index(weekday) return (days + days)[index:index+7] A: You don't need to hardcode the array of weekdays. It's already available in the calendar module. import calendar as cal def weekdays(weekday): start = [d for d in cal.day_name].index(weekday) return [cal.day_name[(i+start) % 7] for i in range(7)] A: Your result variable is a string and not a list object. Also, it only gets updated one time which is when it is equal to the passed weekday argument. Here's an implementation: import calendar def weekdays(weekday): days = [day for day in calendar.day_name] for day in days: days.insert(0, days.pop()) # add last day as new first day of list if days[0] == weekday: # if new first day same as weekday then all done break return days Example output: >>> weekdays("Wednesday") ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday'] >>> weekdays("Friday") ['Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday'] >>> weekdays("Tuesday") ['Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday'] A: Every time you run the for loop, the day variable changes. So day is equal to your input only once. Using "Sunday" as input, it first checked if Monday = Sunday, then if Tuesday = Sunday, then if Wednesday = Sunday, until it finally found that Sunday = Sunday and returned Sunday. A: Another approach using the standard library: days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] def weekdays(weekday): n = days.index(weekday) return list(itertools.islice(itertools.cycle(days), n, n + 7)) Itertools is a bit much in this case. Since you know at most one extra cycle is needed, you could do that manually: days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] days += days def weekdays(weekday): n = days.index(weekday) return days[n:n+7] Both give the expected output: >>> weekdays("Wednesday") ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday'] >>> weekdays("Sunday") ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] >>> weekdays("Monday") ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] A: The code below will generate a list based on X days ahead; if you want to generate a list of days going back, change the plus to a minus. import datetime numdays = 7 base = datetime.date.today() date_list = [base + datetime.timedelta(days=x) for x in range(numdays)] date_list_with_dayname = ["%s, %s" % ((base + datetime.timedelta(days=x)).strftime("%A"), base + datetime.timedelta(days=x)) for x in range(numdays)] A: You can use the Python standard calendar module with the very convenient list-like deque object. This way, we just have to rotate the list of the days to the one we want. import calendar from collections import deque def get_weekdays(first: str = 'Monday') -> deque[str]: weekdays = deque(calendar.day_name) weekdays.rotate(-weekdays.index(first)) return weekdays get_weekdays('Wednesday') that outputs: deque(['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday']) A: Okay, my simple approach would be: result = days[days.index(weekday):] + days[:days.index(weekday)] Hope this will be helpful
Return a list of weekdays, starting with given weekday
My task is to define a function weekdays(weekday) that returns a list of weekdays, starting with the given weekday. It should work like this: >>> weekdays('Wednesday') ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday'] So far I've come up with this one: def weekdays(weekday): days = ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday') result = "" for day in days: if day == weekday: result += day return result But this prints the input day only: >>> weekdays("Sunday") 'Sunday' What am I doing wrong?
[ "The reason your code is only returning one day name is because weekday will never match more than one string in the days tuple and therefore won't add any of the days of the week that follow it (nor wrap around to those before it). Even if it did somehow, it would still return them all as one long string because you're initializing result to an empty string, not an empty list.\nHere's a solution that uses the datetime module to create a list of all the weekday names starting with \"Monday\" in the current locale's language. This list is then used to create another list of names in the desired order which is returned. It does the ordering by finding the index of designated day in the original list and then splicing together two slices of it relative to that index to form the result. As an optimization it also caches the locale's day names so if it's ever called again with the same current locale (a likely scenario), it won't need to recreate this private list.\nimport datetime\nimport locale\n\ndef weekdays(weekday):\n current_locale = locale.getlocale()\n if current_locale not in weekdays._days_cache:\n # Add day names from a reference date, Monday 2001-Jan-1 to cache.\n weekdays._days_cache[current_locale] = [\n datetime.date(2001, 1, i).strftime('%A') for i in range(1, 8)]\n days = weekdays._days_cache[current_locale]\n index = days.index(weekday)\n return days[index:] + days[:index]\n\nweekdays._days_cache = {} # initialize cache\n\nprint(weekdays('Wednesday'))\n# ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday']\n\nBesides not needing to hard-code days names in the function, another advantage to using the datetime module is that code utilizing it will automatically work in other languages. This can be illustrated by changing the locale and then calling the function with a day name in the corresponding language.\nFor example, although France is not my default locale, I can set it to be the current one for testing purposes as shown below. Note: According to this Capitalization of day names article, the names of the days of the week are not capitalized in French like they are in my default English locale, but that is taken into account automatically, too, which means the weekday name passed to it must be in the language of the current locale and is also case-sensitive. Of course you could modify the function to ignore the lettercase of the input argument, if desired.\n# set or change locale\nlocale.setlocale(locale.LC_ALL, 'french_france')\n\nprint(weekdays('mercredi')) # use French equivalent of 'Wednesday'\n# ['mercredi', 'jeudi', 'vendredi', 'samedi', 'dimanche', 'lundi', 'mardi']\n\n", "A far quicker approach would be to keep in mind, that the weekdays cycle. As such, we just need to get the first day we want to include the list, and add the remaining 6 elements to the end. 
Or in other words, we get the weekday list starting from the starting day, append another full week, and return only the first 7 elements (for the full week).\ndays = ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday')\ndef weekdays ( weekday ):\n index = days.index( weekday )\n return list( days[index:] + days )[:7]\n\n>>> weekdays( 'Wednesday' )\n['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday']\n\n", "def weekdays(day):\n days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n i=days.index(day) # get the index of the selected day\n d1=days[i:] #get the list from an including this index\n d1.extend(days[:i]) # append the list form the beginning to this index\n return d1\n\nAnd if you want to test that it works: \ndef test_weekdays():\n days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n for day in days:\n print weekdays(day)\n\n", "Hmm, you are currently only searching for the given weekday and set as result :)\nYou can use the slice ability in python list to do this:\nresult = days[days.index(weekday):] + days[:days.index(weekdays)]\n\n", "Here's more what you want:\ndef weekdays(weekday):\n days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n index = days.index(weekday)\n return (days + days)[index:index+7]\n\n", "You don't need to hardcode array of weekdays. It's already available in calendar module.\nimport calendar as cal\n\ndef weekdays(weekday):\n start = [d for d in cal.day_name].index(weekday)\n return [cal.day_name[(i+start) % 7] for i in range(7)]\n\n", "Your result variable is a string and not a list object. Also, it only gets updated one time which is when it is equal to the passed weekday argument.\nHere's an implementation:\nimport calendar\n\ndef weekdays(weekday):\n days = [day for day in calendar.day_name]\n for day in days:\n days.insert(0, days.pop()) # add last day as new first day of list \n if days[0] == weekday: # if new first day same as weekday then all done\n break \n return days\n\nExample output:\n>>> weekdays(\"Wednesday\")\n['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday']\n>>> weekdays(\"Friday\")\n['Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday']\n>>> weekdays(\"Tuesday\")\n['Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday']\n\n", "Every time you run the for loop, the day variable changes. So day is equal to your input only once. Using \"Sunday\" as input, it first checked if Monday = Sunday, then if Tuesday = Sunday, then if Wednesday = Sunday, until it finally found that Sunday = Sunday and returned Sunday.\n", "Another approach using the standard library:\ndays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday',\n 'Sunday']\ndef weekdays(weekday):\n n = days.index(weekday)\n return list(itertools.islice(itertools.cycle(days), n, n + 7))\n\nItertools is a bit much in this case. 
Since you know at most one extra cycle is needed, you could do that manually:\ndays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday',\n 'Sunday']\ndays += days\ndef weekdays(weekday):\n n = days.index(weekday)\n return days[n:n+7]\n\nBoth give the expected output:\n>>> weekdays(\"Wednesday\")\n['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday']\n>>> weekdays(\"Sunday\")\n['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']\n>>> weekdays(\"Monday\")\n['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n\n", "The code below will gnereate a list based on X days you want a head , of you want to generate list of days going back change the [ minus to plus ]\nimport datetime\nnumdays = 7\nbase = datetime.date.today()\ndate_list = [base + datetime.timedelta(days=x) for x in range(numdays)]\ndate_list_with_dayname = [\"%s, %s\" % ((base + datetime.timedelta(days=x)).strftime(\"%A\"), base + datetime.timedelta(days=x)) for x in range(numdays)]\n\n", "You can use Python standard calendar module with very convenient list-like deque object. This way, we just have to rotate the list of the days to the one we want.\nimport calendar\nfrom collections import deque\n\ndef get_weekdays(first: str = 'Monday') -> deque[str]:\n weekdays = deque(calendar.day_name)\n weekdays.rotate(-weekdays.index(first))\n return weekdays\n\nget_weekdays('Wednesday')\n\nthat outputs:\ndeque(['Wednesday',\n 'Thursday',\n 'Friday',\n 'Saturday',\n 'Sunday',\n 'Monday',\n 'Tuesday'])\n\n", "okay, My simple approach would be :\nresult = days[days.index(weekday):] + days[:days.index(weekdays)]\n\nhope This will be helpfull\n" ]
[ 15, 10, 7, 4, 4, 4, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "calendar", "python", "weekday" ]
stackoverflow_0004082772_calendar_python_weekday.txt
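Most of the answers above amount to rotating a fixed list; the same result can be written as a single modular-arithmetic comprehension, with no slicing, concatenation, or extra copies:

DAYS = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']

def weekdays(weekday):
    start = DAYS.index(weekday)
    return [DAYS[(start + i) % 7] for i in range(7)]

print(weekdays('Wednesday'))
# ['Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday', 'Tuesday']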
Q: I want to convert array of intensities to an image I have the MNIST dataset. The CSV file contains 70,000 rows and 785 columns. The last column is the label. I want to convert the first 784 columns of a row to the respective grayscale image with dimensions 28x28. Image of the data: A: So you just want to convert your data from csv to grayscale? from keras.preprocessing.image import ImageDataGenerator data_generator = ImageDataGenerator() data = data_generator.flow_from_dataframe(df, color_mode="grayscale") The df variable is your CSV read into a dataframe. Your data should return grayscale images using the code above.
I want to convert array of intensities to an image
I have the MNIST dataset. The CSV file contains 70,000 rows and 785 columns. The last column is the label. I want to convert the first 784 columns of a row to the respective grayscale image with dimensions 28x28. Image of the data:
[ "So you just want to convert your data from csv to grayscale?\nfrom keras.preprocessing.image import ImageDataGenerator\ndata_generator = ImageDataGenerator()\ndata = data_generator.flow_from_dataframe(df, color_mode=\"grayscale\")\n\nThe df variable is your read csv. your data should return grayscale images using the code above.\n" ]
[ 0 ]
[]
[]
[ "csv", "mnist", "python" ]
stackoverflow_0074667006_csv_mnist_python.txt
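The keras answer streams images from a dataframe of file references; since the question's pixel values already sit in the CSV rows, a more direct route is to reshape one row into a 28x28 array and save it. A sketch assuming the label really is the last of the 785 columns and using a hypothetical file name:

import numpy as np
import pandas as pd
from PIL import Image

df = pd.read_csv("mnist.csv")  # hypothetical filename, 785 columns per row
row = df.iloc[0]
pixels = row.iloc[:-1].to_numpy(dtype=np.uint8).reshape(28, 28)  # drop the label column
label = int(row.iloc[-1])

Image.fromarray(pixels, mode="L").save(f"digit_{label}.png")  # "L" = 8-bit grayscale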
Q: How do you loop through api with list of parameters and store resulting calls in one dataframe I'm trying to loop a list of match ids (LMID5) as parameters for api calls. I think I have the looping of the API calls correct as it prints the urls, but I'm struggling to store the results every time in the same dataframe. The results of the API come through in JSON, which I then normalise into a DF. When just using one parameter to call the API, this is how I code it and create a df. responsematchDetails = requests.get(url = matchDetails) dfLM = pd.json_normalize(responseleagueMatches.json()['data']) The issue is when trying to loop through a list of parameters and trying to store in one df. The code below is what I have written to try to loop many calls to the API using parameters from a list, but I'm struggling to store the data each time. for i in list(LMID5): url = 'https://api.football-data-api.com/match?key=&match_id=' + str(i) rm = requests.get(url) print(url) for match in pd.json_normalize(rm.json()["data"]): dfMatchDetails = dfMatchDetails.append({[match] }, ignore_index=True) A: Can you try this: dfMatchDetails=pd.DataFrame() for i in list(LMID5): url = 'https://api.football-data-api.com/match?key=&match_id=' + str(i) rm = requests.get(url) print(url) dfMatchDetails=pd.concat([dfMatchDetails,pd.json_normalize(rm.json()['data'])])
How do you loop through api with list of parameters and store resulting calls in one dataframe
I'm trying to loop a list of match ids (LMID5) as parameters for api calls. I think I have the looping of the API calls correct as it prints the urls, but I'm struggling to store the results every time in the same dataframe. The results of the API come through in JSON, which I then normalise into a DF. When just using one parameter to call the API, this is how I code it and create a df. responsematchDetails = requests.get(url = matchDetails) dfLM = pd.json_normalize(responseleagueMatches.json()['data']) The issue is when trying to loop through a list of parameters and trying to store in one df. The code below is what I have written to try to loop many calls to the API using parameters from a list, but I'm struggling to store the data each time. for i in list(LMID5): url = 'https://api.football-data-api.com/match?key=&match_id=' + str(i) rm = requests.get(url) print(url) for match in pd.json_normalize(rm.json()["data"]): dfMatchDetails = dfMatchDetails.append({[match] }, ignore_index=True)
[ "Can you try this:\ndfMatchDetails=pd.DataFrame()\nfor i in list(LMID5):\n url = 'https://api.football-data-api.com/match?key=&match_id=' + str(i)\n rm = requests.get(url)\n print(url)\n dfMatchDetails=pd.concat([dfMatchDetails,pd.json_normalize(rm.json()['data'])])\n\n" ]
[ 1 ]
[]
[]
[ "api", "loops", "pandas", "python", "python_requests" ]
stackoverflow_0074667638_api_loops_pandas_python_python_requests.txt
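A variant of the accepted pattern that avoids re-copying the accumulated frame on every iteration: collect the per-call frames in a list and concatenate once at the end (same hypothetical endpoint and empty key as in the question):

import pandas as pd
import requests

frames = []
for match_id in LMID5:  # LMID5 is the question's list of match ids
    url = f"https://api.football-data-api.com/match?key=&match_id={match_id}"
    response = requests.get(url)
    frames.append(pd.json_normalize(response.json()["data"]))

dfMatchDetails = pd.concat(frames, ignore_index=True)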
Q: Add memoization to recursive function, Python Python. First of all, I wrote recursive code that finds how many shortest paths a matrix has, going from the last cell of the matrix to the first cell. This is my code, which works: def matrix_explorer(n,m): """ Recursive function that finds the number of the shortest paths from the beginning cell of the matrix to the last cell :param n: Integer, how many rows the matrix has :param m: Integer, how many columns the matrix has :return: Number of the shortest paths """ count=0 # Number of paths if n == 1 or m == 1: # Stop condition, if one of the dimensions is equal to 1 return count+1 # Add 1 to the number of paths else: return matrix_explorer(n-1, m) + matrix_explorer(n, m-1) # Go to the cell above or to the left of the current cell I need to add memoization to this recursive function. What I have, but it's not actually working: def matrix_explorer_cache(n ,m): dictionary = {} count = 0 if n == 1 or m == 1: return count+1 else: dictionary[n][m] = matrix_explorer_cache(n-1, m) + matrix_explorer_cache(n, m-1) return dictionary[n][m]
Add memoization to recursive function, Python
Python. First of all, I wrote recursive code that finds how many shortest paths a matrix has, going from the last cell of the matrix to the first cell. This is my code, which works: def matrix_explorer(n,m): """ Recursive function that finds the number of the shortest paths from the beginning cell of the matrix to the last cell :param n: Integer, how many rows the matrix has :param m: Integer, how many columns the matrix has :return: Number of the shortest paths """ count=0 # Number of paths if n == 1 or m == 1: # Stop condition, if one of the dimensions is equal to 1 return count+1 # Add 1 to the number of paths else: return matrix_explorer(n-1, m) + matrix_explorer(n, m-1) # Go to the cell above or to the left of the current cell I need to add memoization to this recursive function. What I have, but it's not actually working: def matrix_explorer_cache(n ,m): dictionary = {} count = 0 if n == 1 or m == 1: return count+1 else: dictionary[n][m] = matrix_explorer_cache(n-1, m) + matrix_explorer_cache(n, m-1) return dictionary[n][m]
[]
[]
[ "To add memoization to your matrix_explorer function, you can use a dictionary to store the results of previously computed paths. When the function is called, you can check if the result for the given n and m values has already been computed. If so, you can simply return the stored result from the dictionary instead of recomputing it. If not, you can compute the result and store it in the dictionary for future use.\ndictionary = {}\ndef matrix_explorer_cache(n ,m):\n if n == 1 or m == 1:\n return 1\n else:\n # Check if the result for the given n and m values has already been computed.\n if (n, m) not in dictionary:\n # Compute and store results\n dictionary[(n, m)] = matrix_explorer_cache(n-1, m) + matrix_explorer_cache(n, m-1)\n # At this point, dictionary[(n, m)] is guaranteed to exist\n return dictionary[(n, m)]\n\n" ]
[ -3 ]
[ "function", "matrix", "memoization", "python", "recursion" ]
stackoverflow_0074667818_function_matrix_memoization_python_recursion.txt
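For completeness, the standard library already provides the memoization asked for here: functools.lru_cache handles the dictionary bookkeeping sketched in the non-accepted answer automatically.

from functools import lru_cache

@lru_cache(maxsize=None)
def matrix_explorer_cache(n, m):
    if n == 1 or m == 1:
        return 1
    return matrix_explorer_cache(n - 1, m) + matrix_explorer_cache(n, m - 1)

print(matrix_explorer_cache(3, 3))  # 6 shortest paths in a 3x3 grid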
Q: switch case matching with array index value I have this function in which I want to assign the values of the img array that has 1 to 4 numbers, and I want to put red, yellow, green, blue into the array matrixColored, but when I use switch case it gives errors on the 4th line; help me, thanks. def colorPrint(): for i in range(r): for j in range(c): match img[i][j]: case 1: matrixColored[i][j] = 'red' case 2: matrixColored[i][j] = 'green' case 3: matrixColored[i][j] = 'blue' case 4: matrixColored[i][j] = 'yellow' case _: return "something went wrong" A: Can't say what the problem is without the error message, but match is not the only way to do this. Here's an example using a dictionary: colorDict = {1:'red', 2:'green', 3:'blue', 4:'yellow'} img = 3 color = colorDict.get(img) if img in colorDict: matrixColored = colorDict[img] print(matrixColored) else: print('something went wrong') In your code example, this would transpose to: def colorPrint(): colorDict = {1:'red', 2:'green', 3:'blue', 4:'yellow'} for i in range(r): for j in range(c): if img[i][j] in colorDict: matrixColored[i][j] = colorDict[img[i][j]] else: print('something went wrong')
switch case matching with array index value
I have this function in which I want to assign the values of the img array that has 1 to 4 numbers, and I want to put red, yellow, green, blue into the array matrixColored, but when I use switch case it gives errors on the 4th line; help me, thanks. def colorPrint(): for i in range(r): for j in range(c): match img[i][j]: case 1: matrixColored[i][j] = 'red' case 2: matrixColored[i][j] = 'green' case 3: matrixColored[i][j] = 'blue' case 4: matrixColored[i][j] = 'yellow' case _: return "something went wrong"
[ "Can't say what the problem is without the error message but, match is not the only way to do this.\nHere's an example using a dictionary:\ncolorDict = {1:'red', 2:'green', 3:'blue', 4:'yellow'}\n\nimg = 3\ncolor = colorDict.get(img)\nif img in colorDict:\n matrixColored = colorDict[img]\n print(matrixColored)\nelse:\n print('something went wrong')\n\nIn your code example, this would transpose to :\ndef colorPrint():\n colorDict = {1:'red', 2:'green', 3:'blue', 4:'yellow'}\n for i in range(r):\n for j in range(c):\n if img[i][j] in colorDict:\n matrixColored[i][j] = colorDict[img[i][j]]\n else:\n print('something went wrong')\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "for_loop", "indexing", "python", "range" ]
stackoverflow_0074644591_arrays_for_loop_indexing_python_range.txt
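If match/case is kept, note it requires Python 3.10+; on older interpreters the match statement is a syntax error at that very line, which may be the unquoted "4th line" error the question mentions. The dictionary answer can be condensed further with .get() and a default; a small sketch:

COLOR_BY_CODE = {1: 'red', 2: 'green', 3: 'blue', 4: 'yellow'}

def color_print(img, matrix_colored):
    for i, row in enumerate(img):
        for j, code in enumerate(row):
            matrix_colored[i][j] = COLOR_BY_CODE.get(code, 'unknown')

img = [[1, 2], [3, 4]]
colored = [[None] * 2 for _ in range(2)]
color_print(img, colored)
print(colored)  # [['red', 'green'], ['blue', 'yellow']]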
Q: npm ERR! gyp ERR! stack Error: Could not find any Visual Studio installation to use Alright, After quite some reinstalling, reading I still can't figure what is going on. I'm trying to run npm install --force on a codecanyon script, reinstalled node to latest version, same as python and build tools, added VCINSTALLDIR to path, restarted windows multiple times and still the same issue. npm ERR! code 1 npm ERR! path C:\Users\denis\OneDrive\Documents\Website\node_modules\node-sass npm ERR! command failed npm ERR! command C:\Windows\system32\cmd.exe /d /s /c node scripts/build.js npm ERR! Building: C:\Program Files\nodejs\node.exe C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\bin\node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library= npm ERR! gyp info it worked if it ends with ok npm ERR! gyp verb cli [ npm ERR! gyp verb cli 'C:\\Program Files\\nodejs\\node.exe', npm ERR! gyp verb cli 'C:\\Users\\denis\\OneDrive\\Documents\\Website\\node_modules\\node-gyp\\bin\\node-gyp.js', npm ERR! gyp verb cli 'rebuild', npm ERR! gyp verb cli '--verbose', npm ERR! gyp verb cli '--libsass_ext=', npm ERR! gyp verb cli '--libsass_cflags=', npm ERR! gyp verb cli '--libsass_ldflags=', npm ERR! gyp verb cli '--libsass_library=' npm ERR! gyp verb cli ] npm ERR! gyp info using [email protected] npm ERR! gyp info using [email protected] | win32 | x64 npm ERR! gyp verb command rebuild [] npm ERR! gyp verb command clean [] npm ERR! gyp verb clean removing "build" directory npm ERR! gyp verb command configure [] npm ERR! gyp verb find Python checking Python explicitly set from command line or npm configuration npm ERR! gyp verb find Python - "--python=" or "npm config get python" is "C:\Python311\python.exe" npm ERR! gyp verb find Python - executing "C:\Python311\python.exe" to get executable path npm ERR! gyp verb find Python - executable path is "C:\Python311\python.exe" npm ERR! gyp verb find Python - executing "C:\Python311\python.exe" to get version npm ERR! gyp verb find Python - version is "3.11.0" npm ERR! gyp info find Python using Python version 3.11.0 found at "C:\Python311\python.exe" npm ERR! gyp verb get node dir no --target version specified, falling back to host node version: 19.2.0 npm ERR! gyp verb command install [ '19.2.0' ] npm ERR! gyp verb install input version string "19.2.0" npm ERR! gyp verb install installing version: 19.2.0 npm ERR! gyp verb install --ensure was passed, so won't reinstall if already installed npm ERR! gyp verb install version is already installed, need to check "installVersion" npm ERR! gyp verb got "installVersion" 9 npm ERR! gyp verb needs "installVersion" 9 npm ERR! gyp verb install version is good npm ERR! gyp verb get node dir target node version installed: 19.2.0 npm ERR! gyp verb build dir attempting to create "build" dir: C:\Users\denis\OneDrive\Documents\Website\node_modules\node-sass\build npm ERR! gyp verb build dir "build" dir needed to be created? Yes npm ERR! gyp verb find VS msvs_version was set from command line or npm config npm ERR! gyp verb find VS - looking for Visual Studio version 2022 npm ERR! gyp verb find VS running in VS Command Prompt, installation path is: npm ERR! gyp verb find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\Community" npm ERR! gyp verb find VS - will only use this version npm ERR! gyp verb find VS checking VS2022 (17.4.33122.133) found at: npm ERR! gyp verb find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools" npm ERR! 
gyp verb find VS - found "Visual Studio C++ core features" npm ERR! gyp verb find VS - found VC++ toolset: v143 npm ERR! gyp verb find VS - missing any Windows SDK npm ERR! gyp verb find VS could not find a version of Visual Studio 2017 or newer to use npm ERR! gyp verb find VS looking for Visual Studio 2015 npm ERR! gyp verb find VS - not found npm ERR! gyp verb find VS not looking for VS2013 as it is only supported up to Node.js 8 npm ERR! gyp ERR! find VS npm ERR! gyp ERR! find VS msvs_version was set from command line or npm config npm ERR! gyp ERR! find VS - looking for Visual Studio version 2022 npm ERR! gyp ERR! find VS running in VS Command Prompt, installation path is: npm ERR! gyp ERR! find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\Community" npm ERR! gyp ERR! find VS - will only use this version npm ERR! gyp ERR! find VS checking VS2022 (17.4.33122.133) found at: npm ERR! gyp ERR! find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools" npm ERR! gyp ERR! find VS - found "Visual Studio C++ core features" npm ERR! gyp ERR! find VS - found VC++ toolset: v143 npm ERR! gyp ERR! find VS - missing any Windows SDK npm ERR! gyp ERR! find VS could not find a version of Visual Studio 2017 or newer to use npm ERR! gyp ERR! find VS looking for Visual Studio 2015 npm ERR! gyp ERR! find VS - not found npm ERR! gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8 npm ERR! gyp ERR! find VS msvs_version does not match this VS Command Prompt or the npm ERR! gyp ERR! find VS installation cannot be used. npm ERR! gyp ERR! find VS npm ERR! gyp ERR! find VS ************************************************************** npm ERR! gyp ERR! find VS You need to install the latest version of Visual Studio npm ERR! gyp ERR! find VS including the "Desktop development with C++" workload. npm ERR! gyp ERR! find VS For more information consult the documentation at: npm ERR! gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows npm ERR! gyp ERR! find VS ************************************************************** npm ERR! gyp ERR! find VS npm ERR! gyp ERR! configure error npm ERR! gyp ERR! stack Error: Could not find any Visual Studio installation to use npm ERR! gyp ERR! stack at VisualStudioFinder.fail (C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:122:47) npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:75:16 npm ERR! gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:363:14) npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:71:14 npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:384:16 npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\util.js:54:7 npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\util.js:33:16 npm ERR! gyp ERR! stack at ChildProcess.exithandler (node:child_process:427:5) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:513:28) npm ERR! gyp ERR! stack at maybeClose (node:internal/child_process:1098:16) npm ERR! gyp ERR! stack at ChildProcess._handle.onexit (node:internal/child_process:304:5) npm ERR! gyp ERR! System Windows_NT 10.0.22621 npm ERR! gyp ERR! 
command "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\denis\\OneDrive\\Documents\\Website\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" npm ERR! gyp ERR! cwd C:\Users\denis\OneDrive\Documents\Website\node_modules\node-sass npm ERR! gyp ERR! node -v v19.2.0 npm ERR! gyp ERR! node-gyp -v v8.4.1 npm ERR! gyp ERR! not ok npm ERR! Build failed with error code: 1 System Information Build tools 2022 Installed + SDK for windows 11 Env path Visual Studio Code output of python and msvs_version All mentioned in pictures. A: It seems it has something to do with Windows 11, running a VM of Win10 Pro where it executes perfectly with the latest packages. PS: Windows Build Tools are now embedded in the latest Node, so no need to install them manually. Just Node, Git, Visual Studio Code and restart for PATH to update automatically.
npm ERR! gyp ERR! stack Error: Could not find any Visual Studio installation to use
Alright, After quite some reinstalling, reading I still can't figure what is going on. I'm trying to run npm install --force on a codecanyon script, reinstalled node to latest version, same as python and build tools, added VCINSTALLDIR to path, restarted windows multiple times and still the same issue. npm ERR! code 1 npm ERR! path C:\Users\denis\OneDrive\Documents\Website\node_modules\node-sass npm ERR! command failed npm ERR! command C:\Windows\system32\cmd.exe /d /s /c node scripts/build.js npm ERR! Building: C:\Program Files\nodejs\node.exe C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\bin\node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library= npm ERR! gyp info it worked if it ends with ok npm ERR! gyp verb cli [ npm ERR! gyp verb cli 'C:\\Program Files\\nodejs\\node.exe', npm ERR! gyp verb cli 'C:\\Users\\denis\\OneDrive\\Documents\\Website\\node_modules\\node-gyp\\bin\\node-gyp.js', npm ERR! gyp verb cli 'rebuild', npm ERR! gyp verb cli '--verbose', npm ERR! gyp verb cli '--libsass_ext=', npm ERR! gyp verb cli '--libsass_cflags=', npm ERR! gyp verb cli '--libsass_ldflags=', npm ERR! gyp verb cli '--libsass_library=' npm ERR! gyp verb cli ] npm ERR! gyp info using [email protected] npm ERR! gyp info using [email protected] | win32 | x64 npm ERR! gyp verb command rebuild [] npm ERR! gyp verb command clean [] npm ERR! gyp verb clean removing "build" directory npm ERR! gyp verb command configure [] npm ERR! gyp verb find Python checking Python explicitly set from command line or npm configuration npm ERR! gyp verb find Python - "--python=" or "npm config get python" is "C:\Python311\python.exe" npm ERR! gyp verb find Python - executing "C:\Python311\python.exe" to get executable path npm ERR! gyp verb find Python - executable path is "C:\Python311\python.exe" npm ERR! gyp verb find Python - executing "C:\Python311\python.exe" to get version npm ERR! gyp verb find Python - version is "3.11.0" npm ERR! gyp info find Python using Python version 3.11.0 found at "C:\Python311\python.exe" npm ERR! gyp verb get node dir no --target version specified, falling back to host node version: 19.2.0 npm ERR! gyp verb command install [ '19.2.0' ] npm ERR! gyp verb install input version string "19.2.0" npm ERR! gyp verb install installing version: 19.2.0 npm ERR! gyp verb install --ensure was passed, so won't reinstall if already installed npm ERR! gyp verb install version is already installed, need to check "installVersion" npm ERR! gyp verb got "installVersion" 9 npm ERR! gyp verb needs "installVersion" 9 npm ERR! gyp verb install version is good npm ERR! gyp verb get node dir target node version installed: 19.2.0 npm ERR! gyp verb build dir attempting to create "build" dir: C:\Users\denis\OneDrive\Documents\Website\node_modules\node-sass\build npm ERR! gyp verb build dir "build" dir needed to be created? Yes npm ERR! gyp verb find VS msvs_version was set from command line or npm config npm ERR! gyp verb find VS - looking for Visual Studio version 2022 npm ERR! gyp verb find VS running in VS Command Prompt, installation path is: npm ERR! gyp verb find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\Community" npm ERR! gyp verb find VS - will only use this version npm ERR! gyp verb find VS checking VS2022 (17.4.33122.133) found at: npm ERR! gyp verb find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools" npm ERR! gyp verb find VS - found "Visual Studio C++ core features" npm ERR! 
gyp verb find VS - found VC++ toolset: v143 npm ERR! gyp verb find VS - missing any Windows SDK npm ERR! gyp verb find VS could not find a version of Visual Studio 2017 or newer to use npm ERR! gyp verb find VS looking for Visual Studio 2015 npm ERR! gyp verb find VS - not found npm ERR! gyp verb find VS not looking for VS2013 as it is only supported up to Node.js 8 npm ERR! gyp ERR! find VS npm ERR! gyp ERR! find VS msvs_version was set from command line or npm config npm ERR! gyp ERR! find VS - looking for Visual Studio version 2022 npm ERR! gyp ERR! find VS running in VS Command Prompt, installation path is: npm ERR! gyp ERR! find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\Community" npm ERR! gyp ERR! find VS - will only use this version npm ERR! gyp ERR! find VS checking VS2022 (17.4.33122.133) found at: npm ERR! gyp ERR! find VS "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools" npm ERR! gyp ERR! find VS - found "Visual Studio C++ core features" npm ERR! gyp ERR! find VS - found VC++ toolset: v143 npm ERR! gyp ERR! find VS - missing any Windows SDK npm ERR! gyp ERR! find VS could not find a version of Visual Studio 2017 or newer to use npm ERR! gyp ERR! find VS looking for Visual Studio 2015 npm ERR! gyp ERR! find VS - not found npm ERR! gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8 npm ERR! gyp ERR! find VS msvs_version does not match this VS Command Prompt or the npm ERR! gyp ERR! find VS installation cannot be used. npm ERR! gyp ERR! find VS npm ERR! gyp ERR! find VS ************************************************************** npm ERR! gyp ERR! find VS You need to install the latest version of Visual Studio npm ERR! gyp ERR! find VS including the "Desktop development with C++" workload. npm ERR! gyp ERR! find VS For more information consult the documentation at: npm ERR! gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows npm ERR! gyp ERR! find VS ************************************************************** npm ERR! gyp ERR! find VS npm ERR! gyp ERR! configure error npm ERR! gyp ERR! stack Error: Could not find any Visual Studio installation to use npm ERR! gyp ERR! stack at VisualStudioFinder.fail (C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:122:47) npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:75:16 npm ERR! gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:363:14) npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:71:14 npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\find-visualstudio.js:384:16 npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\util.js:54:7 npm ERR! gyp ERR! stack at C:\Users\denis\OneDrive\Documents\Website\node_modules\node-gyp\lib\util.js:33:16 npm ERR! gyp ERR! stack at ChildProcess.exithandler (node:child_process:427:5) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:513:28) npm ERR! gyp ERR! stack at maybeClose (node:internal/child_process:1098:16) npm ERR! gyp ERR! stack at ChildProcess._handle.onexit (node:internal/child_process:304:5) npm ERR! gyp ERR! System Windows_NT 10.0.22621 npm ERR! gyp ERR! 
command "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\denis\\OneDrive\\Documents\\Website\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" npm ERR! gyp ERR! cwd C:\Users\denis\OneDrive\Documents\Website\node_modules\node-sass npm ERR! gyp ERR! node -v v19.2.0 npm ERR! gyp ERR! node-gyp -v v8.4.1 npm ERR! gyp ERR! not ok npm ERR! Build failed with error code: 1 System Information Build tools 2022 Installed + SDK for windows 11 Env path Visual Studio Code output of python and msvs_version All mentioned in pictures.
[ "It seems it has something to do with Windows 11, running a VM of Win10 Pro where it executes perfectly with the latest packages.\nPS: Windows Build Tools are now embedded in the latest Node, so no need to install them manually. Just Node, Git, Visual Studio Code and restart for PATH to update automatically.\n" ]
[ 0 ]
[]
[]
[ "node.js", "npm", "python" ]
stackoverflow_0074632361_node.js_npm_python.txt
Q: update a value by running through every row in a data frame with conditions (extension) This is an extension to the question, 'update a value by running through every row in a data frame with conditions' update a value by running through every row in a data frame with conditions Working with the same data frame: Df: Index A B A_yes B_yes 0 2.43 1.55 1 0 1 2.58 1.49 0 1 2 1.61 2.32 1 0 3 2.7 1.46 1 0 I've attached an image of the new conditions. (For example, for the first row: I have the 500 at the start, halve it into two 250's. I take one of the 250's and multiply this by the row to get 607.5 but then before proceeding onto the next row, I add the other half (250) so now I have 857.5. Then I continue this pattern through all the rows.) Desired output: Df: Index A B A_yes B_yes Points 0 2.43 1.55 1 0 857.5 1 2.58 1.49 0 1 1067.5875 2 1.61 2.32 1 0 1393.2017 3 2.7 1.46 1 0 2577.42312 A: You could try the following with df your dataframe: start = 500 weights = (df["A"].where(df["A_yes"].eq(1), df["B"]) * 0.5 + 0.5).cumprod() df["Points"] = start * weights Use .where to select between the value in A or B based on the A_yes and B_yes entries (my assumption here is that there's a 1 either in A_yes or B_yes, but not in both). Calculate the average between the selected value and 1. Build the actual weights with .cumprod. Multiply the weights with the start value to get the Points. Result for df = A B A_yes B_yes 0 2.43 1.55 1 0 1 2.58 1.49 0 1 2 1.61 2.32 1 0 3 2.70 1.46 1 0 is A B A_yes B_yes Points 0 2.43 1.55 1 0 857.500000 1 2.58 1.49 0 1 1067.587500 2 1.61 2.32 1 0 1393.201688 3 2.70 1.46 1 0 2577.423122
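As a cross-check on the vectorized answer, the same halve-multiply-carry rule can be written as an explicit loop; this sketch assumes, like the answer, that exactly one of A_yes/B_yes is 1 per row:
points = 500.0
for _, row in df.iterrows():
    rate = row["A"] if row["A_yes"] == 1 else row["B"]
    points = (points / 2) * rate + points / 2  # half is multiplied by the odds, half carries over
print(points)  # 2577.423... for the four sample rows, matching the cumprod result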
update a value by running through every row in a data frame with conditions (extension)
This is an extension to the question, 'update a value by running through every row in a data frame with conditions' update a value by running through every row in a data frame with conditions Working with the same data frame: Df: Index A B A_yes B_yes 0 2.43 1.55 1 0 1 2.58 1.49 0 1 2 1.61 2.32 1 0 3 2.7 1.46 1 0 I've attached an image of the new conditions. (For example, for the first row: I have the 500 at the start, halve it into two 250's. I take one of the 250's and multiply this by the row to get 607.5 but then before proceeding onto the next row, I add the other half (250) so now I have 857.5. Then I continue this pattern through all the rows.) Desired output: Df: Index A B A_yes B_yes Points 0 2.43 1.55 1 0 857.5 1 2.58 1.49 0 1 1067.5875 2 1.61 2.32 1 0 1393.2017 3 2.7 1.46 1 0 2577.42312
[ "You could try the following with df your dataframe:\nstart = 500\nweights = (df[\"A\"].where(df[\"A_yes\"].eq(1), df[\"B\"]) * 0.5 + 0.5).cumprod()\ndf[\"Points\"] = start * weights\n\n\nUse .where to select between the value in A or B based on the A_yes and B_yes entries (my assumption here is that there's a 1 either in A_yes or B_yes, but not in both).\nCalculate the average between the selected value and 1.\nBuild the actual weights with .cumprod.\nMultiply the weights with the start value to get the Points.\n\nResult for df =\n A B A_yes B_yes\n0 2.43 1.55 1 0\n1 2.58 1.49 0 1\n2 1.61 2.32 1 0\n3 2.70 1.46 1 0\n\nis\n A B A_yes B_yes Points\n0 2.43 1.55 1 0 857.500000\n1 2.58 1.49 0 1 1067.587500\n2 1.61 2.32 1 0 1393.201688\n3 2.70 1.46 1 0 2577.423122\n\n" ]
[ 0 ]
[]
[]
[ "conditional_statements", "dataframe", "loops", "pandas", "python" ]
stackoverflow_0074662788_conditional_statements_dataframe_loops_pandas_python.txt
Q: How to access all the files from a directory using Python? for i in os.listdir(r"C:\Users\Xmall\same-resume-year-wise-master\same-resume-year-wise-master"): print(i) if i.endswith('.pdf'): a=open(i) s=PyPDF2.PdfFileReader(a) for j in range(s.numPages): z=s.getPage(j) er=z.extractText() print(re.findall('\S+@\S+',er)) I am not able to read the files using this code. I want to extract e-mail addresses from the PDFs. A: I'll post two solutions using two different libraries. I'm not sure if this is exactly what you are looking for, but it could lead you somewhere. #extract email addresses using PyPDF2 def extractEmails(pdfFile): pdfReader = PyPDF2.PdfFileReader(pdfFile) emails = [] for pageNum in range(pdfReader.numPages): pageObj = pdfReader.getPage(pageNum) text = pageObj.extractText() emailRegex = re.compile(r'[\w\.-]+@[\w\.-]+') mo = emailRegex.findall(text) if mo != []: emails.extend(mo) return emails #extract email addresses using pdfminer (pdfminer.six) def extractEmails2(pdfFile): from pdfminer.high_level import extract_text text = extract_text(pdfFile) emailRegex = re.compile(r'[\w\.-]+@[\w\.-]+') return emailRegex.findall(text)
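A likely root cause in the question's loop is that os.listdir returns bare file names, so open(i) looks in the current working directory instead of the target folder, and PyPDF2 also expects binary mode. A minimal corrected sketch that keeps the question's legacy PyPDF2 API:
import os
import re
import PyPDF2

folder = r"C:\Users\Xmall\same-resume-year-wise-master\same-resume-year-wise-master"
emails = []
for name in os.listdir(folder):
    if name.endswith(".pdf"):
        # join the directory path and open in binary mode
        with open(os.path.join(folder, name), "rb") as f:
            reader = PyPDF2.PdfFileReader(f)
            for page_num in range(reader.numPages):
                text = reader.getPage(page_num).extractText()
                emails.extend(re.findall(r"[\w.-]+@[\w.-]+", text))
print(emails)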
How to access all the files from a directory using Python?
for i in os.listdir(r"C:\Users\Xmall\same-resume-year-wise-master\same-resume-year-wise-master"): print(i) if i.endswith('.pdf'): a=open(i) s=PyPDF2.PdfFileReader(a) for j in range(s.numPages): z=s.getPage(j) er=z.extractText() print(re.findall('\S+@\S+',er)) I am not able to read the file using this code I want to extract E-mail address from the pdf
[ "I'll post two solutions using two different libraries .\nI'm not sure if this will work , or is what you are looking for but it could lead you somewhere .\n #extract email addresses using pyPDF2\ndef extractEmails(pdfFile):\n pdfReader = PyPDF2.PdfFileReader(pdfFile)\n emails = []\n for pageNum in range(pdfReader.numPages):\n pageObj = pdfReader.getPage(pageNum)\n text = pageObj.extractText()\n emailRegex = re.compile(r'[\\w\\.-]+@[\\w\\.-]+')\n mo = emailRegex.findall(text)\n if mo != []:\n emails.extend(mo)\n return emails\n#extract email addresses using pdfminer\ndef extractEmails2(pdfFile):\n emails = []\n for pageNum in range(pdfReader.numPages):\n pageObj = pdfReader.getPage(pageNum)\n text = pageObj.extractText()\n emailRegex = re.compile(r'[\\w\\.-]+@[\\w\\.-]+')\n mo = emailRegex.findall(text)\n if mo != []:\n emails.extend(mo)\n return emails\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074667852_python.txt
Q: python sqlite3 update binary field SELECT from one database sqlite3 gives me result: b'\n\x0b24 JUN 1974"-\x08\x01\x10\x00\x18\x00 \x18(\x060\xb6\x0f8\x00@\x00H\x00P\x00X\xbf\x84=`\x00h\x00p\x00x\x00\x80\x01\x00\x88\x01\x01\x90\x01\xa0\xdc\x90^' This is field data (VARCHAR(255)) of database My heritage and i want to save the same result to the same database. conn = sqlite3.connect(robocza+"database.ftb") conn.text_factory = bytes cursor = conn.cursor() cursor.execute( "SELECT date FROM individual_fact_main_data where guid ='xxxxxx' " ) rows = cursor.fetchone() given result: b'\n\x0b24 JUN 1974"-\x08\x01\x10\x00\x18\x00 \x18(\x060\xb6\x0f8\x00@\x00H\x00P\x00X\xbf\x84=`\x00h\x00p\x00x\x00\x80\x01\x00\x88\x01\x01\x90\x01\xa0\xdc\x90^' How can i do command UPDATE in the same but new database to write give result properly? A: To update a field in a SQLite database using Python, you can use the UPDATE statement in combination with the execute() method provided by the sqlite3 module. For example, if you want to update the date field of the individual_fact_main_data table in your database, you can use the following Python code: import sqlite3 # Connect to the database conn = sqlite3.connect(robocza+"database.ftb") conn.text_factory = bytes # Create a cursor object cursor = conn.cursor() # Define the UPDATE query query = "UPDATE individual_fact_main_data SET date = ? WHERE guid = ?" # Define the values to update values = (b'\n\x0b24 JUN 1974"-\x08\x01\x10\x00\x18\x00 \x18(\x060\xb6\x0f8\x00@\x00H\x00P\x00X\xbf\x84=`\x00h\x00p\x00x\x00\x80\x01\x00\x88\x01\x01\x90\x01\xa0\xdc\x90^', 'xxxxxx') # Execute the UPDATE query cursor.execute(query, values) # Save the changes to the database conn.commit() # Close the database connection conn.close() This code will update the date field of the individual_fact_main_data table with the specified value, where the guid field has the value xxxxxx. The b'\n\x0b24 JUN 1974"-\x08\x01\x10\x00\x18\x00 \x18(\x060\xb6\x0f8\x00@\x00H\x00P\x00X\xbf\x84=\x00h\x00p\x00x\x00\x80\x01\x00\x88\x01\x01\x90\x01\xa0\xdc\x90^'` value is the binary representation of the date, which is the default representation for VARCHAR(255) fields in SQLite. For more information about the UPDATE statement and other SQLite commands, you can refer to the SQLite documentation.
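To confirm the bytes survive the write unchanged, a quick round-trip check after conn.commit() helps; this sketch reuses the table and guid from the question, so the names are specific to that database:
cursor.execute("SELECT date FROM individual_fact_main_data WHERE guid = ?", ('xxxxxx',))
stored = cursor.fetchone()[0]
assert stored == values[0]  # parameter binding stores the BLOB byte-for-byte
Passing the bytes object as a bound parameter (the ? placeholder) is what keeps sqlite3 from decoding or re-encoding it; interpolating it into the SQL string directly would corrupt it.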
python sqlite3 update binary field
SELECT from one database sqlite3 gives me result: b'\n\x0b24 JUN 1974"-\x08\x01\x10\x00\x18\x00 \x18(\x060\xb6\x0f8\x00@\x00H\x00P\x00X\xbf\x84=`\x00h\x00p\x00x\x00\x80\x01\x00\x88\x01\x01\x90\x01\xa0\xdc\x90^' This is the data of a field (VARCHAR(255)) in a MyHeritage database, and I want to save the same result back to the same database. conn = sqlite3.connect(robocza+"database.ftb") conn.text_factory = bytes cursor = conn.cursor() cursor.execute( "SELECT date FROM individual_fact_main_data where guid ='xxxxxx' " ) rows = cursor.fetchone() given result: b'\n\x0b24 JUN 1974"-\x08\x01\x10\x00\x18\x00 \x18(\x060\xb6\x0f8\x00@\x00H\x00P\x00X\xbf\x84=`\x00h\x00p\x00x\x00\x80\x01\x00\x88\x01\x01\x90\x01\xa0\xdc\x90^' How can I run an UPDATE command on the same (but new) database so that the given result is written properly?
[ "To update a field in a SQLite database using Python, you can use the UPDATE statement in combination with the execute() method provided by the sqlite3 module.\nFor example, if you want to update the date field of the individual_fact_main_data table in your database, you can use the following Python code:\nimport sqlite3\n\n# Connect to the database\nconn = sqlite3.connect(robocza+\"database.ftb\")\nconn.text_factory = bytes\n\n# Create a cursor object\ncursor = conn.cursor()\n\n# Define the UPDATE query\nquery = \"UPDATE individual_fact_main_data SET date = ? WHERE guid = ?\"\n\n# Define the values to update\nvalues = (b'\\n\\x0b24 JUN 1974\"-\\x08\\x01\\x10\\x00\\x18\\x00 \\x18(\\x060\\xb6\\x0f8\\x00@\\x00H\\x00P\\x00X\\xbf\\x84=`\\x00h\\x00p\\x00x\\x00\\x80\\x01\\x00\\x88\\x01\\x01\\x90\\x01\\xa0\\xdc\\x90^', 'xxxxxx')\n\n# Execute the UPDATE query\ncursor.execute(query, values)\n\n# Save the changes to the database\nconn.commit()\n\n# Close the database connection\nconn.close()\n\nThis code will update the date field of the individual_fact_main_data table with the specified value, where the guid field has the value xxxxxx. The b'\\n\\x0b24 JUN 1974\"-\\x08\\x01\\x10\\x00\\x18\\x00 \\x18(\\x060\\xb6\\x0f8\\x00@\\x00H\\x00P\\x00X\\xbf\\x84=\\x00h\\x00p\\x00x\\x00\\x80\\x01\\x00\\x88\\x01\\x01\\x90\\x01\\xa0\\xdc\\x90^'` value is the binary representation of the date, which is the default representation for VARCHAR(255) fields in SQLite.\nFor more information about the UPDATE statement and other SQLite commands, you can refer to the SQLite documentation.\n" ]
[ 0 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0074667914_python_sqlite.txt
Q: How to group by data in a column with pandas? I have a table with 8,000 rows of data and a small sample of it here: Customer ItemDescription Invoice PurchaseDate 1064 Produce 55514 22-01 1064 Snack 55514 22-01 1080 Drink 56511 23-01 1080 Snack 56511 23-01 1230 Drink 55551 26-03 1230 Snack 55551 26-03 1128 Meat 55003 04-03 1128 Snack 55003 04-03 1229 Drink 55100 06-03 1229 Snack 55100 06-03 1230 Meat 55102 07-03 1230 Snack 55102 07-03 I am trying to find the top 3 items that customers have bought along with "Snack". So the printed result should look like this: 0 Drink 1 Meat 2 Produce I have tried df.groupby but it doesn't sort them based on what was purchased along with "snacks". A: You can use groupby. By using groupby, you can group the products according to the customers and store them in the form of a list. dfx=df.groupby('Customer').agg({'ItemDescription':list}) ''' ItemDescription Customer 1064 [Produce, Snack] 1080 [Drink, Snack] 1128 [Meat, Snack] 1229 [Drink, Snack] 1230 [Drink, Snack, Meat, Snack] ''' Here we will need to filter out customers who have not purchased a snack. dfx=dfx[pd.DataFrame(dfx.ItemDescription.tolist()).isin(['Snack']).any(1).values] # https://stackoverflow.com/a/53343080/15415267 Then, convert the remaining rows into a list and get distributions with the Counter function. products=dfx.explode('ItemDescription')['ItemDescription'].to_list() #['Produce', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack', 'Drink', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack'] from collections import Counter occurence_count = Counter(top) occurence_count.most_common(4) #get top 4 product #[('Snack', 6), ('Drink', 3), ('Meat', 2), ('Produce', 1)] If you convert results to dataframe: final =pd.DataFrame(occurence_count.most_common(4),columns=['product','count']) ''' product count 0 Snack 6 1 Drink 3 2 Meat 2 3 Produce 1 ''' or (shorter): dfx=df.groupby('Customer').agg({'ItemDescription':list}) ''' ItemDescription Customer 1064 [Produce, Snack] 1080 [Drink, Snack] 1128 [Meat, Snack] 1229 [Drink, Snack] 1230 [Drink, Snack, Meat, Snack] ''' dfx=dfx[pd.DataFrame(dfx.ItemDescription.tolist()).isin(['Snack']).any(1).values] # https://stackoverflow.com/a/53343080/15415267 products=dfx2.explode('ItemDescription')['ItemDescription'].value_counts()[0:4] ''' ItemDescription Snack 6 Drink 3 Meat 2 Produce 1 ''' A: To find the top 3 items that customers have bought along with "Snack", you can use the groupby() and value_counts() methods in pandas. 
Here is an example of how you can do this: import pandas as pd # Create a sample DataFrame df = pd.DataFrame({'Customer': [1064, 1064, 1080, 1080, 1230, 1230, 1128, 1128, 1229, 1229, 1230, 1230], 'ItemDescription': ['Produce', 'Snack', 'Drink', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack'], 'Invoice': [55514, 55514, 56511, 56511, 55551, 55551, 55003, 55003, 55100, 55100, 55102, 55102], 'PurchaseDate': ['22-01', '22-01', '23-01', '23-01', '26-03', '26-03', '04-03', '04-03', '06-03', '06-03', '07-03', '07-03']}) # Group the data by Customer df_grouped = df.groupby('Customer') # Create a dictionary to store the counts of items bought along with "Snack" for each customer item_counts = {} # Loop through each customer group for customer, group in df_grouped: # Create a new DataFrame that only includes rows where the ItemDescription is "Snack" snacks = group[group['ItemDescription'] == 'Snack'] # Loop through each row in the snacks DataFrame for index, row in snacks.iterrows(): # Get the Invoice number for the current row invoice = row['Invoice'] # Get the rows in the original DataFrame that have the same Invoice number as the current row invoice_rows = df[df['Invoice'] == invoice] # Loop through each row in the invoice_rows DataFrame for i, r in invoice_rows.iterrows(): # If the ItemDescription is not "Snack", increment the count for that item in the item_counts dictionary if r['ItemDescription'] != 'Snack': item = r['ItemDescription'] if item not in item_counts: item_counts[item] = 0 item_counts[item] += 1 # Sort the item_counts dictionary by value in descending order sorted_items = sorted(item_counts.items(), key=lambda x: x[1], reverse=True) # Print the top 3 items that customers have bought along with "Snack" for i in range(3): print(i, sorted_items[i][0]) In the example above, the data in the DataFrame is first grouped by the values in the Customer column. Then, for each customer group, the rows where the ItemDescription is "Snack" are extracted and stored in a new DataFrame. For each row in the snacks DataFrame, the rows in the original DataFrame that have the same Invoice number are extracted and stored in a new DataFrame. Finally, for each row in the invoice_rows DataFrame, the ItemDescription is checked. If the ItemDescription is not "Snack", the count for that item is incremented in the item_counts dictionary. After all the customer groups have been processed, the item_counts dictionary is sorted by value in descending order, and the top 3 items are printed. A: IIUC your requirement is to get the top items that were taken along with "Snack". So you first want to filter those Customers who didn't buy snack. Use groupby.filter for that. And then you want to compute the top items in terms of count. For that you use value_counts and then you take top 3 other than "Snack". s = ( df.groupby("Customer") .filter(lambda x: "Snack" not in x)["ItemDescription"] .value_counts() ) top_3 = s.loc[s.index.difference(["Snack"])][:3] print(top_3): Drink 3 Meat 2 Produce 1
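The answers above mostly walk the data customer by customer, but the pairing actually happens on the invoice, so a shorter route is to collect the invoices that contain a Snack line and count everything else on those invoices. A sketch, assuming "bought along with Snack" means "on the same invoice as a Snack line":
snack_invoices = df.loc[df["ItemDescription"] == "Snack", "Invoice"].unique()
companions = df[df["Invoice"].isin(snack_invoices) & (df["ItemDescription"] != "Snack")]
print(companions["ItemDescription"].value_counts().head(3))
# Drink      3
# Meat       2
# Produce    1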
How to group by data in a column with pandas?
I have a table with 8,000 rows of data and a small sample of it here: Customer ItemDescription Invoice PurchaseDate 1064 Produce 55514 22-01 1064 Snack 55514 22-01 1080 Drink 56511 23-01 1080 Snack 56511 23-01 1230 Drink 55551 26-03 1230 Snack 55551 26-03 1128 Meat 55003 04-03 1128 Snack 55003 04-03 1229 Drink 55100 06-03 1229 Snack 55100 06-03 1230 Meat 55102 07-03 1230 Snack 55102 07-03 I am trying to find the top 3 items that customers have bought along with "Snack". So the printed result should look like this: 0 Drink 1 Meat 2 Produce I have tried df.groupby but it doesn't sort them based on what was purchased along with "snacks".
[ "You can use groupby. By using groupby, you can group the products according to the customers and store them in the form of a list.\ndfx=df.groupby('Customer').agg({'ItemDescription':list})\n'''\n ItemDescription\nCustomer \n1064 [Produce, Snack]\n1080 [Drink, Snack]\n1128 [Meat, Snack]\n1229 [Drink, Snack]\n1230 [Drink, Snack, Meat, Snack]\n'''\n\nHere we will need to filter out customers who have not purchased a snack.\ndfx=dfx[pd.DataFrame(dfx.ItemDescription.tolist()).isin(['Snack']).any(1).values] # https://stackoverflow.com/a/53343080/15415267\n\nThen, convert the remaining rows into a list and get distributions with the Counter function.\nproducts=dfx.explode('ItemDescription')['ItemDescription'].to_list()\n#['Produce', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack', 'Drink', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack']\n\nfrom collections import Counter\noccurence_count = Counter(top)\noccurence_count.most_common(4) #get top 4 product \n#[('Snack', 6), ('Drink', 3), ('Meat', 2), ('Produce', 1)]\n\nIf you convert results to dataframe:\nfinal =pd.DataFrame(occurence_count.most_common(4),columns=['product','count'])\n'''\n product count\n0 Snack 6\n1 Drink 3\n2 Meat 2\n3 Produce 1\n\n'''\n\nor (shorter):\ndfx=df.groupby('Customer').agg({'ItemDescription':list})\n'''\n ItemDescription\nCustomer \n1064 [Produce, Snack]\n1080 [Drink, Snack]\n1128 [Meat, Snack]\n1229 [Drink, Snack]\n1230 [Drink, Snack, Meat, Snack]\n'''\n\ndfx=dfx[pd.DataFrame(dfx.ItemDescription.tolist()).isin(['Snack']).any(1).values] # https://stackoverflow.com/a/53343080/15415267\nproducts=dfx2.explode('ItemDescription')['ItemDescription'].value_counts()[0:4]\n'''\n ItemDescription\nSnack 6\nDrink 3\nMeat 2\nProduce 1\n\n'''\n\n\n", "To find the top 3 items that customers have bought along with \"Snack\", you can use the groupby() and value_counts() methods in pandas. 
Here is an example of how you can do this:\nimport pandas as pd\n\n# Create a sample DataFrame\ndf = pd.DataFrame({'Customer': [1064, 1064, 1080, 1080, 1230, 1230, 1128, 1128, 1229, 1229, 1230, 1230],\n 'ItemDescription': ['Produce', 'Snack', 'Drink', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack', 'Drink', 'Snack', 'Meat', 'Snack'],\n 'Invoice': [55514, 55514, 56511, 56511, 55551, 55551, 55003, 55003, 55100, 55100, 55102, 55102],\n 'PurchaseDate': ['22-01', '22-01', '23-01', '23-01', '26-03', '26-03', '04-03', '04-03', '06-03', '06-03', '07-03', '07-03']})\n\n# Group the data by Customer\ndf_grouped = df.groupby('Customer')\n\n# Create a dictionary to store the counts of items bought along with \"Snack\" for each customer\nitem_counts = {}\n\n# Loop through each customer group\nfor customer, group in df_grouped:\n # Create a new DataFrame that only includes rows where the ItemDescription is \"Snack\"\n snacks = group[group['ItemDescription'] == 'Snack']\n\n # Loop through each row in the snacks DataFrame\n for index, row in snacks.iterrows():\n # Get the Invoice number for the current row\n invoice = row['Invoice']\n\n # Get the rows in the original DataFrame that have the same Invoice number as the current row\n invoice_rows = df[df['Invoice'] == invoice]\n\n # Loop through each row in the invoice_rows DataFrame\n for i, r in invoice_rows.iterrows():\n # If the ItemDescription is not \"Snack\", increment the count for that item in the item_counts dictionary\n if r['ItemDescription'] != 'Snack':\n item = r['ItemDescription']\n if item not in item_counts:\n item_counts[item] = 0\n item_counts[item] += 1\n\n# Sort the item_counts dictionary by value in descending order\nsorted_items = sorted(item_counts.items(), key=lambda x: x[1], reverse=True)\n\n# Print the top 3 items that customers have bought along with \"Snack\"\nfor i in range(3):\n print(i, sorted_items[i][0])\n\nIn the example above, the data in the DataFrame is first grouped by the values in the Customer column.\nThen, for each customer group, the rows where the ItemDescription is \"Snack\" are extracted and stored in a new DataFrame.\nFor each row in the snacks DataFrame, the rows in the original DataFrame that have the same Invoice number are extracted and stored in a new DataFrame.\nFinally, for each row in the invoice_rows DataFrame, the ItemDescription is checked. If the ItemDescription is not \"Snack\", the count for that item is incremented in the item_counts dictionary. After all the customer groups have been processed, the item_counts dictionary is sorted by value in descending order, and the top 3 items are printed.\n", "IIUC your requirement is to get the top items that were taken along with \"Snack\". So you first want to filter those Customers who didn't buy snack.\nUse groupby.filter for that. And then you want to compute the top items in terms of count. For that you use value_counts and then you take top 3 other than \"Snack\".\ns = (\n df.groupby(\"Customer\")\n .filter(lambda x: \"Snack\" not in x)[\"ItemDescription\"]\n .value_counts()\n)\ntop_3 = s.loc[s.index.difference([\"Snack\"])][:3]\n\nprint(top_3):\nDrink 3\nMeat 2\nProduce 1\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074666835_dataframe_pandas_python.txt
Q: Fit data with a function that equals 0 and could not be converted to the form f(x) = x I have 2 columns and 31 rows in a pandas dataframe. I want to plot this x,y data and fit them to a complex function with 4 parameters. The function looks something like this. The function has to be 0 # Data: T,p = df["T"], df["p"] #31 rows # known constants: a,b,Ta,c0,x def c(T,v,VP,a=...,b=...,Ta=...,c0=...): c = c0 + a*(T-Ta) + b*t_r(T)**v/(t_r(T)**v-K(T,VP)) return c # t_r and K are other functions def function(p,T,p0,v,N,VP,a,b,c0,x): return np.log(1-p) + p + (x*c(T,v,VP))*p**2 + (p0/N)*(R-0.5*R**3) # =0 I am interested in fitting the parameters N,p0,Vp I tried to use Lmfit and changed my function to -> function(params,T,p) from Lmfit import minimize, Parameters ## add Parameters params = Parameters() ##Class with a list of parameters # add all constant Parameters with vary = False params.add("a", value=...,vary=False) ... ## add variables to fit with vary = True, limits with min,max params.add("N",value=..., vary=True,min=0,max=...) ... output = minimize(function,params) #Fit Results output.params.pretty_print() #Show Results Now I acquired the parameters, but I want to check if this makes sense by plot(T,p) for a more continous array, like: Ts = np.linspace(10,60,1000) # x-array ps = ... ? # y-array plt.plot(Ts,ps,label="Fit") # Plot Data How could I obtain a function to calculate p for T on each point to plot it ? A: I find an answer myself. First I wrap my function in a way that the y-value p is the first argument. And I used lmfit parameter class as arguments. Lmfit Parameters are basically dictionaries. p_solvable = lambda p,T,parameter : function(p,T,parameter["p0"],parameter["v"],...) Then I solve the equation by scipy.optimize.root_scalar with the brentq method. The brentq method needs Brackets where the signs are changing. I chose 1 as lower limit so np.log(1-p > 0) is defined and just the doubled maximum as upper limit. p_max = np.max(p) def p(T): P_Init = [root_scalar(p_solvable,args=(T,parameter), method="brentq", bracket=[1,p_max*2]).root for T in T] return P_Init Now I have a function where I can input f(x) = y and fit it with lmfit or scipy curve_fit
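For readers wanting to adapt this pattern, here is a generic, self-contained sketch of tracing an implicit relation f(p, T) = 0 by root-finding at each T; the function and bracket below are placeholders, not the poster's actual model:
import numpy as np
from scipy.optimize import root_scalar

def f(p, T):
    return np.log(p) + p - 0.05 * T  # placeholder implicit relation

Ts = np.linspace(10, 60, 1000)
ps = [root_scalar(f, args=(T,), method="brentq", bracket=[1e-6, 10]).root for T in Ts]
brentq requires a bracket on which f changes sign, so in practice the bracket has to come from the physics of the model, as the self-answer does with [1, p_max*2].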
Fit data with a function that equals 0 and cannot be converted to the form y = f(x)
I have 2 columns and 31 rows in a pandas dataframe. I want to plot this x,y data and fit them to a complex function with 4 parameters. The function looks something like this. The function has to be 0 # Data: T,p = df["T"], df["p"] #31 rows # known constants: a,b,Ta,c0,x def c(T,v,VP,a=...,b=...,Ta=...,c0=...): c = c0 + a*(T-Ta) + b*t_r(T)**v/(t_r(T)**v-K(T,VP)) return c # t_r and K are other functions def function(p,T,p0,v,N,VP,a,b,c0,x): return np.log(1-p) + p + (x*c(T,v,VP))*p**2 + (p0/N)*(R-0.5*R**3) # =0 I am interested in fitting the parameters N,p0,Vp I tried to use Lmfit and changed my function to -> function(params,T,p) from Lmfit import minimize, Parameters ## add Parameters params = Parameters() ##Class with a list of parameters # add all constant Parameters with vary = False params.add("a", value=...,vary=False) ... ## add variables to fit with vary = True, limits with min,max params.add("N",value=..., vary=True,min=0,max=...) ... output = minimize(function,params) #Fit Results output.params.pretty_print() #Show Results Now I acquired the parameters, but I want to check if this makes sense by plot(T,p) for a more continous array, like: Ts = np.linspace(10,60,1000) # x-array ps = ... ? # y-array plt.plot(Ts,ps,label="Fit") # Plot Data How could I obtain a function to calculate p for T on each point to plot it ?
[ "I find an answer myself.\nFirst I wrap my function in a way that the y-value p is the first argument.\nAnd I used lmfit parameter class as arguments. Lmfit Parameters are basically dictionaries.\np_solvable = lambda p,T,parameter : function(p,T,parameter[\"p0\"],parameter[\"v\"],...)\n\nThen I solve the equation by scipy.optimize.root_scalar with the brentq method.\nThe brentq method needs Brackets where the signs are changing. I chose 1 as lower limit so np.log(1-p > 0) is defined and just the doubled maximum as upper limit.\np_max = np.max(p)\n\ndef p(T):\n P_Init = [root_scalar(p_solvable,args=(T,parameter), method=\"brentq\", bracket=[1,p_max*2]).root for T in T]\n return P_Init\n\nNow I have a function where I can input f(x) = y and fit it with lmfit or scipy curve_fit\n" ]
[ 0 ]
[]
[]
[ "curve_fitting", "lmfit", "pandas", "python", "scipy_optimize" ]
stackoverflow_0074437309_curve_fitting_lmfit_pandas_python_scipy_optimize.txt
Q: How to freeze a requirement with pipenv? For example we have some pipfile (below) and I'd like to freeze the django version. We don't have a requirements.txt and we only use pipenv. How can I freeze the django version? [[source]] url = "https://pypi.org/simple" verify_ssl = true name = "pypi" [packages] django = "*" [dev-packages] black = "*" [requires] python_version = "3.6" A: Pipenv do natively implement freezing requirements.txt. It is as simple as: pipenv lock -r > requirements.txt A: Assuming you have your virtual environment activated, you have three simple approaches. I will list them from less verbose to more verbose. pip $ pip freeze > requirements.txt pip3 $ pip3 freeze > requirements.txt If a virtual environment is active, pip is most certainly equivalent to pip3. pipenv run $ pipenv run pip freeze > requirements.txt $ pipenv run pip3 freeze > requirements.txt pipenv run spawns a command installed into the virtual environment, so these commands are equivalent to the ones run without pipenv run. Once again, it is assumed that your virtual environment is active. A: As of v2022.8.13 of pipenv, the "old" lock -r functionality has been removed. Going forward, this should be accomplished with: pipenv requirements > requirements.txt A: By using run You can run given command from virtualenv, with any arguments forwarded $ pipenv run pip freeze > requirements.txt A: Recent pipenv versions (e.g. version 2022.6.7) are using the requirements subcommand and pipenv lock -r is deprecated. To freeze default dependencies pipenv requirements > requirements.txt to freeze development dependencies as well pipenv requirements --dev > dev-requirements.txt A: It's as simple as changing django = "*" to django = "your-preferred-version". So if you wanted to freeze it to 2.1, the latest release at the time of this writing, you could do this: [packages] django="2.1" The pipfile Git repo has some good examples of different ways to specify version strings: https://github.com/pypa/pipfile#pipfile Note that when you generate a lockfile from your pipfile, that lockfile is actually the file that's supposed to "freeze" your dependency to a specific version. That way, you don't have to concern yourself with which version works with your code, since by distributing the lockfile everyone else must use the same dependency versions as you. The developers of pipenv intended for developers to use it like this A: first, you ensure that your virtual environment is active then you open the terminal and run the command pip3 freeze > reqirements.txt (pip3) pip3 freeze > reqirements.txt (pip3) A: This is the way that I was prompted by pipenv to generate a requirements.txt file from the project's Pipfile: pipenv lock --requirements A: Use this as -r flag is deprecated pipenv requirements > requirements.txt A: pipenv run python -m pip freeze > requirements.txt
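If the underlying goal is to stop Django from drifting while still receiving patch releases, the Pipfile itself can carry a compatible-release pin; this is standard PEP 440 specifier syntax, shown here against an assumed 2.1 baseline:
[packages]
django = "~=2.1.0"  # allows 2.1.x patch updates, blocks 2.2 and later
After editing the pin, pipenv lock regenerates Pipfile.lock, which is the file that actually freezes the exact resolved versions.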
How to freeze a requirement with pipenv?
For example we have some pipfile (below) and I'd like to freeze the django version. We don't have a requirements.txt and we only use pipenv. How can I freeze the django version? [[source]] url = "https://pypi.org/simple" verify_ssl = true name = "pypi" [packages] django = "*" [dev-packages] black = "*" [requires] python_version = "3.6"
[ "Pipenv do natively implement freezing requirements.txt.\nIt is as simple as:\npipenv lock -r > requirements.txt\n\n", "Assuming you have your virtual environment activated, you have three simple approaches. I will list them from less verbose to more verbose.\npip\n$ pip freeze > requirements.txt\n\npip3\n$ pip3 freeze > requirements.txt\n\nIf a virtual environment is active, pip is most certainly equivalent to pip3.\npipenv run\n$ pipenv run pip freeze > requirements.txt\n$ pipenv run pip3 freeze > requirements.txt\n\npipenv run spawns a command installed into the virtual environment, so these commands are equivalent to the ones run without pipenv run. Once again, it is assumed that your virtual environment is active.\n", "As of v2022.8.13 of pipenv, the \"old\" lock -r functionality has been removed.\nGoing forward, this should be accomplished with:\npipenv requirements > requirements.txt\n\n", "By using run You can run given command from virtualenv, with any arguments forwarded\n$ pipenv run pip freeze > requirements.txt \n\n", "Recent pipenv versions (e.g. version 2022.6.7) are using the requirements subcommand and pipenv lock -r is deprecated.\nTo freeze default dependencies\npipenv requirements > requirements.txt\n\nto freeze development dependencies as well\npipenv requirements --dev > dev-requirements.txt\n\n", "It's as simple as changing django = \"*\" to django = \"your-preferred-version\". So if you wanted to freeze it to 2.1, the latest release at the time of this writing, you could do this:\n[packages]\ndjango=\"2.1\"\n\nThe pipfile Git repo has some good examples of different ways to specify version strings: https://github.com/pypa/pipfile#pipfile\nNote that when you generate a lockfile from your pipfile, that lockfile is actually the file that's supposed to \"freeze\" your dependency to a specific version. That way, you don't have to concern yourself with which version works with your code, since by distributing the lockfile everyone else must use the same dependency versions as you. The developers of pipenv intended for developers to use it like this\n", "first, you ensure that your virtual environment is active then you open the terminal and run the command\npip3 freeze > reqirements.txt (pip3)\npip3 freeze > reqirements.txt (pip3)\n", "This is the way that I was prompted by pipenv to generate a requirements.txt file from the project's Pipfile:\npipenv lock --requirements\n\n", "Use this as -r flag is deprecated\npipenv requirements > requirements.txt\n\n", "pipenv run python -m pip freeze > requirements.txt\n\n" ]
[ 99, 26, 24, 11, 9, 1, 0, 0, 0, 0 ]
[ "You can create a requirements.txt using this command : \npip3 freeze > requirements.txt\n\n" ]
[ -3 ]
[ "pipenv", "pipfile", "python" ]
stackoverflow_0051845562_pipenv_pipfile_python.txt
Q: jupyter notebook can not import keras I have installed Keras and TensorFlow-GPU but when I try to import these libraries into Jupiter notebook there is an error Keras-applications 1.0.8 pypi_0 pypi keras-preprocessing 1.1.2 pypi_0 pypi tensorboard 2.1.1 pypi_0 pypi tensorflow-gpu 2.1.0 pypi_0 pypi tensorflow-gpu-estimator 2.1.0 pypi_0 pypi numpy 1.19.2 pypi_0 pypi opencv-python 4.4.0.44 pypi_0 pypi pip 19.2.3 py37_0 here are the libraries using conda list . and here is the error that jupyter displays to me : ModuleNotFoundError Traceback (most recent call last) in ----> 1 import keras 2 from keras.models import Sequential 3 from keras.layers import Dense, Activation 4 import numpy as np 5 ModuleNotFoundError: No module named 'keras' I try this one in anaconda environment: pip3 install keras Requirement already satisfied: keras in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (2.4.3) Requirement already satisfied: numpy>=1.9.1 in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (1.19.4) Requirement already satisfied: scipy>=0.14 in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (1.5.4) Requirement already satisfied: h5py in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (3.1.0) Requirement already satisfied: pyyaml in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (5.3.1) I'm grateful if you help me. P. S : I realized that in order to import keras /tensorflow from the second version on (tensorflow>=2.0.0 ) i have to use import tensorflow.keras And everything will be good. A: If you're using tensorflow >= 2.0, then import keras using from tensorflow import keras Common convention is to import it as kr A: Can you please tell me if you're using multiple versions of python on the same device, if so please check if you've installed TensorFlow on the same version of python which you're using for jupyter notebook, to check that and install again: Go to the path where you've installed python(which you're using for jupyter notebook) if you've installed anaconda then go to the path where anaconda is installed and follow the procedure. Go to the site-packages folder in the path of Anaconda or python. Check if all the dependencies of TensorFlow and TensorFlow are installed there. 
If you can't find it then add the current python version to environment variables, see: https://www.javatpoint.com/how-to-set-python-path#:~:text=SETTING%20PATH%20IN%20PYTHON%201%20Right%20click%20on,on%20Ok%20button%3A%209%20Click%20on%20Ok%20button%3A and https://www.geeksforgeeks.org/how-to-setup-anaconda-path-to-environment-variable/ After you've added the current version of python to the environment path variables then follow this link to install TensorFlow: https://www.geeksforgeeks.org/how-to-install-python-tensorflow-in-windows/#:~:text=%20%20%201%20Step%201%3A%20Click%20on,done%20with%20the%20use%20of%20following...%20More%20 and https://machinelearningspace.com/installing-tensorflow-2-0-in-anaconda-environment/ Then again follow step 2 and 3 and if it's still not appearing in the site-packages folder then follow this link: https://www.quora.com/How-can-I-work-with-Keras-on-a-Jupyter-notebook-using-Tensorflow-as-backend for some details(not that helpful) Also, Try installing Keras by this command: pip3 install Keras If you're using one version of python then please check if jupyter and TensorFlow are installed in the same Virtual Environment Please tell me if it works or not. A: I am not sure how you have imported keras, but in past even I have faced a similar issue. What I had done is i did something like this import keras which is wrong! We have to import it like this from tensorflow import keras which worked fine for me! Hope this Helps you!
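Before reinstalling anything, it is worth confirming which interpreter the notebook kernel is actually running, since a kernel pointing at a different Python than the one pip installed into produces exactly this error. A quick in-notebook check:
import sys
print(sys.executable)  # interpreter behind this kernel

import tensorflow as tf
print(tf.__version__)  # succeeds only if TF is installed for this interpreter
from tensorflow import keras  # the TF >= 2.0 import style the answers recommend
If sys.executable differs from the Python where pip install tensorflow ran, installing with that exact interpreter (path\to\python -m pip install tensorflow) or registering a matching ipykernel usually resolves it.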
Jupyter notebook cannot import Keras
I have installed Keras and TensorFlow-GPU but when I try to import these libraries into Jupiter notebook there is an error Keras-applications 1.0.8 pypi_0 pypi keras-preprocessing 1.1.2 pypi_0 pypi tensorboard 2.1.1 pypi_0 pypi tensorflow-gpu 2.1.0 pypi_0 pypi tensorflow-gpu-estimator 2.1.0 pypi_0 pypi numpy 1.19.2 pypi_0 pypi opencv-python 4.4.0.44 pypi_0 pypi pip 19.2.3 py37_0 here are the libraries using conda list . and here is the error that jupyter displays to me : ModuleNotFoundError Traceback (most recent call last) in ----> 1 import keras 2 from keras.models import Sequential 3 from keras.layers import Dense, Activation 4 import numpy as np 5 ModuleNotFoundError: No module named 'keras' I try this one in anaconda environment: pip3 install keras Requirement already satisfied: keras in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (2.4.3) Requirement already satisfied: numpy>=1.9.1 in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (1.19.4) Requirement already satisfied: scipy>=0.14 in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (1.5.4) Requirement already satisfied: h5py in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (3.1.0) Requirement already satisfied: pyyaml in c:\users\msi-pc\appdata\local\programs\python\python39\lib\site-packages (from keras) (5.3.1) I'm grateful if you help me. P. S : I realized that in order to import keras /tensorflow from the second version on (tensorflow>=2.0.0 ) i have to use import tensorflow.keras And everything will be good.
[ "If you're using tensorflow >= 2.0, then import keras using\nfrom tensorflow import keras\n\nCommon convention is to import it as kr\n", "Can you please tell me if you're using multiple versions of python on the same device, if so please check if you've installed TensorFlow on the same version of python which you're using for jupyter notebook, to check that and install again:\n\nGo to the path where you've installed python(which you're using for\njupyter notebook) if you've installed anaconda then go to the path\nwhere anaconda is installed and follow the procedure.\n\nGo to the site-packages folder in the path of Anaconda or python.\n\nCheck if all the dependencies of TensorFlow and TensorFlow are\ninstalled there.\n\nIf you can't find it then add the current python version to\nenvironment variables, see:\nhttps://www.javatpoint.com/how-to-set-python-path#:~:text=SETTING%20PATH%20IN%20PYTHON%201%20Right%20click%20on,on%20Ok%20button%3A%209%20Click%20on%20Ok%20button%3A\nand\nhttps://www.geeksforgeeks.org/how-to-setup-anaconda-path-to-environment-variable/\n\nAfter you've added the current version of python to the environment path\nvariables then follow this link to install TensorFlow:\nhttps://www.geeksforgeeks.org/how-to-install-python-tensorflow-in-windows/#:~:text=%20%20%201%20Step%201%3A%20Click%20on,done%20with%20the%20use%20of%20following...%20More%20\nand\nhttps://machinelearningspace.com/installing-tensorflow-2-0-in-anaconda-environment/\n\n\nThen again follow step 2 and 3 and if it's still not appearing in the site-packages folder then follow this link:\nhttps://www.quora.com/How-can-I-work-with-Keras-on-a-Jupyter-notebook-using-Tensorflow-as-backend\nfor some details(not that helpful)\nAlso, Try installing Keras by this command:\npip3 install Keras\n\nIf you're using one version of python then please check if jupyter and TensorFlow are installed in the same Virtual Environment\nPlease tell me if it works or not.\n", "I am not sure how you have imported keras, but in past even I have faced a similar issue. What I had done is i did something like this\nimport keras\n\nwhich is wrong! We have to import it like this\nfrom tensorflow import keras\n\nwhich worked fine for me! Hope this Helps you!\n" ]
[ 1, 0, 0 ]
[]
[]
[ "anaconda", "jupyter_notebook", "keras", "python", "tensorflow" ]
stackoverflow_0064861794_anaconda_jupyter_notebook_keras_python_tensorflow.txt
Q: Fetching data from GCP cloud storage (avro files) based on last modified date I am in the process of fetching the latest data in Avro format from the GCP cloud storage to Bigquery. I have come across this resource that shows how to do it. Questions Is it possible to get the latest modified Avro file ? Are there metadata files from the GCP storage bucket that can help with this? A: You can use this command to sort files to get the latest file from GCS bucket, you can change the condition based on the requirement. gsutil ls -l gs://[bucket-name]/ | sort -k 2 | tail -n 2 To specifically get the latest .avro file from the GCS Bucket, you can consider this code: from google.cloud import storage import re storage_client = storage.Client() bucket = storage_client.get_bucket('bucket-name') files = bucket.list_blobs() fileList = [file.name for file in files if '.avro' in file.name] latestFile = fileList[0] latestTimeStamp = bucket.get_blob(fileList[0]).updated for i in range(len(fileList)): timeStamp = bucket.get_blob(fileList[i]).updated if timeStamp > latestTimeStamp: latestFile = fileList[i] latestTimeStamp = timeStamp print(latestFile) To know more about Object Metadata you can refer to this document.
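The loop in the answer calls bucket.get_blob once per object, which re-fetches metadata that list_blobs already returned; a more compact variant that relies on the listed metadata (the bucket name is a placeholder) is:
blobs = [b for b in storage_client.list_blobs("bucket-name") if b.name.endswith(".avro")]
latest = max(blobs, key=lambda b: b.updated)  # 'updated' is the last-modified timestamp
print(latest.name, latest.updated)
From there the newest object can be handed to a BigQuery load job with the source format set to Avro.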
Fetching data from GCP cloud storage (avro files) based on last modified date
I am in the process of fetching the latest data in Avro format from the GCP cloud storage to Bigquery. I have come across this resource that shows how to do it. Questions Is it possible to get the latest modified Avro file ? Are there metadata files from the GCP storage bucket that can help with this?
[ "You can use this command to sort files to get the latest file from GCS bucket, you can change the condition based on the requirement.\ngsutil ls -l gs://[bucket-name]/ | sort -k 2 | tail -n 2\n\nTo specifically get the latest .avro file from the GCS Bucket, you can consider this code:\nfrom google.cloud import storage\nimport re\n\nstorage_client = storage.Client()\nbucket = storage_client.get_bucket('bucket-name')\n\nfiles = bucket.list_blobs() \nfileList = [file.name for file in files if '.avro' in file.name]\n \nlatestFile = fileList[0]\nlatestTimeStamp = bucket.get_blob(fileList[0]).updated\n \nfor i in range(len(fileList)):\n \n timeStamp = bucket.get_blob(fileList[i]).updated\n \n if timeStamp > latestTimeStamp:\n latestFile = fileList[i]\n latestTimeStamp = timeStamp\n \nprint(latestFile)\n\nTo know more about Object Metadata you can refer to this document.\n" ]
[ 0 ]
[]
[]
[ "avro", "google_bigquery", "google_cloud_platform", "google_cloud_storage", "python" ]
stackoverflow_0074655356_avro_google_bigquery_google_cloud_platform_google_cloud_storage_python.txt
Q: mysql + sqlalchemy create table auto increment In sqlalchemy, I am trying to create a table with a primary key tenant_id and a different auto increment column tenant_index as below class Tenant(Base): """Data Model for tenants table""" __tablename__ = "tenants" __table_args__ = {"schema": DATABASE} tenant_index = Column( BigInteger, primary_key=True,---->1 nullable=False, autoincrement=True, ) name = Column(String(64), nullable=False) prefix = Column(String(32), nullable=True, unique=True) tenant_id = Column(String(32), primary_key=True, ---->2 nullable=False, server_default="" ) def __repr__(self): return f"<Tenant model {self.tenant_id}>" Table generated mysql> desc tenants; +--------------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +--------------+-------------+------+-----+---------+----------------+ | tenant_index | bigint | NO | PRI | NULL | auto_increment | | name | varchar(64) | NO | | NULL | | | prefix | varchar(32) | YES | UNI | NULL | | | tenant_id | varchar(32) | NO | PRI | | | +--------------+-------------+------+-----+---------+----------------+ Note: it has 2 primary keys -- is it ok to have 2 primary keys in 1 table ? If I remove primary_key=true, from tenant_index column, table is getting created as below mysql> desc tenants; +--------------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +--------------+-------------+------+-----+---------+-------+ | tenant_index | bigint | NO | | NULL | | | name | varchar(64) | NO | | NULL | | | prefix | varchar(32) | YES | UNI | NULL | | | tenant_id | varchar(32) | NO | PRI | | | +--------------+-------------+------+-----+---------+-------+ but when i try to add data, its telling no default value -- which means auto increment is not working mysql> insert into tenants (name,prefix,tenant_id) values("naveen1","kumar1","11"); ERROR 1364 (HY000): Field 'tenant_index' doesn't have a default value what am i doing wrong, please help I need tenant_id as primary key, i need tenant_index as auto-increment A: A table cannot have two primary keys, but a single primary key may include two columns. This means you need to define the primary key constraint separately, not as an attribute of either column. https://docs.sqlalchemy.org/en/14/core/constraints.html#primary-key-constraint shows an example: my_table = Table( "mytable", metadata_obj, Column("id", Integer), Column("version_id", Integer), Column("data", String(50)), PrimaryKeyConstraint("id", "version_id", name="mytable_pk"), ) Edit: I tested some more options and I looked at the SQLAlchemy code. It seems that although MySQL does support any column to be auto-increment as long as the column is the first column in any index (unique or non-unique), SQLAlchemy is more limited. I can't get it to add a column as auto-increment unless it's part of the primary key.
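A declarative sketch of the PrimaryKeyConstraint approach from the answer, adapted to the question's model. Note this defines one composite primary key, not two primary keys, with the integer column listed first so MySQL accepts it as AUTO_INCREMENT; whether SQLAlchemy actually emits AUTO_INCREMENT for a composite-key column depends on the dialect and the explicit autoincrement=True flag, so treat this as an assumption to verify against the generated DDL:
from sqlalchemy import BigInteger, Column, PrimaryKeyConstraint, String

class Tenant(Base):
    __tablename__ = "tenants"
    __table_args__ = (
        PrimaryKeyConstraint("tenant_index", "tenant_id", name="tenants_pk"),
    )
    tenant_index = Column(BigInteger, nullable=False, autoincrement=True)
    tenant_id = Column(String(32), nullable=False)
    name = Column(String(64), nullable=False)
    prefix = Column(String(32), nullable=True, unique=True)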
mysql + sqlalchemy create table auto increment
In sqlalchemy, I am trying to create a table with a primary key tenant_id and a different auto increment column tenant_index as below class Tenant(Base): """Data Model for tenants table""" __tablename__ = "tenants" __table_args__ = {"schema": DATABASE} tenant_index = Column( BigInteger, primary_key=True,---->1 nullable=False, autoincrement=True, ) name = Column(String(64), nullable=False) prefix = Column(String(32), nullable=True, unique=True) tenant_id = Column(String(32), primary_key=True, ---->2 nullable=False, server_default="" ) def __repr__(self): return f"<Tenant model {self.tenant_id}>" Table generated mysql> desc tenants; +--------------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +--------------+-------------+------+-----+---------+----------------+ | tenant_index | bigint | NO | PRI | NULL | auto_increment | | name | varchar(64) | NO | | NULL | | | prefix | varchar(32) | YES | UNI | NULL | | | tenant_id | varchar(32) | NO | PRI | | | +--------------+-------------+------+-----+---------+----------------+ Note: it has 2 primary keys -- is it ok to have 2 primary keys in 1 table ? If I remove primary_key=true, from tenant_index column, table is getting created as below mysql> desc tenants; +--------------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +--------------+-------------+------+-----+---------+-------+ | tenant_index | bigint | NO | | NULL | | | name | varchar(64) | NO | | NULL | | | prefix | varchar(32) | YES | UNI | NULL | | | tenant_id | varchar(32) | NO | PRI | | | +--------------+-------------+------+-----+---------+-------+ but when i try to add data, its telling no default value -- which means auto increment is not working mysql> insert into tenants (name,prefix,tenant_id) values("naveen1","kumar1","11"); ERROR 1364 (HY000): Field 'tenant_index' doesn't have a default value what am i doing wrong, please help I need tenant_id as primary key, i need tenant_index as auto-increment
[ "A table cannot have two primary keys, but a single primary key may include two columns. This means you need to define the primary key constraint separately, not as an attribute of either column.\nhttps://docs.sqlalchemy.org/en/14/core/constraints.html#primary-key-constraint shows an example:\nmy_table = Table(\n \"mytable\",\n metadata_obj,\n Column(\"id\", Integer),\n Column(\"version_id\", Integer),\n Column(\"data\", String(50)),\n PrimaryKeyConstraint(\"id\", \"version_id\", name=\"mytable_pk\"),\n)\n\n\nEdit: I tested some more options and I looked at the SQLAlchemy code. It seems that although MySQL does support any column to be auto-increment as long as the column is the first column in any index (unique or non-unique), SQLAlchemy is more limited. I can't get it to add a column as auto-increment unless it's part of the primary key.\n" ]
[ 0 ]
[]
[]
[ "mysql", "python", "sqlalchemy" ]
stackoverflow_0074667920_mysql_python_sqlalchemy.txt
Q: build speech to text system from scratch using python I am in need of a speech-to-text system so that I can transcribe audio files to text format. While researching on that I found systems created by big companies e.g. Amazon Transcribe, Google Speech to Text, IBM Watson etc. I found that all the Python libraries internally make use of those APIs. What would be the steps if I want to create such a system myself? I could not find any detailed article on that. How do you build your own system for speech recognition? The main reason I want to create my own system is because I cannot send the audio files to external APIs due to security reasons. The main goal: I have recordings of people talking, mostly in English, and I want to transcribe that audio to text. Please let me know if you have any other ideas of doing the same instead of sending audio files to external systems. A: One place to start would be to review the offerings of www.voxforge.org; review the tutorial and forums sections to get an overview of the use of open source projects such as Julius and CMU Sphinx. It's quite an extensive subject and you will find that many people have trodden the path before you, so you can learn from their experience. A: You can run OpenAI's Whisper locally on your own hardware. You'll only need a network connection to download the neural models once. Once that's done, none of the data you will be processing will leave your computer. To have it running at reasonable speed you'll need a beefy GPU setup with CUDA properly configured so that PyTorch can use it. Running it on CPU will be orders of magnitude slower and likely to last for days (depending on your required throughput).
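For reference, a minimal sketch of the local-transcription route the second answer describes, using the open-source openai-whisper package (pip install openai-whisper; the model name and file path here are placeholders):

import whisper

# Downloads the model weights on first run, then works fully offline
model = whisper.load_model("base")

# Transcribe one recording; the language can be pinned to English
result = model.transcribe("recording.wav", language="en")
print(result["text"])

Larger models ("small", "medium", "large") trade speed for accuracy, which is where the GPU advice in the answer comes in.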
build speech to text system from scratch using python
I am in need of a speech-to-text system so that I can transcribe audio files to text format. While researching on that I found systems created by big companies e.g. Amazon Transcribe, Google Speech to Text, IBM Watson etc. I found that all the Python libraries internally make use of those APIs. What would be the steps if I want to create such a system myself? I could not find any detailed article on that. How do you build your own system for speech recognition? The main reason I want to create my own system is because I cannot send the audio files to external APIs due to security reasons. The main goal: I have recordings of people talking, mostly in English, and I want to transcribe that audio to text. Please let me know if you have any other ideas of doing the same instead of sending audio files to external systems.
[ "One place to start would be to review the offerings of www.voxforge.org; review the tutorial and forums sections to get an overview of the use of open source projects such as Julius and CMU Sphinx. It's a quite extensive subject and you will find that many people have trodden the path before you, so you can learn from their experience.\n", "You can run open.ai's whisper locally on your own hardware. You'll only need a network connection to download the neural models once. Once that's done none of the data you will be processing will leave your computer.\nTo have it running at reasonable speed you'll need a beefy GPU setup with cuda properly configured so that pytorch can use it. Running it on CPU will be orders of magnitude slower and likely to last for days (depending on your required throughput).\n" ]
[ 0, 0 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "speech_recognition", "speech_to_text" ]
stackoverflow_0058796931_deep_learning_machine_learning_python_speech_recognition_speech_to_text.txt
Q: Quicksight Dashboard using existing Template I am trying to create a template in Quicksight, so that it allows me to create dashboards with different datasets, but with the same structure. I am using boto3 (Python) and the documentation indicates that a template is capable of creating a dashboard using different datasets, as long as the new dataset has the same structure as the dataset with which the template was generated. However, when I try to create the dashboard, I get the following error: An error occurred (InvalidParameterValueException) when calling the CreateDashboard operation: Given placeholders [test_2] are not part of template It would be helpful if someone could tell me the steps in the code to follow. Thanks a lot! A: Follow link to image here https://i.stack.imgur.com/69rHj.png See line 32 and the description on line 33. This had me going for 2 or 3 hours, too. Same error as yourself. From AWS CLI I derived my QS data set id. That was wrong in my case. Use the TEMPLATE data set id instead. Issue resolved, dashboard created. A: Keep the placeholder the same as the template dataset's, but change the Dataset ARN to point at your target/replacement dataset. I also had the same issue but found the solution this way.
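A hedged boto3 sketch of what both answers boil down to -- the DataSetPlaceholder must be the placeholder name baked into the template, while DataSetArn points at the new dataset. The account ID, IDs and ARNs below are placeholders:

import boto3

quicksight = boto3.client("quicksight")

quicksight.create_dashboard(
    AwsAccountId="123456789012",
    DashboardId="my-new-dashboard",
    Name="My New Dashboard",
    SourceEntity={
        "SourceTemplate": {
            "Arn": "arn:aws:quicksight:us-east-1:123456789012:template/my-template",
            "DataSetReferences": [
                {
                    # Must match the placeholder name used when the template was created
                    "DataSetPlaceholder": "test_2",
                    # The new dataset with the same structure as the template's
                    "DataSetArn": "arn:aws:quicksight:us-east-1:123456789012:dataset/my-dataset-id",
                }
            ],
        }
    },
)

The InvalidParameterValueException from the question appears when the DataSetPlaceholder strings do not match the placeholders stored in the template.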
Quicksight Dashboard using existing Template
I am trying to create a template in Quicksight, so that it allows me to create dashboards with different datasets, but with the same structure. I am using boto3 (Python) and the documentation indicates that a template is capable of creating a dashboard using different datasets, as long as the new dataset has the same structure as the dataset with which the template was generated. However, when I try to create the dashboard, I get the following error: An error occurred (InvalidParameterValueException) when calling the CreateDashboard operation: Given placeholders [test_2] are not part of template It would be helpful if someone could tell me the steps in the code to follow. Thanks a lot!
[ "Follow link to image here\nhttps://i.stack.imgur.com/69rHj.png\nSee line 32 and description on line33.\nThis had me going for 2 or 3 hours, too. Same error as yourself.\nFrom AWS CLI I derived my QS data set id. That was wrong in my case.\nUse the TEMPLATE data set id instead. Issue resolved, dashboard created.\n", "Keep the placeholder same as of template dataset but change Dataset ARN according to your target/replaceable dataset. I also had the same issue. But found the solution\n" ]
[ 0, 0 ]
[]
[]
[ "amazon_quicksight", "amazon_web_services", "boto3", "dashboard", "python" ]
stackoverflow_0070516228_amazon_quicksight_amazon_web_services_boto3_dashboard_python.txt
Q: how could i count longest sequence of 01 in list i need to count longest 01 from list ex: [1,1,1,0,0,1,1,1,0,1,0,1,0,1,0] suppose to print 4 (sequence could also start with 10): 1,0,1,0 = 2 import itertools with open("file.txt", 'r+') as file: file_context = file.read() print(file_context) def func1(arg): global key key = list(arg) print(key) func1(file_context) A = [0,1,0,1] key2 = [ int(x) for x in key ] c=0 k = max(len(list(lent)) for (A[c],lent) in itertools.groupby(A) if A[c]==0 and A[c+1]==1) print(k) A: You can do this with a fairly straightforward double loop - i.e., two iterations checking for 0,1 then 1,0 pairs lst = [1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0] ms = 0 for t in [0, 1], [1, 0]: i, c = 0, 0 while i < len(lst)-1: if lst[i:i+2] == t: c += 1 i += 1 elif c > 0: ms = max(ms, c) c = 0 i += 1 print(max(ms, c)) Output: 4 A: By using a regular expression you can work directly on string-data, no further casting to int, list, ... is needed. Here an example: import re with open("file.txt", 'r+') as file: file_context = file.read() # remove extra whitespaces and commas file_context = file_context.replace(', ', '') # search by pattern matches_01 = re.findall(r'((?:01)+)', file_context) matches_10 = re.findall(r'((?:10)+)', file_context) # get lengths max_01 = len(max(matches_01, key=len)) // 2 max_10 = len(max(matches_10, key=len)) // 2 max_length = max(max_01, max_10) # result print(f'Length longest sequence: {max_length}')
how could i count longest sequence of 01 in list
I need to count the longest run of alternating 0,1 pairs in a list, e.g. [1,1,1,0,0,1,1,1,0,1,0,1,0,1,0] is supposed to print 4. The sequence could also start with 10; for example 1,0,1,0 counts as 2 pairs. import itertools with open("file.txt", 'r+') as file: file_context = file.read() print(file_context) def func1(arg): global key key = list(arg) print(key) func1(file_context) A = [0,1,0,1] key2 = [ int(x) for x in key ] c=0 k = max(len(list(lent)) for (A[c],lent) in itertools.groupby(A) if A[c]==0 and A[c+1]==1) print(k)
[ "You can do this with a fairly straightforward double loop - i.e., two iterations checking for 0,1 then 1,0 pairs\nlst = [1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0]\n\nms = 0\n\nfor t in [0, 1], [1, 0]:\n i, c = 0, 0\n while i < len(lst)-1:\n if lst[i:i+2] == t:\n c += 1\n i += 1\n elif c > 0:\n ms = max(ms, c)\n c = 0\n i += 1\n\nprint(max(ms, c))\n\nOutput:\n4\n\n", "By using a regular expression you can work directly on string-data, no further casting to int, list, ... is needed. Here an example:\nimport re\n\nwith open(\"file.txt\", 'r+') as file:\n file_context = file.read()\n\n# remove extra whitespaces and commas\nfile_context = file_context.replace(', ', '')\n\n# search by pattern\nmatches_01 = re.findall(r'((?:01)+)', file_context)\nmatches_10 = re.findall(r'((?:10)+)', file_context)\n\n# get lengths\nmax_01 = len(max(matches_01, key=len)) // 2\nmax_10 = len(max(matches_10, key=len)) // 2\n\nmax_length = max(max_01, max_10)\n\n# result\nprint(f'Length longest sequence: {max_length}')\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074667264_python.txt
Q: OpenAI Gym Manual Play function automatically presses key import gym from gym.utils import play play.play(gym.make('MountainCar-v0', render_mode='rgb_array').env, zoom=1, keys_to_action={"0":0, "2":2, "1":1}) The above code is all that is needed to play MountainCar manually. The controls are as follows: 0 = nothing 1 = left 2 = right However when I run the code, if I'm not pressing anything, the car automatically goes to the left as if the "1" key on the keyboard is being pressed down. I've tried searching through the docs for a solution but no luck. https://github.com/openai/gym/blob/master/gym/utils/play.py A: import gym from gym.utils import play env = play.play(gym.make('MountainCar-v0', render_mode='rgb_array').env, zoom=1, keys_to_action={"2":2, "1":0}, noop=1) the noop sets the default action
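For context on why noop=1 works: in MountainCar-v0 the discrete action space is 0 = push left, 1 = no push, 2 = push right (the legend in the question has 0 and 1 swapped), so play()'s default no-key action of 0 is exactly what keeps driving the car left. A sketch with commented bindings, using the same API as the answer:

import gym
from gym.utils import play

# MountainCar-v0 actions: 0 = push left, 1 = no push, 2 = push right
play.play(
    gym.make('MountainCar-v0', render_mode='rgb_array').env,
    zoom=1,
    keys_to_action={"1": 0, "2": 2},  # key "1" pushes left, key "2" pushes right
    noop=1,  # releasing all keys now means "no push"
)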
OpenAI Gym Manual Play function automatically presses key
import gym from gym.utils import play play.play(gym.make('MountainCar-v0', render_mode='rgb_array').env, zoom=1, keys_to_action={"0":0, "2":2, "1":1}) The above code is all that is needed to play MountainCar manually. The controls are as follows: 0 = nothing 1 = left 2 = right However when I run the code, if I'm not pressing anything, the car automatically goes to the left as if the "1" key on the keyboard is being pressed down. I've tried searching through the docs for a solution but no luck. https://github.com/openai/gym/blob/master/gym/utils/play.py
[ "import gym\nfrom gym.utils import play\nenv = play.play(gym.make('MountainCar-v0', render_mode='rgb_array').env, zoom=1, keys_to_action={\"2\":2, \"1\":0}, noop=1)\n\nthe noop sets the default action\n" ]
[ 0 ]
[]
[]
[ "openai_gym", "python" ]
stackoverflow_0074658467_openai_gym_python.txt
Q: Connecting the board to the player objects Having coded for the players for my boardgame, I am facing difficulties with creating the board and connectying it to the players. The board is a list containig 10 slots, where each slot is a string with a hidden from the player letter (A,B,C,D,E,F,G,I,J,K). The letter is only known to the owner of the slot. At the beggining of the game all players are placed in the very first slot Every time the player(object) throws a dice, it moves according to the dice. Code for the players(no problem here) from dataclasses import dataclass @dataclass class Player: firstname: str lastname: str coins: int slot: int def full_info(self) -> str: return f"{self.firstname} {self.lastname} {self.coins} {self.slot}" @classmethod def from_user_input(cls) -> 'Player': return cls( firstname=input("Please enter your first name:"), lastname=input("Please enter your second name: "), coins=100, slot= 0) player1 = Player.from_user_input() Player(firstname='', lastname='', coins=100, slot= 0) player2 = Player.from_user_input() Player(firstname='', lastname='', coins=100, slot= 0) playersingame = [player1, player2] print(playersingame) The board is printing only the emtpty slots, it does not show players in the slot. In the attributes of my players I put slot= 0, when I run the code it does not show that. board = [None] *10 print(board) board.insert(0, player1) board.insert(0, player2) print(board)``` A: Here; I modified the player creation a little too. from dataclasses import dataclass @dataclass class Player: firstname: str lastname: str coins: int slot: int def full_info(self) -> str: return f"{self.firstname} {self.lastname} {self.coins} {self.slot}" @classmethod def from_user_input(cls) -> 'Player': return cls( firstname=input("Please enter your first name:"), lastname=input("Please enter your second name: "), coins=100, slot= 0) # Creating players minplayer, maxplayer, n = 2, 5, -1 while not(minplayer <= n <= maxplayer): n = int(input(f"Please choose a number of players between {minplayer} and {maxplayer}: ")) playersingame = [] for i in range(n): playersingame.append(Player.from_user_input()) print([player.full_info() for player in playersingame]) board = [[] for i in range(10)] print(board) for player in playersingame: board[player.slot].append(player) print(board) Example: Please choose a number of players between 2 and 5: 3 Please enter your first name:Tom Please enter your second name: Bombadil Please enter your first name:Frodo Please enter your second name: Baggins Please enter your first name:Saruman Please enter your second name: ZeWhite ['Tom Bombadil 100 0', 'Frodo Baggins 100 0', 'Saruman ZeWhite 100 0'] [[], [], [], [], [], [], [], [], [], []] [[Player(firstname='Tom', lastname='Bombadil', coins=100, slot=0), Player(firstname='Frodo', lastname='Baggins', coins=100, slot=0), Player(firstname='Saruman', lastname='ZeWhite', coins=100, slot=0)], [], [], [], [], [], [], [], [], []] Here's code for one round (you can wrap it in a loop afterwards to implement the full game): # Playing a round import random for player in playersingame: input(f"{player.firstname} {player.lastname}, please press enter to roll your die...") die = random.randint(1,6) print(f"You take {die} step{'s'*(die>1)} forward") board[player.slot].remove(player) player.slot += die board[player.slot].append(player) print(board) Example: Tom Bombadil, please press enter to roll your die... You take 3 steps forward Frodo Baggins, please press enter to roll your die... 
You take 1 step forward Saruman ZeWhite, please press enter to roll your die... You take 4 steps forward [[], [Player(firstname='Frodo', lastname='Baggins', coins=100, slot=1)], [], [Player(firstname='Tom', lastname='Bombadil', coins=100, slot=3)], [Player(firstname='Saruman', lastname='ZeWhite', coins=100, slot=4)], [], [], [], [], []]
Connecting the board to the player objects
Having coded the players for my boardgame, I am facing difficulties with creating the board and connecting it to the players. The board is a list containing 10 slots, where each slot is a string with a letter hidden from the player (A,B,C,D,E,F,G,I,J,K). The letter is only known to the owner of the slot. At the beginning of the game all players are placed in the very first slot. Every time a player (object) throws a die, it moves according to the roll. Code for the players (no problem here): from dataclasses import dataclass @dataclass class Player: firstname: str lastname: str coins: int slot: int def full_info(self) -> str: return f"{self.firstname} {self.lastname} {self.coins} {self.slot}" @classmethod def from_user_input(cls) -> 'Player': return cls( firstname=input("Please enter your first name:"), lastname=input("Please enter your second name: "), coins=100, slot= 0) player1 = Player.from_user_input() Player(firstname='', lastname='', coins=100, slot= 0) player2 = Player.from_user_input() Player(firstname='', lastname='', coins=100, slot= 0) playersingame = [player1, player2] print(playersingame) The board is printing only the empty slots; it does not show players in the slot. In the attributes of my players I put slot=0, but when I run the code it does not show that. board = [None] *10 print(board) board.insert(0, player1) board.insert(0, player2) print(board)
[ "Here; I modified the player creation a little too.\nfrom dataclasses import dataclass\n@dataclass\nclass Player: \n firstname: str\n lastname: str\n coins: int\n slot: int\n def full_info(self) -> str:\n return f\"{self.firstname} {self.lastname} {self.coins} {self.slot}\"\n\n @classmethod\n def from_user_input(cls) -> 'Player':\n return cls(\n firstname=input(\"Please enter your first name:\"),\n lastname=input(\"Please enter your second name: \"),\n coins=100,\n slot= 0)\n\n# Creating players\nminplayer, maxplayer, n = 2, 5, -1\nwhile not(minplayer <= n <= maxplayer):\n n = int(input(f\"Please choose a number of players between {minplayer} and {maxplayer}: \"))\nplayersingame = []\nfor i in range(n):\n playersingame.append(Player.from_user_input())\n\n\nprint([player.full_info() for player in playersingame])\n\nboard = [[] for i in range(10)]\nprint(board)\n\nfor player in playersingame:\n board[player.slot].append(player)\n\nprint(board)\n\nExample:\nPlease choose a number of players between 2 and 5: 3\n\nPlease enter your first name:Tom\n\nPlease enter your second name: Bombadil\n\nPlease enter your first name:Frodo\n\nPlease enter your second name: Baggins\n\nPlease enter your first name:Saruman\n\nPlease enter your second name: ZeWhite\n['Tom Bombadil 100 0', 'Frodo Baggins 100 0', 'Saruman ZeWhite 100 0']\n[[], [], [], [], [], [], [], [], [], []]\n[[Player(firstname='Tom', lastname='Bombadil', coins=100, slot=0), Player(firstname='Frodo', lastname='Baggins', coins=100, slot=0), Player(firstname='Saruman', lastname='ZeWhite', coins=100, slot=0)], [], [], [], [], [], [], [], [], []]\n\nHere's code for one round (you can wrap it in a loop afterwards to implement the full game):\n# Playing a round\nimport random\n\nfor player in playersingame:\n input(f\"{player.firstname} {player.lastname}, please press enter to roll your die...\")\n die = random.randint(1,6)\n print(f\"You take {die} step{'s'*(die>1)} forward\")\n board[player.slot].remove(player)\n player.slot += die\n board[player.slot].append(player)\n \nprint(board)\n\nExample:\nTom Bombadil, please press enter to roll your die...\nYou take 3 steps forward\n\nFrodo Baggins, please press enter to roll your die...\nYou take 1 step forward\n\nSaruman ZeWhite, please press enter to roll your die...\nYou take 4 steps forward\n[[], [Player(firstname='Frodo', lastname='Baggins', coins=100, slot=1)], [], [Player(firstname='Tom', lastname='Bombadil', coins=100, slot=3)], [Player(firstname='Saruman', lastname='ZeWhite', coins=100, slot=4)], [], [], [], [], []]\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074667709_python.txt
Q: Select column containing several value with range of number in pandas If I have a dataframe ` A Variant&Price Qty AAC 7:124|25: 443 1 AAD 35:|35: 1 AAS 32:98|3:40 1 AAG 2: |25: 1 AAC 25:443|26:344 1 and I want to get variant which has one of its values is below 7 A Variant&Price Qty AAC 7:124|25: 443 1 AAS 32:9|3:40 1 AAG 2: |25: 1 Note that first digit is the variant, as well as the third digit (variant always before :) I can apply this code, split_df = df['Variant&Price'].str.split(':|\|', expand=True) print(df[split_df.iloc[:, [0,2]].astype(int).min(axis=1) <= 7]) But what if I want to get, instead of 7, it is now range from 2 to 7. I ve tried >=2 & <=7 but not working A: You can use a regex to extractall the number before :, convert to integer and check if any is between 2 and 7: m = (df['Variant&Price'].str.extractall('(\d+):')[0] .astype(int).between(2,7).groupby(level=0).any() ) out = df[m] Output: A Variant&Price Qty 0 AAC 7:124|25: 443 1 2 AAS 32:98|3:40 1 3 AAG 2: |25: 1 A: cond1 = (df['Variant&Price'].str.split('|').explode() .str.split(':').str[0] .astype('int') .between(2, 7).max(level=0)) df[cond1] output: Am Variant&Price Qty 0 AAC 7:124|25: 443 1 2 AAS 32:98|3:40 1 3 AAG 2: |25: 1
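One caveat on the second answer: the level keyword in Series aggregations like .max(level=0) was deprecated in pandas 1.x and removed in pandas 2.0; on current pandas the same logic reads as an explicit groupby (otherwise unchanged from that answer -- explode keeps the original row index, so grouping on it gives one boolean per row):

cond1 = (df['Variant&Price'].str.split('|').explode()
           .str.split(':').str[0]
           .astype('int')
           .between(2, 7)
           .groupby(level=0).max())

df[cond1]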
Select column containing several value with range of number in pandas
If I have a dataframe A Variant&Price Qty AAC 7:124|25: 443 1 AAD 35:|35: 1 AAS 32:98|3:40 1 AAG 2: |25: 1 AAC 25:443|26:344 1 and I want to get the variants where one of the values is below 7: A Variant&Price Qty AAC 7:124|25: 443 1 AAS 32:98|3:40 1 AAG 2: |25: 1 Note that the first number is a variant, as is the third number (the variant always comes before the :). I can apply this code, split_df = df['Variant&Price'].str.split(':|\|', expand=True) print(df[split_df.iloc[:, [0,2]].astype(int).min(axis=1) <= 7]) But what if, instead of below 7, I want a range from 2 to 7? I've tried >=2 & <=7 but it's not working.
[ "You can use a regex to extractall the number before :, convert to integer and check if any is between 2 and 7:\nm = (df['Variant&Price'].str.extractall('(\\d+):')[0]\n .astype(int).between(2,7).groupby(level=0).any()\n )\n\nout = df[m]\n\nOutput:\n A Variant&Price Qty\n0 AAC 7:124|25: 443 1\n2 AAS 32:98|3:40 1\n3 AAG 2: |25: 1\n\n", "cond1 = (df['Variant&Price'].str.split('|').explode()\n .str.split(':').str[0]\n .astype('int')\n .between(2, 7).max(level=0))\n\ndf[cond1]\n\noutput:\n Am Variant&Price Qty\n0 AAC 7:124|25: 443 1\n2 AAS 32:98|3:40 1\n3 AAG 2: |25: 1\n\n" ]
[ 3, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074667993_pandas_python.txt
Q: python program not running in html webpage this is my HTML code: <!DOCTYPE html> <html lang="en"> <head> <title>pyscript demo</title> <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> </head> <body> <py-script src="pythonfile.py"></py-script> </body> </html> and this is my Python program code: lst = [["a", 45], ["b", 40], ["c", 18], ["d", 17]] name = input("Enter your name:") print("Searching in list") for item in lst: if item[0] == name: print("name:", item[0], "age:", item[1]) I have tried to run the Python program in an HTML web page; the HTML web page is working, but the Python code is not running in the HTML web page. A: If you have VS Code, install the "Live Server" extension and open Live Server, or open a terminal/command prompt in the same folder. Assuming you have named your HTML file "index.html", enter the following command in the terminal: python3 -m http.server 8000 then open http://0.0.0.0:8000 in your browser A: You are facing the issue below: Access to fetch at 'pythonfile.py' from origin 'null' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, isolated-app, chrome-extension, chrome, https, chrome-untrusted. You can try this: instead of attaching a Python file, write it inline. <py-script> lst = [["a", 45], ["b", 40], ["c", 18], ["d", 17]] name = input("Enter your name:") print("Searching in list") for item in lst: if item[0] == name: print("name:", item[0], "age:", item[1]) </py-script> A: When something does not work, please describe what is happening instead. In a web browser you can get more information from the developer tools Console, which can be opened by the menu or probably by pressing F12 or Ctrl+Shift+I (See here). As GauravGiri already answered, the most probable reason is the difference between opening an HTML file locally and visiting it through a webserver: HTML sites are not allowed to read the contents of other local files on your computer, while accessing them from the same webserver is allowed.
python program not running in html webpage
this is my HTML code: <!DOCTYPE html> <html lang="en"> <head> <title>pyscript demo</title> <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> </head> <body> <py-script src="pythonfile.py"></py-script> </body> </html> and this is my Python program code: lst = [["a", 45], ["b", 40], ["c", 18], ["d", 17]] name = input("Enter your name:") print("Searching in list") for item in lst: if item[0] == name: print("name:", item[0], "age:", item[1]) I have tried to run the Python program in an HTML web page; the HTML web page is working, but the Python code is not running in the HTML web page.
[ "If you have vs-code, install \"Live-Server\" extension and open live-server.\nor\nopen terminal/command-prompt in the same folder. Assuming you have named your html file as \"index.html\", enter following command in terminal:\npython3 -m http.server 8000\nthen open http://0.0.0.0:8000 in your browser\n", "You are facing issue as mentioned below.\nAccess to fetch at 'pythonfile.py' from origin 'null' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, isolated-app, chrome-extension, chrome, https, chrome-untrusted.\n\nthen you can try this instead of attaching a python file write it inline.\n<py-script>\n lst = [[\"a\", 45], [\"b\", 40], [\"c\", 18], [\"d\", 17]]\n\n name = input(\"Enter your name:\")\n\n print(\"Searching in list\")\n for item in lst:\n if item[0] == name:\n print(\"name:\", item[0], \"age:\", item[1])\n</py-script>\n\n", "When some thing does not work, please describe what is happening instead. In a web browser you can get more information from the developer tools Console, which can be opened by the menu or probably by pressing F12 or Ctrl+Shift+I (See here).\nAs GauravGiri already answered, the most probable reason is the difference between opening an HTML file locally and visiting it through a webserver:\nHTML Sites are not allowed to read the contents of other local files on your computer, while accessing them from the same webserver is allowed.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "html", "python", "python_3.x" ]
stackoverflow_0074667822_html_python_python_3.x.txt
Q: Expression of double summation in python I am trying to solve an optimization problem using Pulp in Python, but I'm having some trouble expressing my constraints. def Kakuro(M): prob = pulp.LpProblem() rows = range(1,4) cols = range(1,4) vals = range(1,10) X = pulp.LpVariable.dicts("X",(rows,cols,vals),cat='Binary') for i in rows: for j in cols: prob += sum([X[i][j][k] for k in vals]) == 1 for i in rows: for k in vals: prob += sum([X[i][j][k] for j in cols]) == M[i][0] #This is expressing x_111+x_121+x_131=M[i]; x_112+x_122+x_132=M[i]... for j in cols: for k in vals: prob += sum([X[i][j][k] for i in rows]) == M[0][j] prob.solve() #prob.solve(pulp.PULP_CBC_CMD(msg=0)) solution = np.zeros((4,4)) for i in rows: solution[i][0] = M[i][0] for j in cols: solution[0][j]=M[0][j] for i in rows: for j in cols: for k in vals: if X[i][j][k].value() == 1: solution[i,j] = k return solution For the second and third for-loop, what I want to express is an equation like x_{111}+x_{121}+x_{131}+x_{112}+ ... +x_{119}+x_{129}+x_{139}=M[i][0], but currently it is only x_111+x_121+x_131=M[i][0]. How can I have 2 loops inside the prob constraint? A: If I understand what you are trying to do, you can just augment the summation expression to include 2 variables... Something like: for i in rows: prob += sum(X[i][j][k] for j in cols for k in vals) == M[i][0]
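As a follow-up to the answer: PuLP's lpSum is the idiomatic (and, on big models, faster) way to build these double summations. A sketch of both the row and the column constraints, reusing rows, cols, vals, X and M from the question:

import pulp

# Row sums: x_{i11} + x_{i21} + ... + x_{i39} == M[i][0] for every row i
for i in rows:
    prob += pulp.lpSum(X[i][j][k] for j in cols for k in vals) == M[i][0]

# Column sums: the analogous constraint for every column j
for j in cols:
    prob += pulp.lpSum(X[i][j][k] for i in rows for k in vals) == M[0][j]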
Expression of double summation in python
I am trying to solve an optimization problem using Pulp in Python, but I'm having some trouble expressing my constraints. def Kakuro(M): prob = pulp.LpProblem() rows = range(1,4) cols = range(1,4) vals = range(1,10) X = pulp.LpVariable.dicts("X",(rows,cols,vals),cat='Binary') for i in rows: for j in cols: prob += sum([X[i][j][k] for k in vals]) == 1 for i in rows: for k in vals: prob += sum([X[i][j][k] for j in cols]) == M[i][0] #This is expressing x_111+x_121+x_131=M[i]; x_112+x_122+x_132=M[i]... for j in cols: for k in vals: prob += sum([X[i][j][k] for i in rows]) == M[0][j] prob.solve() #prob.solve(pulp.PULP_CBC_CMD(msg=0)) solution = np.zeros((4,4)) for i in rows: solution[i][0] = M[i][0] for j in cols: solution[0][j]=M[0][j] for i in rows: for j in cols: for k in vals: if X[i][j][k].value() == 1: solution[i,j] = k return solution For the second and third for-loop, what I want to express is an equation like x_{111}+x_{121}+x_{131}+x_{112}+ ... +x_{119}+x_{129}+x_{139}=M[i][0], but currently it is only x_111+x_121+x_131=M[i][0]. How can I have 2 loops inside the prob constraint?
[ "If I understand what you are trying to do, you can just augment the summation expression to include 2 variables... Something like:\nfor i in cols:\n prob += sum(x[i][j][k] for j in rows for k in vals) <= M[i][0]\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "optimization", "pulp", "python" ]
stackoverflow_0074665043_for_loop_optimization_pulp_python.txt
Q: Creating an Equal Area Spatial Grid Over a Large Area (R or Python) I am facing a challenge trying to create a 12km spatial grid covering the African continent with open source tools. The main challenge appears to be that most of these tools are based on projected (metric) coordinate reference systems (CRS), which are inaccurate for very large areas. I need grid creating software based on geographic CRS. To illustrate the problem in R: library(sf) #> Linking to GEOS 3.9.1, GDAL 3.2.3, PROJ 7.2.1; sf_use_s2() is TRUE library(magrittr) # Bounding box for Africa africa_bbox <- rbind(c(-26, 55), c(-36, 38)) dimnames(africa_bbox) <- list(c("lon", "lat"), c("min", "max")) africa_bbox %<>% t() print(africa_bbox) #> lon lat #> min -26 -36 #> max 55 38 # Creating a geometry africa_sfc <- africa_bbox %>% as.data.frame() %>% st_as_sf(coords = c("lon", "lat"), crs = "EPSG:4326") %>% st_bbox() %>% st_as_sfc() print(africa_sfc) #> Geometry set for 1 feature #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -26 ymin: -36 xmax: 55 ymax: 38 #> Geodetic CRS: WGS 84 #> POLYGON ((-26 -36, 55 -36, 55 38, -26 38, -26 -... st_area(africa_sfc) # Area of grid #> 7.706798e+13 [m^2] # Now this unfortunately does not work with Geodetic CRS st_make_grid(africa_sfc, cellsize = c(12000, 12000)) #> Geometry set for 1 feature #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -26 ymin: -36 xmax: 11974 ymax: 11964 #> Geodetic CRS: WGS 84 #> POLYGON ((-26 -36, 11974 -36, 11974 11964, -26 ... # To make it work I need to project to metric CRS. # I use UTM 34, which is in the center of Africa, see: https://www.dmap.co.uk/utmworld.htm africa_sfc_metric <- africa_sfc %>% st_transform("EPSG:32634") print(africa_sfc_metric) #> Geometry set for 1 feature #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -3842510 ymin: -5189967 xmax: 3613422 ymax: 5419593 #> Projected CRS: WGS 84 / UTM zone 34N #> POLYGON ((-3842510 -5189967, 3613422 -4567059, ... # Now computing the grid. africa_12km <- st_make_grid(africa_sfc_metric, cellsize = c(12000, 12000)) head(africa_12km, 3) #> Geometry set for 3 features #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -3842510 ymin: -5189967 xmax: -3806510 ymax: -5177967 #> Projected CRS: WGS 84 / UTM zone 34N #> POLYGON ((-3842510 -5189967, -3830510 -5189967,... #> POLYGON ((-3830510 -5189967, -3818510 -5189967,... #> POLYGON ((-3818510 -5189967, -3806510 -5189967,... length(africa_12km) # Number of squares #> [1] 550470 areas = st_area(africa_12km) all(unclass(signif(areas, 4)) == 12000^2) # Checking sizes #> [1] TRUE sum(areas) / st_area(africa_sfc) # Grid is 2.85% too large #> 1.028542 [1] # To put this into perspective, I compute the area of a 12km border around the continent perimeter_12km_area <- africa_sfc %>% st_cast("MULTILINESTRING") %>% st_length() %>% multiply_by(12000) # That's 0.5% of the area, so the 2.85% too large is significant perimeter_12km_area / st_area(africa_sfc) #> 0.004725914 [1/m] Created on 2022-12-02 by the reprex package (v2.0.1) Now of course I could somehow reverse engineer the Haversine Formula to come up with a program that creates an accurate grid over a large area, but first I would like to ask if there are already software solutions to this (preferably R or Python) that I am not aware of. A: I think you are trying to do the impossible. 
You can either have a 12km x 12km grid in a projected CRS that is approximately 12km x 12km and approximately square on the ground, or you have a regular Z by Z degrees grid in a lat-long projection that is approximately square and approximately 12km x 12 km on the ground. You can't chop a spherical surface into exact 12km x 12km squares where the distance is measured along the surface.
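If exactly equal cell areas matter more than equal side lengths, one practical compromise along the lines of that answer is to tile in an equal-area projection instead. A rough Python sketch using geopandas and the Africa Albers Equal Area Conic CRS (ESRI:102022) -- every cell then covers exactly 12 km x 12 km of area, though cell shape still distorts away from the projection centre:

import geopandas as gpd
import numpy as np
from shapely.geometry import box

# Bounding box for Africa from the question, reprojected to an equal-area CRS
bbox = gpd.GeoSeries([box(-26, -36, 55, 38)], crs="EPSG:4326").to_crs("ESRI:102022")
xmin, ymin, xmax, ymax = bbox.total_bounds

cell = 12_000  # metres
cells = [
    box(x, y, x + cell, y + cell)
    for x in np.arange(xmin, xmax, cell)
    for y in np.arange(ymin, ymax, cell)
]
grid = gpd.GeoDataFrame(geometry=cells, crs="ESRI:102022")
print(len(grid), grid.area.iloc[0])  # each cell is exactly 144 km^2 in this CRS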
Creating an Equal Area Spatial Grid Over a Large Area (R or Python)
I am facing a challenge trying to create a 12km spatial grid covering the African continent with open source tools. The main challenge appears to be that most of these tools are based on projected (metric) coordinate reference systems (CRS), which are inaccurate for very large areas. I need grid creating software based on geographic CRS. To illustrate the problem in R: library(sf) #> Linking to GEOS 3.9.1, GDAL 3.2.3, PROJ 7.2.1; sf_use_s2() is TRUE library(magrittr) # Bounding box for Africa africa_bbox <- rbind(c(-26, 55), c(-36, 38)) dimnames(africa_bbox) <- list(c("lon", "lat"), c("min", "max")) africa_bbox %<>% t() print(africa_bbox) #> lon lat #> min -26 -36 #> max 55 38 # Creating a geometry africa_sfc <- africa_bbox %>% as.data.frame() %>% st_as_sf(coords = c("lon", "lat"), crs = "EPSG:4326") %>% st_bbox() %>% st_as_sfc() print(africa_sfc) #> Geometry set for 1 feature #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -26 ymin: -36 xmax: 55 ymax: 38 #> Geodetic CRS: WGS 84 #> POLYGON ((-26 -36, 55 -36, 55 38, -26 38, -26 -... st_area(africa_sfc) # Area of grid #> 7.706798e+13 [m^2] # Now this unfortunately does not work with Geodetic CRS st_make_grid(africa_sfc, cellsize = c(12000, 12000)) #> Geometry set for 1 feature #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -26 ymin: -36 xmax: 11974 ymax: 11964 #> Geodetic CRS: WGS 84 #> POLYGON ((-26 -36, 11974 -36, 11974 11964, -26 ... # To make it work I need to project to metric CRS. # I use UTM 34, which is in the center of Africa, see: https://www.dmap.co.uk/utmworld.htm africa_sfc_metric <- africa_sfc %>% st_transform("EPSG:32634") print(africa_sfc_metric) #> Geometry set for 1 feature #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -3842510 ymin: -5189967 xmax: 3613422 ymax: 5419593 #> Projected CRS: WGS 84 / UTM zone 34N #> POLYGON ((-3842510 -5189967, 3613422 -4567059, ... # Now computing the grid. africa_12km <- st_make_grid(africa_sfc_metric, cellsize = c(12000, 12000)) head(africa_12km, 3) #> Geometry set for 3 features #> Geometry type: POLYGON #> Dimension: XY #> Bounding box: xmin: -3842510 ymin: -5189967 xmax: -3806510 ymax: -5177967 #> Projected CRS: WGS 84 / UTM zone 34N #> POLYGON ((-3842510 -5189967, -3830510 -5189967,... #> POLYGON ((-3830510 -5189967, -3818510 -5189967,... #> POLYGON ((-3818510 -5189967, -3806510 -5189967,... length(africa_12km) # Number of squares #> [1] 550470 areas = st_area(africa_12km) all(unclass(signif(areas, 4)) == 12000^2) # Checking sizes #> [1] TRUE sum(areas) / st_area(africa_sfc) # Grid is 2.85% too large #> 1.028542 [1] # To put this into perspective, I compute the area of a 12km border around the continent perimeter_12km_area <- africa_sfc %>% st_cast("MULTILINESTRING") %>% st_length() %>% multiply_by(12000) # That's 0.5% of the area, so the 2.85% too large is significant perimeter_12km_area / st_area(africa_sfc) #> 0.004725914 [1/m] Created on 2022-12-02 by the reprex package (v2.0.1) Now of course I could somehow reverse engineer the Haversine Formula to come up with a program that creates an accurate grid over a large area, but first I would like to ask if there are already software solutions to this (preferably R or Python) that I am not aware of.
[ "I think you are trying to do the impossible. You can either have a 12km x 12km grid in a projected CRS that is approximately 12km x 12km and approximately square on the ground, or you have a regular Z by Z degrees grid in a lat-long projection that is approximately square and approximately 12km x 12 km on the ground.\nYou can't chop a spherical surface into exact 12km x 12km squares where the distance is measured along the surface.\n" ]
[ 0 ]
[]
[]
[ "geospatial", "python", "r" ]
stackoverflow_0074655449_geospatial_python_r.txt
Q: VSCode 1.39.x & Python 3.7.x: "ImportError: attempted relative import with no known parent package" - when started without debugging (CTRL+F5)) when running Python test from withing VS Code using CTRL+F5 I'm getting error message ImportError: attempted relative import with no known parent package when running Python test from VS Code terminal by using command line python test_HelloWorld.py I'm getting error message ValueError: attempted relative import beyong top-level package Here is the project structure How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA! [Update] I have got the following solution using sys.path correction: import sys from pathlib import Path sys.path[0] = str(Path(sys.path[0]).parent) but I guess there still could be a more effective solution without source code corrections by using some (VS Code) settings or Python running context/environment settings (files)? A: You're bumping into two issues. One is you're running your test file from within the directory it's written, and so Python doesn't know what .. represents. There are a couple of ways to fix this. One is to take the solution that @lesiak proposed by changing the import to from solutions import helloWorldPackage but to execute your tests by running python tests/test_helloWorld.py. That will make sure that your project's top-level is in Python's search path and so it will see solutions. The other solution is to open your project in VS Code one directory higher (whatever directory that contains solutions and tests). You will still need to change how you execute your code, though, so you are doing it from the top-level as I suggested above. Even better would be to either run your code using python -m tests.test_helloWorld, use the Python extension's Run command, or use the extension's Test Explorer. All of those options should help you with how to run your code (you will still need to either change the import or open the higher directory in VS Code). A: Do not use relative import. Simply change it to from solutions import helloWorldPackage as hw Update I initially tested this in PyCharm. PyCharm has a nice feature - it adds content root and source roots to PYTHONPATH (both options are configurable). You can achieve the same effect in VS Code by adding a .env file: PYTHONPATH=.:${PYTHONPATH} Now, the project directory will be in the PYTHONPATH for every tool that is launched via VS Code. Now Ctrl+F5 works fine. 
A: Setup a main module and its source packages paths Solution found at: https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to Which also provide a neat in-depth video explanation The solution to the attempted relative import with no known parent package issue, which is especially tricky in VScode (in opposite to Pycharm that provide GUI tools to flag folders as package), is to: Add configuration files for the VScode debugger Id Est add launch.json as a Module (this will always execute the file given in "module" key) and settings.json inside the MyProjectRoot/.vscode folder (manually add them if it's not there yet, or be guided by VScode GUI for Run & Debug) launch.json setup Id Est add an "env" key to launch.json containing an object with "PYTHONPATH" as key, and "${workspaceFolder}/mysourcepackage" as value final launch.json configuration settings.json setup Id Est add a "python.analysis.extraPaths" key to settings.json containing a list of paths for the debugger to be aware of, which in our case is one ["${workspaceFolder}/mysourcepackage"] as value (note that we put the string in a list only for the case in which we want to include other paths too, it is not needed for our specific example but it's still a standard de facto as I know) final settings.json configuration This should be everything needed to both work by calling the script with python from the terminal and from the VScode debugger. A: An Answer From 2022 Here's a potential approach from 2022. The issue is identified correctly and if you're using an IDE like VS Code, it doesn't automatically extend the python path and discover modules. One way you can do this using an .env file that will automatically extend the python path. I used this website k0nze.dev repeatedly to find an answer and actually discovered another solution. Here are the drawbacks of the solution provided in the k0nze.dev solution: It only extends the python path via the launch.json file which doesn't effect running python outside of the debugger in this case You can only use the ${workspaceFolder} and other variables within an "env" variable in the launch.json, which gets overwritten in precedence by the existence of a .env file. The solution works only within VS Code since it has to be written within the launch.json (- overall portability) The .env File In your example tests falls under it's own directory and has it's own init.py. In an IDE like VS Code, it's not going to automatically discover this directory and module. You can see this by creating the below script anywhere in your project and running it: _path.py from sys import path as pythonpath print("\n ,".join(pythonpath)) You shouldn't see your ${workspaceFolder}/tests/ or if you do, it's because your _path.py script is sitting in that directory and python automatically adds the script path to pythonpath. To solve this issue across your project, you need to extend the python path using .env file across all files in your project. To do this, use dot notation to indicate your ${workspaceFolder} in lieu of being able to actually use ${workspaceFolder}. You have to do dot notation because .env files do not do variable assignment like ${workspaceFolder}. 
Your env file should look like: Windows PYTHONPATH = ".\\tests\\;.\\" Mac / Linux / etc PYTHONPATH = "./tests/:./" where: ; and : are the path separators for environment variables for windows and Mac respectively ./tests/ and .\tests\ extend python path to the files within the module tests for import in the init.py ./ and .\ extend the python path to modules tests and presumably solutions? I don't know if solutions is a module but I'm going to run with it. Test It Out Now re-run your _path.py script and you should see permanent additions to your path. This works for deeply nested modules as well if your company has a more stringent project structure. VS Code If you are using VS Code, you cannot use environment variables provided by VS Code in the .env file. This includes ${workspaceFolder}, which is very handy to extend a file path to your currently open folder. I've beaten myself up trying to figure out why it's not adding these environment variables to the path for a very long time now and it seems The solution is instead to use dot notation to prepend the path by using relative file path. This allows the user to append a file path relative to the project structure and not their own file structure. For Other IDE's The reason the above is written for VS Code is because it automatically reads in the .env file every time you run a python file. This functionality is very handy and unless your IDE does this, you will need the help of the dotenv package. You can actually see the location that your version of VS Code is looking for by searching for the below setting in your preferences: VSCode settings env file Anyways, to install the package you need to import .env files with, run: pip install python-dotenv In your python script, you need to run and import the below to get it to load the .env file as your environment variables: from dotenv import load_dotenv() # load env variables load_dotenv() """ The rest of your code here """ That's It Congrats on making it to the bottom. This topic nearly drove me insane when I went to tackle it but I think it's helpful to be elaborate and to understand the issue and how to tackle it without doing hacky sys.path appends or absolute file paths. This also gives you a way to test what's on your path and an explanation of why each path is added in your project structure. A: I was just going through this with VS Code and Python (using Win10) and found a solution. Below is my project folder. Files in folder "core" import functions from folder "event", and files in folder "unit tests" import functions from folder "core". I could run and debug the top-level file file_gui_tk.py within VS Code but I couldn't run/debug any of the files in the sub-folders due to import errors. I believe the issue is that when I try to run/debug those files, the working directory is no longer the project directory and consequently the import path declarations no longer work. 
Folder Structure: testexample core __init__.py core_os.py dir_parser.py events __inits__.py event.py unit tests list_files.py test_parser.py .env file_gui_tk.py My file import statements: in core/core_os.py: from events.event import post_event in core/dir_parser.py: from core.core_os import compare_file_bytes, check_dir from events.event import post_event To run/debug any file within the project directory, I added a top level .env file with contents: PYTHONPATH="./" Added this statement to the launch.json file: "env": {"PYTHONPATH": "/testexample"}, And added this to the settings.json file "terminal.integrated.env.windows": {"PYTHONPATH": "./",} Now I can run and debug any file and VS Code finds the import dependencies within the project. I haven't tried this with a project dir structure more than two levels deep.
VSCode 1.39.x & Python 3.7.x: "ImportError: attempted relative import with no known parent package" - when started without debugging (CTRL+F5))
when running Python test from withing VS Code using CTRL+F5 I'm getting error message ImportError: attempted relative import with no known parent package when running Python test from VS Code terminal by using command line python test_HelloWorld.py I'm getting error message ValueError: attempted relative import beyong top-level package Here is the project structure How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA! [Update] I have got the following solution using sys.path correction: import sys from pathlib import Path sys.path[0] = str(Path(sys.path[0]).parent) but I guess there still could be a more effective solution without source code corrections by using some (VS Code) settings or Python running context/environment settings (files)?
[ "You're bumping into two issues. One is you're running your test file from within the directory it's written, and so Python doesn't know what .. represents. There are a couple of ways to fix this.\nOne is to take the solution that @lesiak proposed by changing the import to from solutions import helloWorldPackage but to execute your tests by running python tests/test_helloWorld.py. That will make sure that your project's top-level is in Python's search path and so it will see solutions.\nThe other solution is to open your project in VS Code one directory higher (whatever directory that contains solutions and tests). You will still need to change how you execute your code, though, so you are doing it from the top-level as I suggested above.\nEven better would be to either run your code using python -m tests.test_helloWorld, use the Python extension's Run command, or use the extension's Test Explorer. All of those options should help you with how to run your code (you will still need to either change the import or open the higher directory in VS Code).\n", "Do not use relative import.\nSimply change it to\nfrom solutions import helloWorldPackage as hw\n\nUpdate\nI initially tested this in PyCharm. PyCharm has a nice feature - it adds content root and source roots to PYTHONPATH (both options are configurable).\nYou can achieve the same effect in VS Code by adding a .env file:\nPYTHONPATH=.:${PYTHONPATH}\n\nNow, the project directory will be in the PYTHONPATH for every tool that is launched via VS Code. Now Ctrl+F5 works fine.\n", "\nSetup a main module and its source packages paths\n\nSolution found at:\n\nhttps://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode\nhttps://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to\n\nWhich also provide a neat in-depth video explanation\n\nThe solution to the attempted relative import with no known parent package issue, which is especially tricky in VScode (in opposite to Pycharm that provide GUI tools to flag folders as package), is to:\nAdd configuration files for the VScode debugger\n\nId Est add launch.json as a Module (this will always execute the file given in \"module\" key) and settings.json inside the MyProjectRoot/.vscode folder (manually add them if it's not there yet, or be guided by VScode GUI for Run & Debug)\n\nlaunch.json setup\n\nId Est add an \"env\" key to launch.json containing an object with \"PYTHONPATH\" as key, and \"${workspaceFolder}/mysourcepackage\" as value\nfinal launch.json configuration\n\nsettings.json setup\n\nId Est add a \"python.analysis.extraPaths\" key to settings.json containing a list of paths for the debugger to be aware of, which in our case is one [\"${workspaceFolder}/mysourcepackage\"] as value (note that we put the string in a list only for the case in which we want to include other paths too, it is not needed for our specific example but it's still a standard de facto as I know)\nfinal settings.json configuration\n\nThis should be everything needed to both work by calling the script with python from the terminal and from the VScode debugger.\n", "An Answer From 2022\nHere's a potential approach from 2022. The issue is identified correctly and if you're using an IDE like VS Code, it doesn't automatically extend the python path and discover modules.\nOne way you can do this using an .env file that will automatically extend the python path. 
I used this website k0nze.dev repeatedly to find an answer and actually discovered another solution.\nHere are the drawbacks of the solution provided in the k0nze.dev solution:\n\nIt only extends the python path via the launch.json file which doesn't effect running python outside of the debugger in this case\nYou can only use the ${workspaceFolder} and other variables within an \"env\" variable in the launch.json, which gets overwritten in precedence by the existence of a .env file.\nThe solution works only within VS Code since it has to be written within the launch.json (- overall portability)\n\nThe .env File\nIn your example tests falls under it's own directory and has it's own init.py. In an IDE like VS Code, it's not going to automatically discover this directory and module. You can see this by creating the below script anywhere in your project and running it:\n_path.py\nfrom sys import path as pythonpath\n\nprint(\"\\n ,\".join(pythonpath))\n\nYou shouldn't see your ${workspaceFolder}/tests/ or if you do, it's because your _path.py script is sitting in that directory and python automatically adds the script path to pythonpath. To solve this issue across your project, you need to extend the python path using .env file across all files in your project.\nTo do this, use dot notation to indicate your ${workspaceFolder} in lieu of being able to actually use ${workspaceFolder}. You have to do dot notation because .env files do not do variable assignment like ${workspaceFolder}. Your env file should look like:\nWindows\nPYTHONPATH = \".\\\\tests\\\\;.\\\\\" \n\nMac / Linux / etc\nPYTHONPATH = \"./tests/:./\"\n\nwhere:\n\n; and : are the path separators for environment variables for windows and Mac respectively\n./tests/ and .\\tests\\ extend python path to the files within the module tests for import in the init.py\n./ and .\\ extend the python path to modules tests and presumably solutions? I don't know if solutions is a module but I'm going to run with it.\n\nTest It Out\nNow re-run your _path.py script and you should see permanent additions to your path. This works for deeply nested modules as well if your company has a more stringent project structure.\nVS Code\nIf you are using VS Code, you cannot use environment variables provided by VS Code in the .env file. This includes ${workspaceFolder}, which is very handy to extend a file path to your currently open folder. I've beaten myself up trying to figure out why it's not adding these environment variables to the path for a very long time now and it seems\nThe solution is instead to use dot notation to prepend the path by using relative file path. This allows the user to append a file path relative to the project structure and not their own file structure.\nFor Other IDE's\nThe reason the above is written for VS Code is because it automatically reads in the .env file every time you run a python file. 
This functionality is very handy and unless your IDE does this, you will need the help of the dotenv package.\nYou can actually see the location that your version of VS Code is looking for by searching for the below setting in your preferences:\nVSCode settings env file\nAnyways, to install the package you need to import .env files with, run:\npip install python-dotenv\n\nIn your python script, you need to run and import the below to get it to load the .env file as your environment variables:\nfrom dotenv import load_dotenv()\n\n# load env variables \nload_dotenv()\n\n\"\"\"\nThe rest of your code here\n\"\"\"\n\nThat's It\nCongrats on making it to the bottom. This topic nearly drove me insane when I went to tackle it but I think it's helpful to be elaborate and to understand the issue and how to tackle it without doing hacky sys.path appends or absolute file paths. This also gives you a way to test what's on your path and an explanation of why each path is added in your project structure.\n", "I was just going through this with VS Code and Python (using Win10) and found a solution. Below is my project folder. Files in folder \"core\" import functions from folder \"event\", and files in folder \"unit tests\" import functions from folder \"core\".\nI could run and debug the top-level file file_gui_tk.py within VS Code but I couldn't run/debug any of the files in the sub-folders due to import errors. I believe the issue is that when I try to run/debug those files, the working directory is no longer the project directory and consequently the import path declarations no longer work.\nFolder Structure:\ntestexample\n core\n __init__.py\n core_os.py\n dir_parser.py\n events\n __inits__.py\n event.py\n unit tests\n list_files.py\n test_parser.py\n.env\nfile_gui_tk.py\n\nMy file import statements:\nin core/core_os.py:\n from events.event import post_event\n\nin core/dir_parser.py:\nfrom core.core_os import compare_file_bytes, check_dir\nfrom events.event import post_event\n\nTo run/debug any file within the project directory, I added a top level .env file with contents:\nPYTHONPATH=\"./\" \n\nAdded this statement to the launch.json file:\n\"env\": {\"PYTHONPATH\": \"/testexample\"},\n\nAnd added this to the settings.json file\n\"terminal.integrated.env.windows\": {\"PYTHONPATH\": \"./\",}\n\nNow I can run and debug any file and VS Code finds the import dependencies within the project.\nI haven't tried this with a project dir structure more than two levels deep.\n" ]
[ 4, 3, 1, 0, 0 ]
[]
[]
[ "import", "package", "python", "python_unittest", "visual_studio_code" ]
stackoverflow_0058709973_import_package_python_python_unittest_visual_studio_code.txt
Q: Assigning lines in text files to a list in python I am making an app that stores its settings in a .txt file. I am able to get the line count, but I don't know how to store the text of one line in a variable. For example: linecount = 0 datainfile = [] with open("txt.txt" , "r") as t: linecount += 1 #How to add lines to datainfile? config1 = datainfile[0] A: First, it is always a good practice to use a context manager when opening a file (using the with keyword). Second, read the file into a list and address the line number by index. import os with open("config.txt", "r") as cf: file_lines = [line.replace(os.linesep, "") for line in cf.readlines()] Now each index in file_lines is relative to the line number in the file. A: This is how I did it: lines = [] with open("txt.txt", "r") as t: for i in t: lines.append(i) config1 = lines[0]
Assigning lines in text files to a list in python
I am making an app that stores its settings in a .txt file. I am able to get the line count, but I don't know how to store the text of one line in a variable. For example: linecount = 0 datainfile = [] with open("txt.txt" , "r") as t: linecount += 1 #How to add lines to datainfile? config1 = datainfile[0]
[ "First, it is always a good practice to use context manager when opening a file (using the with keyword). Second, read the file to a list and address the line number by index.\nimport os\n\n\nwith open(\"config.txt\", \"r\") as cf:\n file_lines = [line.replace(os.linesep, \"\") for line in cf.readlines()]\n\nNow each index in file_lines is relative to the line number in the file.\n", "This is how I did it:\nlines = [] \nwith open(\"txt.txt\", \"r\") as t:\n for i in t:\n lines.append(t)\nconfig1 = datainfile[0]\n\n" ]
[ 0, 0 ]
[]
[]
[ "file", "line", "python", "python_3.x", "variables" ]
stackoverflow_0074162036_file_line_python_python_3.x_variables.txt
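For completeness, the same settings file can be read in one step; a minimal pathlib-based sketch (my own phrasing rather than either answer's): Path.read_text() plus splitlines() yields one list entry per line with newlines already stripped.
from pathlib import Path

datainfile = Path("txt.txt").read_text().splitlines()  # one entry per line, newlines stripped

linecount = len(datainfile)
config1 = datainfile[0]  # first line of the settings file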
Q: Why glVertexAttribPointer throws 1282 error while trying to draw one point on screen with pyOpenGL and glfw? I have separated the program to three different files, but I don't understand why I get error on glVertexAttribPointer on line 70. I'm using Python 3.10.8 main.py import glfw import Shaders from OpenGL.GL import * from OpenGL.GLUT import * from Math_3d import Vector2f class Window: def __init__(self, width: int, height: int, title: str): if not glfw.init(): raise Exception("glfw can not be initialized") glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3) glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3) glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE) self._win = glfw.create_window(width, height, title, None, None) if not self._win: glfw.terminate() raise Exception("glfw window can not be created") glfw.set_window_pos(self._win, 400, 200) glfw.make_context_current(self._win) def createshaders(self): # Request program and shader slots from the GPU program = glCreateProgram() vertex = glCreateShader(GL_VERTEX_SHADER) fragment = glCreateShader(GL_FRAGMENT_SHADER) # Set shader sources glShaderSource(vertex, Shaders.vertex_code) glShaderSource(fragment, Shaders.fragment_code) # Compile shaders glCompileShader(vertex) glCompileShader(fragment) if not glGetShaderiv(vertex, GL_COMPILE_STATUS): report_shader = glGetShaderInfoLog(vertex) print(report_shader) raise RuntimeError("Vertex shader compilation error") if not glGetShaderiv(fragment, GL_COMPILE_STATUS): report_frag = glGetShaderInfoLog(fragment) print(report_frag) raise RuntimeError("Fragment shader compilation error") # Link objects to program glAttachShader(program, vertex) glAttachShader(program, fragment) glLinkProgram(program) if not glGetProgramiv(program, GL_LINK_STATUS): print(glGetProgramInfoLog(program)) raise RuntimeError('Linking error') # Get rid of shaders glDetachShader(program, vertex) glDetachShader(program, fragment) # Make default program to run glUseProgram(program) # Vertex Buffer Object # Create point vertex data v2f_1 = Vector2f(0.0, 0.0) # Request a buffer slot from GPU buffer = glGenBuffers(1) # Make this buffer the default one glBindBuffer(GL_ARRAY_BUFFER, buffer) strides = v2f_1.data.strides[0] loc = glGetAttribLocation(program, 'position') glEnableVertexAttribArray(loc) glVertexAttribPointer(loc, 2, GL_FLOAT, False, strides, None) # glBufferData(GL_ARRAY_BUFFER, v2f_1, v2f_1, GL_DYNAMIC_DRAW) def renderscene(self): while not glfw.window_should_close(self._win): glfw.poll_events() glClear(GL_COLOR_BUFFER_BIT) glDrawArrays(GL_POINTS, 0, 1) glfw.swap_buffers(self._win) glfw.terminate() if __name__ == '__main__': win = Window(1024, 768, "GLFW Window") win.createshaders() # Create and initialize shaders and initialize Vertex Buffer Object win.renderscene() # Swap buffer and render scene Shaders.py vertex_code = """ attribute vec2 position; void main() { gl_Position = vec4(position, 0.0, 1.0); } """ fragment_code = """ void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); } """ Math_3d.py import numpy as np class Vector2f: def __init__(self, x, y): self.data = np.array([x, y], dtype=np.float32) if __name__ == '__main__': vec2 = Vector2f(0.0, 0.0) print(vec2.data) print(type(vec2.data.strides[0])) print(vec2.data.strides[0]) I have tried to debug the line 70, but did not get any good result while using PyCharm. Any recommendations on this? Closest answers would be according to 61491497 and 56957118 what I am aiming for. A: You're using a Core profile OpenGL Context (glfw.OPENGL_CORE_PROFILE). 
Therefore you have to create a Vertex Array Object: class Window: # [...] def createshaders(self): # [...] v2f_1 = Vector2f(0.0, 0.0) # Request a buffer slot from GPU buffer = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, buffer) strides = v2f_1.data.strides[0] glBufferData(GL_ARRAY_BUFFER, v2f_1.data, GL_DYNAMIC_DRAW) vao = glGenVertexArrays(1) glBindVertexArray(vao) loc = glGetAttribLocation(program, 'position') glEnableVertexAttribArray(loc) glVertexAttribPointer(loc, 2, GL_FLOAT, False, strides, None) Additionally, you need to change either the background color or the fragment color, because you won't be able to see a black point on a black background. e.g. red: gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
Why glVertexAttribPointer throws 1282 error while trying to draw one point on screen with pyOpenGL and glfw?
I have separated the program to three different files, but I don't understand why I get error on glVertexAttribPointer on line 70. I'm using Python 3.10.8 main.py import glfw import Shaders from OpenGL.GL import * from OpenGL.GLUT import * from Math_3d import Vector2f class Window: def __init__(self, width: int, height: int, title: str): if not glfw.init(): raise Exception("glfw can not be initialized") glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3) glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3) glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE) self._win = glfw.create_window(width, height, title, None, None) if not self._win: glfw.terminate() raise Exception("glfw window can not be created") glfw.set_window_pos(self._win, 400, 200) glfw.make_context_current(self._win) def createshaders(self): # Request program and shader slots from the GPU program = glCreateProgram() vertex = glCreateShader(GL_VERTEX_SHADER) fragment = glCreateShader(GL_FRAGMENT_SHADER) # Set shader sources glShaderSource(vertex, Shaders.vertex_code) glShaderSource(fragment, Shaders.fragment_code) # Compile shaders glCompileShader(vertex) glCompileShader(fragment) if not glGetShaderiv(vertex, GL_COMPILE_STATUS): report_shader = glGetShaderInfoLog(vertex) print(report_shader) raise RuntimeError("Vertex shader compilation error") if not glGetShaderiv(fragment, GL_COMPILE_STATUS): report_frag = glGetShaderInfoLog(fragment) print(report_frag) raise RuntimeError("Fragment shader compilation error") # Link objects to program glAttachShader(program, vertex) glAttachShader(program, fragment) glLinkProgram(program) if not glGetProgramiv(program, GL_LINK_STATUS): print(glGetProgramInfoLog(program)) raise RuntimeError('Linking error') # Get rid of shaders glDetachShader(program, vertex) glDetachShader(program, fragment) # Make default program to run glUseProgram(program) # Vertex Buffer Object # Create point vertex data v2f_1 = Vector2f(0.0, 0.0) # Request a buffer slot from GPU buffer = glGenBuffers(1) # Make this buffer the default one glBindBuffer(GL_ARRAY_BUFFER, buffer) strides = v2f_1.data.strides[0] loc = glGetAttribLocation(program, 'position') glEnableVertexAttribArray(loc) glVertexAttribPointer(loc, 2, GL_FLOAT, False, strides, None) # glBufferData(GL_ARRAY_BUFFER, v2f_1, v2f_1, GL_DYNAMIC_DRAW) def renderscene(self): while not glfw.window_should_close(self._win): glfw.poll_events() glClear(GL_COLOR_BUFFER_BIT) glDrawArrays(GL_POINTS, 0, 1) glfw.swap_buffers(self._win) glfw.terminate() if __name__ == '__main__': win = Window(1024, 768, "GLFW Window") win.createshaders() # Create and initialize shaders and initialize Vertex Buffer Object win.renderscene() # Swap buffer and render scene Shaders.py vertex_code = """ attribute vec2 position; void main() { gl_Position = vec4(position, 0.0, 1.0); } """ fragment_code = """ void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); } """ Math_3d.py import numpy as np class Vector2f: def __init__(self, x, y): self.data = np.array([x, y], dtype=np.float32) if __name__ == '__main__': vec2 = Vector2f(0.0, 0.0) print(vec2.data) print(type(vec2.data.strides[0])) print(vec2.data.strides[0]) I have tried to debug the line 70, but did not get any good result while using PyCharm. Any recommendations on this? Closest answers would be according to 61491497 and 56957118 what I am aiming for.
[ "You're using a Core profile OpenGL Context (glfw.OPENGL_CORE_PROFILE). Therefore you have to create a Vertex Array Obejct:\nclass Window:\n # [...]\n\n def createshaders(self):\n # [...]\n\n v2f_1 = Vector2f(0.0, 0.0)\n # Request a buffer slot from GPU\n buffer = glGenBuffers(1)\n glBindBuffer(GL_ARRAY_BUFFER, buffer)\n strides = v2f_1.data.strides[0]\n glBufferData(GL_ARRAY_BUFFER, v2f_1.data, GL_DYNAMIC_DRAW)\n \n vao = glGenVertexArrays(1)\n glBindVertexArray(vao)\n \n loc = glGetAttribLocation(program, 'position')\n glEnableVertexAttribArray(loc)\n glVertexAttribPointer(loc, 2, GL_FLOAT, False, strides, None)\n\n\nAdditionally, you need to change either the background color or the fragment color, because you won't be able to see a black point on a black background. e.g. red:\ngl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n\n" ]
[ 1 ]
[]
[]
[ "opengl", "pyopengl", "python", "python_3.x" ]
stackoverflow_0074668037_opengl_pyopengl_python_python_3.x.txt
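For reference, error code 1282 is GL_INVALID_OPERATION (0x0502). PyOpenGL normally raises a GLError naming the failing call on its own, but a small hedged helper like the sketch below (the check_gl name is mine) can also localize the failing call by polling glGetError() after each suspect line in createshaders():
from OpenGL.GL import glGetError, GL_NO_ERROR

def check_gl(label):
    # 1282 == 0x0502 == GL_INVALID_OPERATION
    err = glGetError()
    if err != GL_NO_ERROR:
        print(f"{label}: OpenGL error {err}")

# Example usage after each GL call in createshaders():
# glBindBuffer(GL_ARRAY_BUFFER, buffer); check_gl("glBindBuffer")
# glVertexAttribPointer(loc, 2, GL_FLOAT, False, strides, None); check_gl("glVertexAttribPointer")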
Q: Generate random binary matrix constrained to no null row I want to generate a random binary matrix, so I'm using W=np.random.binomial(1, p, (n,n)). It works fine, but I want a constraint that no row consists only of 0s. I created the following function: def random_matrix(p,n): m=0 while m==0: W = np.random.binomial(1, p, (n,n)) m=min(W.sum(axis=1)) return W It also works fine, but it seems too inefficient to me. Is there a faster way to create this constraint? A: One way to make the process of generating a random binary matrix with no rows of only 0s more efficient is to use the np.random.choice function to randomly choose a non-zero entry from each row of the matrix and set its value to 1. This avoids the need to use a while loop and repeatedly check for rows of only 0s, which can be computationally expensive for large matrices. Here is an example of how you could use the np.random.choice function to generate a random binary matrix with no rows of only 0s: W = np.random.binomial(1, p, (n,n)) for row in W: nonzero_indices = np.where(row != 0)[0] if nonzero_indices.size == 0: random_index = np.random.randint(0, n) row[random_index] = 1 else: random_index = np.random.choice(nonzero_indices) row[random_index] = 1 A: When the matrix is large, regenerating the entire matrix just because a few rows are full of zeros is not efficient. It should be statistically safe to only regenerate the target rows. Here is an example: def random_matrix(p,n): W = np.random.binomial(1, p, (n,n)) while True: null_rows = np.where(W.sum(axis=1) == 0)[0] # If there is no null row, then m>0 so we stop the replacement if null_rows.size == 0: break # Replace only the null rows W[null_rows] = np.random.binomial(1, p, (null_rows.shape[0],n)) return W Even faster solutions There is an even more efficient approach when p is close to 0 (when p is close to 1, then the above function is already fast). Indeed, a binomial random variable with 0-1 values is a Bernoulli random variable. The sum of Bernoulli random values with a probability p repeated many times is a binomial random value! Thus, you can generate the sums for all rows using S = np.random.binomial(n, p, n), then apply the above method to remove null values and then build the final matrix by generating S[i] ones for the ith row and use np.random.shuffle to randomize the order of the 0-1 values in each row. This method solves conflicts much more efficiently than all the others. Indeed, it does not need to generate the full row to check whether it is full of zeros. It is n times faster to solve conflicts! If this is not enough, you can use the uint8 datatype to generate W. Indeed, the memory is slow, so generating smaller matrices is generally faster, not to mention it takes less RAM. If this is not enough, you can generate S item by item using the Numba JIT compiler and a basic loop. This should be faster since there is no temporary array to create except the final one. For large matrices, this algorithm can even be parallelized (every row can be independently generated). This last solution should be close to optimal.
Generate random binary matrix constrained to no null row
I want to generate a random binary matrix, so I'm using W=np.random.binomial(1, p, (n,n)). It works fine, but I want a constraint that no row consists only of 0s. I created the following function: def random_matrix(p,n): m=0 while m==0: W = np.random.binomial(1, p, (n,n)) m=min(W.sum(axis=1)) return W It also works fine, but it seems too inefficient to me. Is there a faster way to create this constraint?
[ "One way to make the process of generating a random binary matrix with no rows of only 0s more efficient is to use the np.random.choice function to randomly choose a non-zero entry from each row of the matrix and set its value to 1. This avoids the need to use a while loop and repeatedly check for rows of only 0s, which can be computationally expensive for large matrices.\nHere is an example of how you could use the np.random.choice function to generate a random binary matrix with no rows of only 0s:\nW = np.random.binomial(1, p, (n,n))\nfor row in W:\n nonzero_indices = np.where(row != 0)[0]\n if nonzero_indices.size == 0:\n random_index = np.random.randint(0, n)\n row[random_index] = 1\n else:\n random_index = np.random.choice(nonzero_indices)\n row[random_index] = 1\n\n", "When the matrix is large, regenerating the entire matrix just because few rows are full of zeros is not efficient. It should be statistically safe to only regenerate the target rows. Here is an example:\ndef random_matrix(p,n):\n W = np.random.binomial(1, p, (n,n))\n\n while True:\n null_rows = np.where(W.sum(axis=1) == 0)[0]\n # If there is no null row, then m>0 so we stop the replacement\n if null_rows.size == 0:\n break\n # Replace only the null rows\n W[null_rows] = np.random.binomial(1, p, (null_rows.shape[0],n))\n\n return W\n\n\nEven faster solutions\nThere is an even more efficient approach when p is close to 0 (when p is close to 1, then the above function is already fast). Indeed, a binomial random variable with 0-1 values is a Bernoulli random variable. The sum of Bernoulli random values with a probability p repeated many times is a binomial random value! Thus, you can generate the sum for all row using S = np.random.binomial(n, p, (n,n)), then apply the above method to remove null values and then build the final matrix by generating S[i] one values for the ith row and use np.shuffle so to randomize the order of the 0-1 values in each row. This method solve conflicts much more efficiently than all others. Indeed, it does not need to generate the full row to check if it is full of zeros. It is n times faster to solve conflicts!\nIf this is not enough, you can use the uint8 datatype to generate W. Indeed, the memory is slow so generating smaller matrices is generally faster, not to mention it takes less RAM.\nIf this is not enough, you can generate S item per item using Numba JIT compiler and a basic loop. This should be faster since there is no temporary array to create except the final one. For large matrices, this algorithm can even be parallelized (every row can be independently generated). This last solution should be close to be optimal.\n" ]
[ 0, 0 ]
[]
[]
[ "loops", "matrix", "numpy", "python", "random" ]
stackoverflow_0074667612_loops_matrix_numpy_python_random.txt
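A hedged sketch of the "row sums first" idea from the second answer (the function name and the use of numpy's Generator API are my choices): draw each row's count of ones as a Binomial(n, p) value, resample any zero counts, then place that many ones in each row and shuffle their positions.
import numpy as np

def random_matrix_fast(p, n, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    s = rng.binomial(n, p, size=n)  # number of ones per row
    while True:  # resample zero sums, mirroring the row-replacement trick above
        zero = (s == 0)
        if not zero.any():
            break
        s[zero] = rng.binomial(n, p, size=int(zero.sum()))
    w = np.zeros((n, n), dtype=np.uint8)
    for i, k in enumerate(s):
        w[i, :k] = 1
        rng.shuffle(w[i])  # randomize the positions of the ones within the row
    return w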
Q: What is the Python equivalent of static variables inside a function? What is the idiomatic Python equivalent of this C/C++ code? void foo() { static int counter = 0; counter++; printf("counter is %d\n", counter); } specifically, how does one implement the static member at the function level, as opposed to the class level? And does placing the function into a class change anything? A: A bit reversed, but this should work: def foo(): foo.counter += 1 print "Counter is %d" % foo.counter foo.counter = 0 If you want the counter initialization code at the top instead of the bottom, you can create a decorator: def static_vars(**kwargs): def decorate(func): for k in kwargs: setattr(func, k, kwargs[k]) return func return decorate Then use the code like this: @static_vars(counter=0) def foo(): foo.counter += 1 print "Counter is %d" % foo.counter It'll still require you to use the foo. prefix, unfortunately. (Credit: @ony) A: You can add attributes to a function, and use it as a static variable. def myfunc(): myfunc.counter += 1 print myfunc.counter # attribute must be initialized myfunc.counter = 0 Alternatively, if you don't want to setup the variable outside the function, you can use hasattr() to avoid an AttributeError exception: def myfunc(): if not hasattr(myfunc, "counter"): myfunc.counter = 0 # it doesn't exist yet, so initialize it myfunc.counter += 1 Anyway static variables are rather rare, and you should find a better place for this variable, most likely inside a class. A: One could also consider: def foo(): try: foo.counter += 1 except AttributeError: foo.counter = 1 Reasoning: much pythonic ("ask for forgiveness not permission") use exception (thrown only once) instead of if branch (think StopIteration exception) A: Many people have already suggested testing 'hasattr', but there's a simpler answer: def func(): func.counter = getattr(func, 'counter', 0) + 1 No try/except, no testing hasattr, just getattr with a default. A: Other answers have demonstrated the way you should do this. Here's a way you shouldn't: >>> def foo(counter=[0]): ... counter[0] += 1 ... print("Counter is %i." % counter[0]); ... >>> foo() Counter is 1. >>> foo() Counter is 2. >>> Default values are initialized only when the function is first evaluated, not each time it is executed, so you can use a list or any other mutable object to store static values. A: Python doesn't have static variables but you can fake it by defining a callable class object and then using it as a function. Also see this answer. class Foo(object): # Class variable, shared by all instances of this class counter = 0 def __call__(self): Foo.counter += 1 print Foo.counter # Create an object instance of class "Foo," called "foo" foo = Foo() # Make calls to the "__call__" method, via the object's name itself foo() #prints 1 foo() #prints 2 foo() #prints 3 Note that __call__ makes an instance of a class (object) callable by its own name. That's why calling foo() above calls the class' __call__ method. From the documentation: Instances of arbitrary classes can be made callable by defining a __call__() method in their class. A: Here is a fully encapsulated version that doesn't require an external initialization call: def fn(): fn.counter=vars(fn).setdefault('counter',-1) fn.counter+=1 print (fn.counter) In Python, functions are objects and we can simply add, or monkey patch, member variables to them via the special attribute __dict__. The built-in vars() returns the special attribute __dict__. 
EDIT: Note, unlike the alternative try:except AttributeError answer, with this approach the variable will always be ready for the code logic following initialization. I think the try:except AttributeError alternative to the following will be less DRY and/or have awkward flow: def Fibonacci(n): if n<2: return n Fibonacci.memo=vars(Fibonacci).setdefault('memo',{}) # use static variable to hold a results cache return Fibonacci.memo.setdefault(n,Fibonacci(n-1)+Fibonacci(n-2)) # lookup result in cache, if not available then calculate and store it EDIT2: I only recommend the above approach when the function will be called from multiple locations. If instead the function is only called in one place, it's better to use nonlocal: def TheOnlyPlaceStaticFunctionIsCalled(): memo={} def Fibonacci(n): nonlocal memo # required in Python3. Python2 can see memo if n<2: return n return memo.setdefault(n,Fibonacci(n-1)+Fibonacci(n-2)) ... print (Fibonacci(200)) ... A: Use a generator function to generate an iterator. def foo_gen(): n = 0 while True: n+=1 yield n Then use it like foo = foo_gen().next for i in range(0,10): print foo() If you want an upper limit: def foo_gen(limit=100000): n = 0 while n < limit: n+=1 yield n If the iterator terminates (like the example above), you can also loop over it directly, like for i in foo_gen(20): print i Of course, in these simple cases it's better to use xrange :) Here is the documentation on the yield statement. A: Other solutions attach a counter attribute to the function, usually with convoluted logic to handle the initialization. This is inappropriate for new code. In Python 3, the right way is to use a nonlocal statement: counter = 0 def foo(): nonlocal counter counter += 1 print(f'counter is {counter}') See PEP 3104 for the specification of the nonlocal statement. If the counter is intended to be private to the module, it should be named _counter instead. A: Using an attribute of a function as static variable has some potential drawbacks: Every time you want to access the variable, you have to write out the full name of the function. Outside code can access the variable easily and mess with the value. Idiomatic python for the second issue would probably be naming the variable with a leading underscore to signal that it is not meant to be accessed, while keeping it accessible after the fact. Using closures An alternative would be a pattern using lexical closures, which are supported with the nonlocal keyword in python 3. def make_counter(): i = 0 def counter(): nonlocal i i = i + 1 return i return counter counter = make_counter() Sadly I know no way to encapsulate this solution into a decorator. Using an internal state parameter Another option might be an undocumented parameter serving as a mutable value container. def counter(*, _i=[0]): _i[0] += 1 return _i[0] This works, because default arguments are evaluated when the function is defined, not when it is called. Cleaner might be to have a container type instead of the list, e.g. def counter(*, _i = Mutable(0)): _i.value += 1 return _i.value but I am not aware of a builtin type, that clearly communicates the purpose. A: A little bit more readable, but more verbose (Zen of Python: explicit is better than implicit): >>> def func(_static={'counter': 0}): ... _static['counter'] += 1 ... print _static['counter'] ... >>> func() 1 >>> func() 2 >>> See here for an explanation of how this works. 
A: _counter = 0 def foo(): global _counter _counter += 1 print 'counter is', _counter Python customarily uses underscores to indicate private variables. The only reason in C to declare the static variable inside the function is to hide it outside the function, which is not really idiomatic Python. A: def staticvariables(**variables): def decorate(function): for variable in variables: setattr(function, variable, variables[variable]) return function return decorate @staticvariables(counter=0, bar=1) def foo(): print(foo.counter) print(foo.bar) Much like vincent's code above, this would be used as a function decorator and static variables must be accessed with the function name as a prefix. The advantage of this code (although admittedly anyone might be smart enough to figure it out) is that you can have multiple static variables and initialise them in a more conventional manner. A: After trying several approaches I ended up using an improved version of @warvariuc's answer: import types def func(_static=types.SimpleNamespace(counter=0)): _static.counter += 1 print(_static.counter) A: The idiomatic way is to use a class, which can have attributes. If you need instances to not be separate, use a singleton. There are a number of ways you could fake or munge "static" variables into Python (one not mentioned so far is to have a mutable default argument), but this is not the Pythonic, idiomatic way to do it. Just use a class. Or possibly a generator, if your usage pattern fits. A: A static variable inside a Python method class Count: def foo(self): try: self.foo.__func__.counter += 1 except AttributeError: self.foo.__func__.counter = 1 print self.foo.__func__.counter m = Count() m.foo() # 1 m.foo() # 2 m.foo() # 3 A: Another (not recommended!) twist on the callable object like https://stackoverflow.com/a/279598/916373, if you don't mind using a funky call signature, would be to do class foo(object): counter = 0; @staticmethod def __call__(): foo.counter += 1 print "counter is %i" % foo.counter >>> foo()() counter is 1 >>> foo()() counter is 2 A: Solution: n += 1 def foo(): foo.__dict__.setdefault('count', 0) foo.count += 1 return foo.count A: A global declaration provides this functionality. In the example below (python 3.5 or greater to use the "f"), the counter variable is defined outside of the function. Defining it as global in the function signifies that the "global" version outside of the function should be made available to the function. So each time the function runs, it modifies the value outside the function, preserving it beyond the function. counter = 0 def foo(): global counter counter += 1 print("counter is {}".format(counter)) foo() #output: "counter is 1" foo() #output: "counter is 2" foo() #output: "counter is 3" A: Using a decorator and a closure The following decorator can be used to create static function variables. It replaces the declared function with the return from itself. This implies that the decorated function must return a function. def static_inner_self(func): return func() Then use the decorator on a function that returns another function with a captured variable: @static_inner_self def foo(): counter = 0 def foo(): nonlocal counter counter += 1 print(f"counter is {counter}") return foo nonlocal is required, otherwise Python thinks that the counter variable is a local variable instead of a captured variable. Python behaves like that because of the variable assignment counter += 1. Any assignment in a function makes Python think that the variable is local. 
If you are not assigning to the variable in the inner function, then you can ignore the nonlocal statement, for example, in this function I use to indent lines of a string, in which Python can infer that the variable is nonlocal: @static_inner_self def indent_lines(): import re re_start_line = re.compile(r'^', flags=re.MULTILINE) def indent_lines(text, indent=2): return re_start_line.sub(" "*indent, text) return indent_lines P.S. There is a deleted answer that proposed the same. I don't know why the author deleted it. https://stackoverflow.com/a/23366737/195417 A: Prompted by this question, may I present another alternative which might be a bit nicer to use and will look the same for both methods and functions: @static_var2('seed',0) def funccounter(statics, add=1): statics.seed += add return statics.seed print funccounter() #1 print funccounter(add=2) #3 print funccounter() #4 class ACircle(object): @static_var2('seed',0) def counter(statics, self, add=1): statics.seed += add return statics.seed c = ACircle() print c.counter() #1 print c.counter(add=2) #3 print c.counter() #4 d = ACircle() print d.counter() #5 print d.counter(add=2) #7 print d.counter() #8     If you like the usage, here's the implementation: class StaticMan(object): def __init__(self): self.__dict__['_d'] = {} def __getattr__(self, name): return self.__dict__['_d'][name] def __getitem__(self, name): return self.__dict__['_d'][name] def __setattr__(self, name, val): self.__dict__['_d'][name] = val def __setitem__(self, name, val): self.__dict__['_d'][name] = val def static_var2(name, val): def decorator(original): if not hasattr(original, ':staticman'): def wrapped(*args, **kwargs): return original(getattr(wrapped, ':staticman'), *args, **kwargs) setattr(wrapped, ':staticman', StaticMan()) f = wrapped else: f = original #already wrapped getattr(f, ':staticman')[name] = val return f return decorator A: Instead of creating a function having a static local variable, you can always create what is called a "function object" and give it a standard (non-static) member variable. Since you gave an example written in C++, I will first explain what a "function object" is in C++. A "function object" is simply any class with an overloaded operator(). Instances of the class will behave like functions. For example, you can write int x = square(5); even if square is an object (with overloaded operator()) and not technically a "function." You can give a function-object any of the features that you could give a class object. 
# C++ function object class Foo_class { private: int counter; public: Foo_class() { counter = 0; } void operator() () { counter++; printf("counter is %d\n", counter); } }; Foo_class foo; In Python, we can also overload operator() except that the method is instead named __call__: Here is a class definition: class Foo_class: def __init__(self): # __init__ is similar to a C++ class constructor self.counter = 0 # self.counter is like a static member # variable of a function named "foo" def __call__(self): # overload operator() self.counter += 1 print("counter is %d" % self.counter); foo = Foo_class() # call the constructor Here is an example of the class being used: from foo import foo for i in range(0, 5): foo() # function call The output printed to the console is: counter is 1 counter is 2 counter is 3 counter is 4 counter is 5 If you want your function to take input arguments, you can add those to __call__ as well: # FILE: foo.py - - - - - - - - - - - - - - - - - - - - - - - - - class Foo_class: def __init__(self): self.counter = 0 def __call__(self, x, y, z): # overload operator() self.counter += 1 print("counter is %d" % self.counter); print("x, y, z, are %d, %d, %d" % (x, y, z)); foo = Foo_class() # call the constructor # FILE: main.py - - - - - - - - - - - - - - - - - - - - - - - - - - - - from foo import foo for i in range(0, 5): foo(7, 8, 9) # function call # Console Output - - - - - - - - - - - - - - - - - - - - - - - - - - counter is 1 x, y, z, are 7, 8, 9 counter is 2 x, y, z, are 7, 8, 9 counter is 3 x, y, z, are 7, 8, 9 counter is 4 x, y, z, are 7, 8, 9 counter is 5 x, y, z, are 7, 8, 9 A: This answer builds on @claudiu 's answer. I found that my code was getting less clear when I always had to prepend the function name, whenever I intended to access a static variable. Namely, in my function code I would prefer to write: print(statics.foo) instead of print(my_function_name.foo) So, my solution is to: add a statics attribute to the function in the function scope, add a local variable statics as an alias to my_function.statics from bunch import * def static_vars(**kwargs): def decorate(func): statics = Bunch(**kwargs) setattr(func, "statics", statics) return func return decorate @static_vars(name = "Martin") def my_function(): statics = my_function.statics print("Hello, {0}".format(statics.name)) Remark My method uses a class named Bunch, which is a dictionary that supports attribute-style access, a la JavaScript (see the original article about it, around 2000) It can be installed via pip install bunch It can also be hand-written like so: class Bunch(dict): def __init__(self, **kw): dict.__init__(self,kw) self.__dict__ = self A: I personally prefer the following to decorators. To each their own. def staticize(name, factory): """Makes a pseudo-static variable in calling function. If name `name` exists in calling function, return it. Otherwise, saves return value of `factory()` in name `name` of calling function and return it. :param name: name to use to store static object in calling function :type name: String :param factory: used to initialize name `name` in calling function :type factory: function :rtype: `type(factory())` >>> def steveholt(z): ... a = staticize('a', list) ... a.append(z) >>> steveholt.a Traceback (most recent call last): ... 
AttributeError: 'function' object has no attribute 'a' >>> steveholt(1) >>> steveholt.a [1] >>> steveholt('a') >>> steveholt.a [1, 'a'] >>> steveholt.a = [] >>> steveholt.a [] >>> steveholt('zzz') >>> steveholt.a ['zzz'] """ from inspect import stack # get scope enclosing calling function calling_fn_scope = stack()[2][0] # get calling function calling_fn_name = stack()[1][3] calling_fn = calling_fn_scope.f_locals[calling_fn_name] if not hasattr(calling_fn, name): setattr(calling_fn, name, factory()) return getattr(calling_fn, name) A: Building on Daniel's answer (additions): class Foo(object): counter = 0 def __call__(self, inc_value=0): Foo.counter += inc_value return Foo.counter foo = Foo() def use_foo(x,y): if(x==5): foo(2) elif(y==7): foo(3) if(foo() == 10): print("yello") use_foo(5,1) use_foo(5,1) use_foo(1,7) use_foo(1,7) use_foo(1,1) The reason why I wanted to add this part is that static variables are used not only for incrementing by some value, but also to check if the static var is equal to some value, as a real-life example. The static variable is still protected and used only within the scope of the function use_foo() In this example, a call to foo() functions exactly as (with respect to the corresponding c++ equivalent): stat_c +=9; // in c++ foo(9) #python equiv if(stat_c==10){ //do something} // c++ if(foo() == 10): # python equiv #add code here # python equiv Output : yello yello if class Foo is defined restrictively as a singleton class, that would be ideal. This would make it more pythonic. A: I write a simple function to use static variables: def Static(): ### get the func object by which Static() is called. from inspect import currentframe, getframeinfo caller = currentframe().f_back func_name = getframeinfo(caller)[2] # print(func_name) caller = caller.f_back func = caller.f_locals.get( func_name, caller.f_globals.get( func_name ) ) class StaticVars: def has(self, varName): return hasattr(self, varName) def declare(self, varName, value): if not self.has(varName): setattr(self, varName, value) if hasattr(func, "staticVars"): return func.staticVars else: # add an attribute to func func.staticVars = StaticVars() return func.staticVars How to use: def myfunc(arg): if Static().has('test1'): Static().test += 1 else: Static().test = 1 print(Static().test) # declare() only takes effect in the first time for each static variable. Static().declare('test2', 1) print(Static().test2) Static().test2 += 1 A: Miguel Angelo's self-redefinition solution is even possible without any decorator: def fun(increment=1): global fun counter = 0 def fun(increment=1): nonlocal counter counter += increment print(counter) fun(increment) fun() #=> 1 fun() #=> 2 fun(10) #=> 12 The second line has to be adapted to get a limited scope: def outerfun(): def innerfun(increment=1): nonlocal innerfun counter = 0 def innerfun(increment=1): nonlocal counter counter += increment print(counter) innerfun(increment) innerfun() #=> 1 innerfun() #=> 2 innerfun(10) #=> 12 outerfun() The plus of the decorator is that you don't have to pay extra attention to the scope of your construction.
What is the Python equivalent of static variables inside a function?
What is the idiomatic Python equivalent of this C/C++ code? void foo() { static int counter = 0; counter++; printf("counter is %d\n", counter); } specifically, how does one implement the static member at the function level, as opposed to the class level? And does placing the function into a class change anything?
[ "A bit reversed, but this should work:\ndef foo():\n foo.counter += 1\n print \"Counter is %d\" % foo.counter\nfoo.counter = 0\n\nIf you want the counter initialization code at the top instead of the bottom, you can create a decorator:\ndef static_vars(**kwargs):\n def decorate(func):\n for k in kwargs:\n setattr(func, k, kwargs[k])\n return func\n return decorate\n\nThen use the code like this:\n@static_vars(counter=0)\ndef foo():\n foo.counter += 1\n print \"Counter is %d\" % foo.counter\n\nIt'll still require you to use the foo. prefix, unfortunately.\n(Credit: @ony)\n", "You can add attributes to a function, and use it as a static variable.\ndef myfunc():\n myfunc.counter += 1\n print myfunc.counter\n\n# attribute must be initialized\nmyfunc.counter = 0\n\nAlternatively, if you don't want to setup the variable outside the function, you can use hasattr() to avoid an AttributeError exception:\ndef myfunc():\n if not hasattr(myfunc, \"counter\"):\n myfunc.counter = 0 # it doesn't exist yet, so initialize it\n myfunc.counter += 1\n\nAnyway static variables are rather rare, and you should find a better place for this variable, most likely inside a class.\n", "One could also consider:\ndef foo():\n try:\n foo.counter += 1\n except AttributeError:\n foo.counter = 1\n\nReasoning:\n\nmuch pythonic (\"ask for forgiveness not permission\")\nuse exception (thrown only once) instead of if branch (think StopIteration exception)\n\n", "Many people have already suggested testing 'hasattr', but there's a simpler answer:\ndef func():\n func.counter = getattr(func, 'counter', 0) + 1\n\nNo try/except, no testing hasattr, just getattr with a default.\n", "Other answers have demonstrated the way you should do this. Here's a way you shouldn't:\n>>> def foo(counter=[0]):\n... counter[0] += 1\n... print(\"Counter is %i.\" % counter[0]);\n... \n>>> foo()\nCounter is 1.\n>>> foo()\nCounter is 2.\n>>> \n\nDefault values are initialized only when the function is first evaluated, not each time it is executed, so you can use a list or any other mutable object to store static values.\n", "Python doesn't have static variables but you can fake it by defining a callable class object and then using it as a function. Also see this answer.\nclass Foo(object):\n # Class variable, shared by all instances of this class\n counter = 0\n\n def __call__(self):\n Foo.counter += 1\n print Foo.counter\n\n# Create an object instance of class \"Foo,\" called \"foo\"\nfoo = Foo()\n\n# Make calls to the \"__call__\" method, via the object's name itself\nfoo() #prints 1\nfoo() #prints 2\nfoo() #prints 3\n\nNote that __call__ makes an instance of a class (object) callable by its own name. That's why calling foo() above calls the class' __call__ method. From the documentation:\n\nInstances of arbitrary classes can be made callable by defining a __call__() method in their class.\n\n", "Here is a fully encapsulated version that doesn't require an external initialization call:\ndef fn():\n fn.counter=vars(fn).setdefault('counter',-1)\n fn.counter+=1\n print (fn.counter)\n\nIn Python, functions are objects and we can simply add, or monkey patch, member variables to them via the special attribute __dict__. The built-in vars() returns the special attribute __dict__. \nEDIT: Note, unlike the alternative try:except AttributeError answer, with this approach the variable will always be ready for the code logic following initialization. 
I think the try:except AttributeError alternative to the following will be less DRY and/or have awkward flow: \ndef Fibonacci(n):\n if n<2: return n\n Fibonacci.memo=vars(Fibonacci).setdefault('memo',{}) # use static variable to hold a results cache\n return Fibonacci.memo.setdefault(n,Fibonacci(n-1)+Fibonacci(n-2)) # lookup result in cache, if not available then calculate and store it\n\nEDIT2: I only recommend the above approach when the function will be called from multiple locations. If instead the function is only called in one place, it's better to use nonlocal:\ndef TheOnlyPlaceStaticFunctionIsCalled():\n memo={}\n def Fibonacci(n):\n nonlocal memo # required in Python3. Python2 can see memo\n if n<2: return n\n return memo.setdefault(n,Fibonacci(n-1)+Fibonacci(n-2))\n ...\n print (Fibonacci(200))\n ...\n\n", "Use a generator function to generate an iterator.\ndef foo_gen():\n n = 0\n while True:\n n+=1\n yield n\n\nThen use it like\nfoo = foo_gen().next\nfor i in range(0,10):\n print foo()\n\nIf you want an upper limit:\ndef foo_gen(limit=100000):\n n = 0\n while n < limit:\n n+=1\n yield n\n\nIf the iterator terminates (like the example above), you can also loop over it directly, like\nfor i in foo_gen(20):\n print i\n\nOf course, in these simple cases it's better to use xrange :)\nHere is the documentation on the yield statement.\n", "Other solutions attach a counter attribute to the function, usually with convoluted logic to handle the initialization. This is inappropriate for new code.\nIn Python 3, the right way is to use a nonlocal statement:\ncounter = 0\ndef foo():\n nonlocal counter\n counter += 1\n print(f'counter is {counter}')\n\nSee PEP 3104 for the specification of the nonlocal statement.\nIf the counter is intended to be private to the module, it should be named _counter instead.\n", "Using an attribute of a function as static variable has some potential drawbacks:\n\nEvery time you want to access the variable, you have to write out the full name of the function.\nOutside code can access the variable easily and mess with the value.\n\nIdiomatic python for the second issue would probably be naming the variable with a leading underscore to signal that it is not meant to be accessed, while keeping it accessible after the fact.\nUsing closures\nAn alternative would be a pattern using lexical closures, which are supported with the nonlocal keyword in python 3.\ndef make_counter():\n i = 0\n def counter():\n nonlocal i\n i = i + 1\n return i\n return counter\ncounter = make_counter()\n\nSadly I know no way to encapsulate this solution into a decorator.\nUsing an internal state parameter\nAnother option might be an undocumented parameter serving as a mutable value container.\ndef counter(*, _i=[0]):\n _i[0] += 1\n return _i[0]\n\nThis works, because default arguments are evaluated when the function is defined, not when it is called.\nCleaner might be to have a container type instead of the list, e.g.\ndef counter(*, _i = Mutable(0)):\n _i.value += 1\n return _i.value\n\nbut I am not aware of a builtin type, that clearly communicates the purpose.\n", "A little bit more readable, but more verbose (Zen of Python: explicit is better than implicit):\n>>> def func(_static={'counter': 0}):\n... _static['counter'] += 1\n... 
print _static['counter']\n...\n>>> func()\n1\n>>> func()\n2\n>>>\n\nSee here for an explanation of how this works.\n", "\n_counter = 0\ndef foo():\n global _counter\n _counter += 1\n print 'counter is', _counter\n\nPython customarily uses underscores to indicate private variables. The only reason in C to declare the static variable inside the function is to hide it outside the function, which is not really idiomatic Python.\n", "def staticvariables(**variables):\n def decorate(function):\n for variable in variables:\n setattr(function, variable, variables[variable])\n return function\n return decorate\n\n@staticvariables(counter=0, bar=1)\ndef foo():\n print(foo.counter)\n print(foo.bar)\n\nMuch like vincent's code above, this would be used as a function decorator and static variables must be accessed with the function name as a prefix. The advantage of this code (although admittedly anyone might be smart enough to figure it out) is that you can have multiple static variables and initialise them in a more conventional manner.\n", "After trying several approaches I ended up using an improved version of @warvariuc's answer:\nimport types\n\ndef func(_static=types.SimpleNamespace(counter=0)):\n _static.counter += 1\n print(_static.counter)\n\n", "The idiomatic way is to use a class, which can have attributes. If you need instances to not be separate, use a singleton.\nThere are a number of ways you could fake or munge \"static\" variables into Python (one not mentioned so far is to have a mutable default argument), but this is not the Pythonic, idiomatic way to do it. Just use a class.\nOr possibly a generator, if your usage pattern fits.\n", "A static variable inside a Python method\nclass Count:\n def foo(self):\n try: \n self.foo.__func__.counter += 1\n except AttributeError: \n self.foo.__func__.counter = 1\n\n print self.foo.__func__.counter\n\nm = Count()\nm.foo() # 1\nm.foo() # 2\nm.foo() # 3\n\n", "Another (not recommended!) twist on the callable object like https://stackoverflow.com/a/279598/916373, if you don't mind using a funky call signature, would be to do\nclass foo(object):\n counter = 0;\n @staticmethod\n def __call__():\n foo.counter += 1\n print \"counter is %i\" % foo.counter\n\n\n>>> foo()()\ncounter is 1\n>>> foo()()\ncounter is 2\n\n", "Solution: n += 1 \ndef foo():\n foo.__dict__.setdefault('count', 0)\n foo.count += 1\n return foo.count\n\n", "A global declaration provides this functionality. In the example below (python 3.5 or greater to use the \"f\"), the counter variable is defined outside of the function. Defining it as global in the function signifies that the \"global\" version outside of the function should be made available to the function. So each time the function runs, it modifies the value outside the function, preserving it beyond the function.\ncounter = 0\n\ndef foo():\n global counter\n counter += 1\n print(\"counter is {}\".format(counter))\n\nfoo() #output: \"counter is 1\"\nfoo() #output: \"counter is 2\"\nfoo() #output: \"counter is 3\"\n\n", "Using a decorator and a closure\nThe following decorator can be used to create static function variables. It replaces the declared function with the return from itself. 
This implies that the decorated function must return a function.\ndef static_inner_self(func):\n return func()\n\nThen use the decorator on a function that returns another function with a captured variable:\n@static_inner_self\ndef foo():\n counter = 0\n def foo():\n nonlocal counter\n counter += 1\n print(f\"counter is {counter}\")\n return foo\n\nnonlocal is required, otherwise Python thinks that the counter variable is a local variable instead of a captured variable. Python behaves like that because of the variable assignment counter += 1. Any assignment in a function makes Python think that the variable is local.\nIf you are not assigning to the variable in the inner function, then you can ignore the nonlocal statement, for example, in this function I use to indent lines of a string, in which Python can infer that the variable is nonlocal:\n@static_inner_self\ndef indent_lines():\n import re\n re_start_line = re.compile(r'^', flags=re.MULTILINE)\n def indent_lines(text, indent=2):\n return re_start_line.sub(\" \"*indent, text)\n return indent_lines\n\nP.S. There is a deleted answer that proposed the same. I don't know why the author deleted it.\nhttps://stackoverflow.com/a/23366737/195417\n", "Prompted by this question, may I present another alternative which might be a bit nicer to use and will look the same for both methods and functions:\n@static_var2('seed',0)\ndef funccounter(statics, add=1):\n statics.seed += add\n return statics.seed\n\nprint funccounter() #1\nprint funccounter(add=2) #3\nprint funccounter() #4\n\nclass ACircle(object):\n @static_var2('seed',0)\n def counter(statics, self, add=1):\n statics.seed += add\n return statics.seed\n\nc = ACircle()\nprint c.counter() #1\nprint c.counter(add=2) #3\nprint c.counter() #4\nd = ACircle()\nprint d.counter() #5\nprint d.counter(add=2) #7\nprint d.counter() #8    \n\nIf you like the usage, here's the implementation:\nclass StaticMan(object):\n def __init__(self):\n self.__dict__['_d'] = {}\n\n def __getattr__(self, name):\n return self.__dict__['_d'][name]\n def __getitem__(self, name):\n return self.__dict__['_d'][name]\n def __setattr__(self, name, val):\n self.__dict__['_d'][name] = val\n def __setitem__(self, name, val):\n self.__dict__['_d'][name] = val\n\ndef static_var2(name, val):\n def decorator(original):\n if not hasattr(original, ':staticman'): \n def wrapped(*args, **kwargs):\n return original(getattr(wrapped, ':staticman'), *args, **kwargs)\n setattr(wrapped, ':staticman', StaticMan())\n f = wrapped\n else:\n f = original #already wrapped\n\n getattr(f, ':staticman')[name] = val\n return f\n return decorator\n\n", "Instead of creating a function having a static local variable, you can always create what is called a \"function object\" and give it a standard (non-static) member variable.\nSince you gave an example written C++, I will first explain what a \"function object\" is in C++. A \"function object\" is simply any class with an overloaded operator(). Instances of the class will behave like functions. 
For example, you can write int x = square(5); even if square is an object (with overloaded operator()) and not technically a \"function.\" You can give a function-object any of the features that you could give a class object.\n# C++ function object\nclass Foo_class {\n private:\n int counter; \n public:\n Foo_class() {\n counter = 0;\n }\n void operator() () { \n counter++;\n printf(\"counter is %d\\n\", counter);\n } \n };\n Foo_class foo;\n\nIn Python, we can also overload operator() except that the method is instead named __call__:\nHere is a class definition:\nclass Foo_class:\n def __init__(self): # __init__ is similar to a C++ class constructor\n self.counter = 0\n # self.counter is like a static member\n # variable of a function named \"foo\"\n def __call__(self): # overload operator()\n self.counter += 1\n print(\"counter is %d\" % self.counter);\nfoo = Foo_class() # call the constructor\n\nHere is an example of the class being used:\nfrom foo import foo\n\nfor i in range(0, 5):\n foo() # function call\n\nThe output printed to the console is:\ncounter is 1\ncounter is 2\ncounter is 3\ncounter is 4\ncounter is 5\n\nIf you want your function to take input arguments, you can add those to __call__ as well:\n# FILE: foo.py - - - - - - - - - - - - - - - - - - - - - - - - -\n\nclass Foo_class:\n def __init__(self):\n self.counter = 0\n def __call__(self, x, y, z): # overload operator()\n self.counter += 1\n print(\"counter is %d\" % self.counter);\n print(\"x, y, z, are %d, %d, %d\" % (x, y, z));\nfoo = Foo_class() # call the constructor\n\n# FILE: main.py - - - - - - - - - - - - - - - - - - - - - - - - - - - - \n\nfrom foo import foo\n\nfor i in range(0, 5):\n foo(7, 8, 9) # function call\n\n# Console Output - - - - - - - - - - - - - - - - - - - - - - - - - - \n\ncounter is 1\nx, y, z, are 7, 8, 9\ncounter is 2\nx, y, z, are 7, 8, 9\ncounter is 3\nx, y, z, are 7, 8, 9\ncounter is 4\nx, y, z, are 7, 8, 9\ncounter is 5\nx, y, z, are 7, 8, 9\n\n", "This answer builds on @claudiu 's answer.\nI found that my code was getting less clear when I always had \nto prepend the function name, whenever I intended to access a static variable.\nNamely, in my function code I would prefer to write:\nprint(statics.foo)\n\ninstead of\nprint(my_function_name.foo)\n\nSo, my solution is to:\n\nadd a statics attribute to the function\nin the function scope, add a local variable statics as an alias to my_function.statics\n\nfrom bunch import *\n\ndef static_vars(**kwargs):\n def decorate(func):\n statics = Bunch(**kwargs)\n setattr(func, \"statics\", statics)\n return func\n return decorate\n\n@static_vars(name = \"Martin\")\ndef my_function():\n statics = my_function.statics\n print(\"Hello, {0}\".format(statics.name))\n\n\nRemark\nMy method uses a class named Bunch, which is a dictionary that supports \nattribute-style access, a la JavaScript (see the original article about it, around 2000)\nIt can be installed via pip install bunch\nIt can also be hand-written like so:\nclass Bunch(dict):\n def __init__(self, **kw):\n dict.__init__(self,kw)\n self.__dict__ = self\n\n", "I personally prefer the following to decorators. To each their own.\ndef staticize(name, factory):\n \"\"\"Makes a pseudo-static variable in calling function.\n\n If name `name` exists in calling function, return it. 
\n Otherwise, saves return value of `factory()` in \n name `name` of calling function and return it.\n\n :param name: name to use to store static object \n in calling function\n :type name: String\n :param factory: used to initialize name `name` \n in calling function\n :type factory: function\n :rtype: `type(factory())`\n\n >>> def steveholt(z):\n ... a = staticize('a', list)\n ... a.append(z)\n >>> steveholt.a\n Traceback (most recent call last):\n ...\n AttributeError: 'function' object has no attribute 'a'\n >>> steveholt(1)\n >>> steveholt.a\n [1]\n >>> steveholt('a')\n >>> steveholt.a\n [1, 'a']\n >>> steveholt.a = []\n >>> steveholt.a\n []\n >>> steveholt('zzz')\n >>> steveholt.a\n ['zzz']\n\n \"\"\"\n from inspect import stack\n # get scope enclosing calling function\n calling_fn_scope = stack()[2][0]\n # get calling function\n calling_fn_name = stack()[1][3]\n calling_fn = calling_fn_scope.f_locals[calling_fn_name]\n if not hasattr(calling_fn, name):\n setattr(calling_fn, name, factory())\n return getattr(calling_fn, name)\n\n", "Building on Daniel's answer (additions):\nclass Foo(object): \n counter = 0 \n\n def __call__(self, inc_value=0):\n Foo.counter += inc_value\n return Foo.counter\n\nfoo = Foo()\n\ndef use_foo(x,y):\n if(x==5):\n foo(2)\n elif(y==7):\n foo(3)\n if(foo() == 10):\n print(\"yello\")\n\n\nuse_foo(5,1)\nuse_foo(5,1)\nuse_foo(1,7)\nuse_foo(1,7)\nuse_foo(1,1)\n\nThe reason why I wanted to add this part is that static variables are used not only for incrementing by some value, but also to check if the static var is equal to some value, as a real-life example.\nThe static variable is still protected and used only within the scope of the function use_foo()\nIn this example, a call to foo() functions exactly as (with respect to the corresponding c++ equivalent):\nstat_c +=9; // in c++\nfoo(9) #python equiv\n\nif(stat_c==10){ //do something} // c++\n\nif(foo() == 10): # python equiv\n #add code here # python equiv \n\nOutput :\nyello\nyello\n\nif class Foo is defined restrictively as a singleton class, that would be ideal. 
This would make it more pythonic.\n", "I write a simple function to use static variables:\ndef Static():\n ### get the func object by which Static() is called.\n from inspect import currentframe, getframeinfo\n caller = currentframe().f_back\n func_name = getframeinfo(caller)[2]\n # print(func_name)\n caller = caller.f_back\n func = caller.f_locals.get(\n func_name, caller.f_globals.get(\n func_name\n )\n )\n \n class StaticVars:\n def has(self, varName):\n return hasattr(self, varName)\n def declare(self, varName, value):\n if not self.has(varName):\n setattr(self, varName, value)\n\n if hasattr(func, \"staticVars\"):\n return func.staticVars\n else:\n # add an attribute to func\n func.staticVars = StaticVars()\n return func.staticVars\n\nHow to use:\ndef myfunc(arg):\n if Static().has('test1'):\n Static().test += 1\n else:\n Static().test = 1\n print(Static().test)\n\n # declare() only takes effect in the first time for each static variable.\n Static().declare('test2', 1)\n print(Static().test2)\n Static().test2 += 1\n\n", "Miguel Angelo's self-redefinition solution is even possible without any decorator:\ndef fun(increment=1):\n global fun\n counter = 0\n def fun(increment=1):\n nonlocal counter\n counter += increment\n print(counter)\n fun(increment)\n\nfun() #=> 1\nfun() #=> 2\nfun(10) #=> 12\n\nThe second line has to be adapted to get a limited scope:\ndef outerfun():\n def innerfun(increment=1):\n nonlocal innerfun\n counter = 0\n def innerfun(increment=1):\n nonlocal counter\n counter += increment\n print(counter)\n innerfun(increment)\n\n innerfun() #=> 1\n innerfun() #=> 2\n innerfun(10) #=> 12\n\nouterfun()\n\nThe plus of the decorator is that you don't have to pay extra attention to the scope of your construction.\n" ]
[ 822, 270, 251, 70, 60, 33, 32, 17, 16, 11, 10, 9, 7, 6, 4, 4, 4, 4, 4, 4, 3, 3, 2, 1, 0, 0, 0 ]
[ "Sure this is an old question but I think I might provide some update.\nIt seems that the performance argument is obsolete. \nThe same test suite appears to give similar results for siInt_try and isInt_re2.\nOf course results vary, but this is one session on my computer with python 3.4.4 on kernel 4.3.01 with Xeon W3550.\nI have run it several times and the results seem to be similar.\nI moved the global regex into function static, but the performance difference is negligible. \nisInt_try: 0.3690\nisInt_str: 0.3981\nisInt_re: 0.5870\nisInt_re2: 0.3632\n\nWith performance issue out of the way, it seems that try/catch would produce the most future- and cornercase- proof code so maybe just wrap it in function\n" ]
[ -2 ]
[ "python", "static" ]
stackoverflow_0000279561_python_static.txt
Q: Verifying types in a python function (including types from the typing module) I am trying to create a function in python that can be used in other functions to verify that arguments passed into the function are of the correct type(s) It works for standard python types, e.g. 'str', 'int', etc., but I want it to be able to check more complex types, such as a list containing strings and integers (typing.List[int, str]) or an iterable object (typing.Iterable) Below is an example of what it should be able to do def some_function(arg1: int, arg2: List[int, str]): # call the check_types function to check argument types check_types(arg1, int, argname="arg1", funcname="some_function") check_types(arg2, typing.List[int,str], argname="arg2", funcname="some_function") some_function(1, 3) # This should raise an error like: # TypeError: 'arg2' to 'some_function' must be type 'typing.List[int, str]', not 'int' A: You can't do this, at least in a general way, because while you can get the type annotations of the function's parameters, the objects that your function receives at runtime as arguments don't have type annotations attached to them. A function's types and their parameters can be inspected via its __annotations__ property: >>> def foo(x: list[int]) -> None: ... print(sum(x)) ... >>> foo.__annotations__ {'x': list[int], 'return': None} >>> foo.__annotations__['x'].__args__ (<class 'int'>,) but an actual list that is referenced by a variable that was annotated as list[int] has no such property that you can get the int type parameter from: >>> a: list[int] = [] >>> dir(a) ['__add__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] >>> type(a) <class 'list'> A static typechecking tool (e.g. mypy) can see the annotation at the point where a is declared, and it can use that to validate that foo(a) is a valid function call, but runtime logic inside foo can't do that because the annotation isn't bound to the object at runtime. You can of course hack around it with something like isinstance(a, list) and all(isinstance(i, int) for i in a) but it's hard to generalize this, and it doesn't actually provide a guarantee that a has the annotation you expect (an empty list at runtime is effectively a list[Any], even though in a static typechecking context the type of objects you can add to it is constrained by its type annotation). The simple solution is to use mypy (or similar) statically rather than trying to enforce type annotations at runtime -- or make a best effort at runtime and accept that you'll miss a lot of cases that static type checking would have caught.
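A minimal runnable sketch of such a checker built on typing.get_origin and typing.get_args (Python 3.8+). Note that typing.List[int, str] from the example is not actually accepted by the typing module; the sketch assumes the intent is a list whose elements may be int or str, i.e. typing.List[typing.Union[int, str]]:

import typing

def check_types(value, expected, argname="arg", funcname="func"):
    origin = typing.get_origin(expected)
    if origin is None:                  # plain class such as int or str
        ok = isinstance(value, expected)
    else:
        ok = isinstance(value, origin)  # e.g. list, or collections.abc.Iterable
        args = typing.get_args(expected)
        if ok and args and origin in (list, set, frozenset):
            elem = typing.get_args(args[0]) or args  # unwrap a Union if present
            ok = all(isinstance(v, elem) for v in value)
    if not ok:
        raise TypeError(f"'{argname}' to '{funcname}' must be type "
                        f"'{expected}', not '{type(value).__name__}'")

As the answer notes, this stays a best-effort check: an empty list passes for any element type, and deeply nested parameters are not inspected.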
Verifying types in a python function (including types from the typing module)
I am trying to create a function in python that can be used in other functions to verify that arguments passed into the function are of the correct type(s) It works for standard python types, e.g. 'str', 'int', etc., but I want it to be able to check more complex types, such as a list containing strings and integers (typing.List[int, str]) or an iterable object (typing.Iterable) Below is an example of what it should be able to do def some_function(arg1: int, arg2: List[int, str]): # call the check_types function to check argument types check_types(arg1, int, argname="arg1", funcname="some_function") check_types(arg2, typing.List[int,str], argname="arg2", funcname="some_function") some_function(1, 3) # This should raise an error like: # TypeError: 'arg2' to 'some_function' must be type 'typing.List[int, str]', not 'int'
[ "You can't do this, at least in a general way, because while you can get the type annotations of the function's parameters, the objects that your function receives at runtime as arguments don't have type annotations attached to them.\nA function's types and their parameters can be inspected via its __annotations__ property:\n>>> def foo(x: list[int]) -> None:\n... print(sum(x))\n...\n>>> foo.__annotations__\n{'x': list[int], 'return': None}\n>>> foo.__annotations__['x'].__args__\n(<class 'int'>,)\n\nbut an actual list that is referenced by a variable that was annotated as list[int] has no such property that you can get the int type parameter from:\n>>> a: list[int] = []\n>>> dir(a)\n['__add__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']\n>>> type(a)\n<class 'list'>\n\nA static typechecking tool (e.g. mypy) can see the annotation at the point where a is declared, and it can use that to validate that foo(a) is a valid function call, but runtime logic inside foo can't do that because the annotation isn't bound to the object at runtime.\nYou can of course hack around it with something like isinstance(a, list) and all(isinstance(i, int) for i in a) but it's hard to generalize this, and it doesn't actually provide a guarantee that a has the annotation you expect (an empty list at runtime is effectively a list[Any], even though in a static typechecking context the type of objects you can add to it is constrained by its type annotation).\nThe simple solution is to use mypy (or similar) statically rather than trying to enforce type annotations at runtime -- or make a best effort at runtime and accept that you'll miss a lot of cases that static type checking would have caught.\n" ]
[ 2 ]
[]
[]
[ "function", "python", "types", "typing" ]
stackoverflow_0074668288_function_python_types_typing.txt
Q: Django registration form gives error, when password can not pass validation I am trying to make user registration with automatic login. When passwords are different or do not pass validation, there is no message from the form. It throws an error: AttributeError at /accounts/register/ 'AnonymousUser' object has no attribute '_meta' I think the error comes from the login(request, self.object) line. I have tried to fix the problem with overriding the clean() method in the form, but it did not work. I am not sure what to do. my model: class AppUser(auth_models.AbstractBaseUser, auth_models.PermissionsMixin): email = models.EmailField( unique=True, blank=False, null=False, ) is_staff = models.BooleanField( default=False, blank=False, null=False, ) USERNAME_FIELD = 'email' objects = AppUserManager() my form: class SignUpForm(auth_forms.UserCreationForm): class Meta: model = UserModel fields = (UserModel.USERNAME_FIELD, 'password1', 'password2',) field_classes = { 'email': auth_forms.UsernameField, } my view: class SignUpView(views.CreateView): template_name = 'accounts/user/register-user-page.html' form_class = SignUpForm success_url = reverse_lazy('home') def post(self, request, *args, **kwargs): response = super().post(request, *args, **kwargs) login(request, self.object) return response A: You need to override form_valid() instead of post(). from django.http import HttpResponseRedirect def form_valid(self, form): user = form.save() #save the user login(request, user) return HttpResponseRedirect(self.get_success_url())
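A fuller sketch of the corrected view, since the snippet in the answer uses a bare request that is not defined inside a class-based view method and omits the login import (other imports as in the question):

from django.contrib.auth import login
from django.http import HttpResponseRedirect

class SignUpView(views.CreateView):
    template_name = 'accounts/user/register-user-page.html'
    form_class = SignUpForm
    success_url = reverse_lazy('home')

    def form_valid(self, form):
        user = form.save()         # runs only after the form has validated
        login(self.request, user)  # CBV methods reach the request via self.request
        return HttpResponseRedirect(self.get_success_url())

When validation fails, CreateView re-renders the template with the form errors instead of reaching form_valid, so login is never called while self.object is still None, which is what produced the AnonymousUser error.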
Django registration form gives error, when password can not pass validation
I am trying to make user registration with automatic login. When passwords are different or do not pass validation, there is no message from the form. It throws an error: AttributeError at /accounts/register/ 'AnonymousUser' object has no attribute '_meta' I think the error comes from the login(request, self.object) line. I have tried to fix the problem with overriding the clean() method in the form, but it did not work. I am not sure what to do. my model: class AppUser(auth_models.AbstractBaseUser, auth_models.PermissionsMixin): email = models.EmailField( unique=True, blank=False, null=False, ) is_staff = models.BooleanField( default=False, blank=False, null=False, ) USERNAME_FIELD = 'email' objects = AppUserManager() my form: class SignUpForm(auth_forms.UserCreationForm): class Meta: model = UserModel fields = (UserModel.USERNAME_FIELD, 'password1', 'password2',) field_classes = { 'email': auth_forms.UsernameField, } my view: class SignUpView(views.CreateView): template_name = 'accounts/user/register-user-page.html' form_class = SignUpForm success_url = reverse_lazy('home') def post(self, request, *args, **kwargs): response = super().post(request, *args, **kwargs) login(request, self.object) return response
[ "You need to override form_valid() instead of post().\nfrom django.http import HttpResponseRedirect\n\ndef form_valid(self, form):\n user = form.save() #save the user\n login(request, user)\n return HttpResponseRedirect(self.get_success_url())\n\n" ]
[ 0 ]
[]
[]
[ "authentication", "django", "python" ]
stackoverflow_0074667644_authentication_django_python.txt
Q: How do you count the number of negative items in a list using a recursive function? I have to make a recursive function that counts how many negative values there are in a given list, but I can't figure out what I am supposed to return for each conditional. def countNegatives(list): """Takes in a list of numbers and returns the number of negative numbers that are inside the list.""" count = 0 if len(list) == 0: return 0 else: if list[0] < 0: return count + 1 else: return countNegatives(list[1:]) print(countNegatives([0, 1, -1, 3, -5, 6])) # should output 2 but gives me 1 print(countNegatives([-1, -3, 50,-4, -5, 1])) #should output 4 but gives me 1 A: When list[0] < 0 your code ignores the rest of the list, yet there could be more negative values there to count. So in that case don't do: return count + 1 but: return 1 + countNegatives(list[1:]) A: The step you were missing is to add the count to the returned value of the recursive call, so that the returned values of all recursive calls get summed up at the end. Here's one way you could do this: def countNegatives(list): #if the list length is zero, we are done if len(list) == 0: return 0 # Get the count of this iteration count = 1 if list[0] < 0 else 0 # sum the count of this iteration with the count of all subsequent iterations return count + countNegatives(list[1:]) So, for your first example, the actual steps taken by your code would look like: return 0 + countNegatives([1, -1, 3, -5, 6]) return 0 + countNegatives([-1, 3, -5, 6]) return 1 + countNegatives([3, -5, 6]) return 0 + countNegatives([-5, 6]) return 1 + countNegatives([6]) return 0 + countNegatives([]) return 0 Expanding all the values gives: return 0 + 0 + 1 + 0 + 1 + 0 + 0 A: Recursion is a functional heritage and so using it with functional style yields the best results. That means avoiding things like mutation, variable reassignments, and other side effects. We can implement using straightforward mathematical induction - If the input list is empty, there is nothing to count. Return the empty result, 0 (inductive) the input list has at least one element. If the first element is negative, return 1 plus the sum of the recursive sub-problem (inductive) the input list has at least one element and the first element is positive. Return 0 plus the sum of the recursive sub-problem def countNegatives(t): if not t: return 0 # 1 elif t[0] < 0: return 1 + countNegatives(t[1:]) # 2 else: return 0 + countNegatives(t[1:]) # 3 Python 3.10 offers an elegant pattern matching solution - def countNegatives(t): match t: case []: #1 return 0 case [n, *next] if n < 0: #2 return 1 + countNegatives(next) case [_, *next]: #3 return 0 + countNegatives(next) A: All of the answers provided so far will fail with a stack overflow condition if given a large list, such as: print(countNegatives([i-5000 for i in range(10000)])) They all break a problem of size n into two subproblems of size 1 and size n-1, so the recursive stack growth is O(n). Python’s default stack size maxes out at around 1000 function calls, so this limits that approach to smaller lists. When possible, it's a good idea to try to split a recursive problem of size n into two subproblems that are half that size. Successive halving yields a stack growth that is O(log n), so it can handle vastly larger problems. For your problem note that the number of negatives in a list of size n > 2 is the number in the first half of the list plus the number in the second half of the list. If the list has one element, return 1 if it's negative. 
If it's non-negative or the list is empty, return 0. def countNegatives(lst): size = len(lst) # evaluate len() only once if size > 1: mid = size // 2 # find midpoint of lst return countNegatives(lst[:mid]) + countNegatives(lst[mid:]) if size == 1 and lst[0] < 0: return 1 return 0 print(countNegatives([0, 1, -1, 3, -5, 6])) # 2 print(countNegatives([-1, -3, 50, -4, -5, 1])) # 4 print(countNegatives([i - 5000 for i in range(10000)])) # 5000 With this approach you'll run out of memory to store the list long before you would get a stack overflow condition.
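One more variant worth noting, a sketch that recurses on an index instead of slices, since every lst[1:] in the earlier answers copies the remainder of the list (O(n^2) total copying):

def count_negatives(lst, i=0):
    # walk the list by index; O(n) time, no per-call copies
    if i == len(lst):
        return 0
    return (lst[i] < 0) + count_negatives(lst, i + 1)

The stack depth is still O(n), so the halving approach above remains the one that scales; and when recursion is not actually required, sum(x < 0 for x in lst) is the obvious baseline to test against.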
How do you count the number of negative items in a list using a recursive function?
I have to make a recursive function that counts how many negative values there are in a given list, but I can't figure out what I am supposed to return for each conditional. def countNegatives(list): """Takes in a list of numbers and returns the number of negative numbers that are inside the list.""" count = 0 if len(list) == 0: return 0 else: if list[0] < 0: return count + 1 else: return countNegatives(list[1:]) print(countNegatives([0, 1, -1, 3, -5, 6])) # should output 2 but gives me 1 print(countNegatives([-1, -3, 50,-4, -5, 1])) #should output 4 but gives me 1
[ "When list[0] < 0 your code ignores the rest of the list, yet there could be more negative values there to count.\nSo in that case don't do:\n return count + 1\n\nbut:\n return 1 + countNegatives(list[1:])\n\n", "The step you were missing is to add the count to the returned value of the recursive call, so that the returned values of all recursive calls get summed up at the end. Here's one way you could do this:\ndef countNegatives(list):\n #if the list length is zero, we are done\n if len(list) == 0:\n return 0\n\n # Get the count of this iteration\n count = 1 if list[0] < 0 else 0\n # sum the count of this iteration with the count of all subsequent iterations\n return count + countNegatives(list[1:])\n\nSo, for your first example, the actual steps taken by your code would look like:\nreturn 0 + countNegatives([1, -1, 3, -5, 6])\nreturn 0 + countNegatives([-1, 3, -5, 6])\nreturn 1 + countNegatives([3, -5, 6])\nreturn 0 + countNegatives([-5, 6])\nreturn 1 + countNegatives([6])\nreturn 0 + countNegatives([])\nreturn 0\n\nExpanding all the values gives:\nreturn 0 + 0 + 1 + 0 + 1 + 0 + 0 \n\n", "Recursion is a functional heritage and so using it with functional style yields the best results. That means avoiding things like mutation, variable reassignments, and other side effects. We can implement using straightforward mathematical induction -\n\nIf the input list is empty, there is nothing to count. Return the empty result, 0\n(inductive) the input list has at least one element. If the first element is negative, return 1 plus the sum of the recursive sub-problem\n(inductive) the input list has at least one element and the first element is positive. Return 0 plus the sum of the recursive sub-problem\n\ndef countNegatives(t):\n if not t: return 0 # 1\n elif t[0] < 0: return 1 + countNegatives(t[1:]) # 2\n else: return 0 + countNegatives(t[1:]) # 3\n\nPython 3.10 offers an elegant pattern matching solution -\ndef countNegatives(t):\n match t:\n case []: #1\n return 0\n case [n, *next] if n < 0: #2\n return 1 + countNegatives(next)\n case [_, *next]: #3\n return 0 + countNegatives(next)\n\n", "All of the answers provided so far will fail with a stack overflow condition if given a large list, such as:\nprint(countNegatives([i-5000 for i in range(10000)]))\n\nThey all break a problem of size n into two subproblems of size 1 and size n-1, so the recursive stack growth is O(n). Python’s default stack size maxes out at around 1000 function calls, so this limits that approach to smaller lists.\nWhen possible, it's a good idea to try to split a recursive problem of size n into two subproblems that are half that size. Successive halving yields a stack growth that is O(log n), so it can handle vastly larger problems.\nFor your problem note that the number of negatives in a list of size n > 2 is the number in the first half of the list plus the number in the second half of the list. If the list has one element, return 1 if it's negative. 
If it's non-negative or the list is empty, return 0.\ndef countNegatives(lst):\n size = len(lst) # evaluate len() only once\n if size > 1:\n mid = size // 2 # find midpoint of lst\n return countNegatives(lst[:mid]) + countNegatives(lst[mid:])\n if size == 1 and lst[0] < 0:\n return 1\n return 0\n\nprint(countNegatives([0, 1, -1, 3, -5, 6])) # 2\nprint(countNegatives([-1, -3, 50, -4, -5, 1])) # 4\nprint(countNegatives([i - 5000 for i in range(10000)])) # 5000\n\nWith this approach you'll run out of memory to store the list long before you would get a stack overflow condition.\n" ]
[ 0, 0, 0, 0 ]
[ "The problem with your code is that you are not keeping track of the running count of negative numbers in the recursive calls. Specifically, you are returning count + 1 when the first item of the list is negative, and discarding the rest of the list, instead of using a recursive call to count the number of negative items in the rest of the list.\nTo fix the problem, you can add the result of the recursive call to count in both cases, when the first item is negative and when it is not. This way, the running count of negative items will be accumulated through the recursive calls, and returned as the final result when the base case is reached.\nHere's a corrected version of your code:\ndef countNegatives(lst):\n \"\"\"Takes in a list of numbers and\n returns the number of negative numbers\n that are inside the list.\"\"\"\n count = 0\n if len(lst) == 0:\n return 0\n else:\n if lst[0] < 0:\n count += 1\n count += countNegatives(lst[1:])\n return count\n\nprint(countNegatives([0, 1, -1, 3, -5, 6])) # Output: 2\nprint(countNegatives([-1, -3, 50, -4, -5, 1])) # Output: 4\n\nNote that I renamed the parameter list to lst, because list is the name of a built-in Python data type, and it is a good practice to avoid using built-in names as variable names.\n", "You want to accumulate count, so pass it to the recursive function. The first caller starts at zero, so you have a good default.\ndef countNegatives(list, count=0):\n \"\"\"Takes in a list of numbers and\n returns the number of negative numbers\n that are inside the list.\"\"\"\n if len(list):\n count += list[0] < 0\n return countNegatives(list[1:], count)\n else:\n return count\n\nresult = countNegatives([1,99, -3, 6, -66, -7, 12, -1, -1])\nprint(result)\nassert result == 5\n \n\n" ]
[ -1, -1 ]
[ "python", "recursion" ]
stackoverflow_0074659510_python_recursion.txt
Q: How to redirect an authenticated (using Django) user to VueJS frontend? I have a simple VueJS setup that involves some use of a router. This runs on one local server. I also have a django backend project that runs on a second local server. I would like for a django view to take my user to my vue frontend. What approach could I take to reach this? I have not yet made an attempt but if there is a specific part of my code I would need to show to get support then do let me know. I have consider the below SO post however it does not address my issue relating to what a view function would need to consist of. Setup django social auth with vuejs frontend In a webpack project people make use of a vue.config.js file like below but I am using vite of course and so I do not know how to make a connection. const BundleTracker = require('webpack-bundle-tracker'); module.exports = { publicPath: "http://0.0.0.0:8080", outputDir: "./dist/", chainWebpack: config => { config.optimization.splitChunks(false) config.plugin('BundleTracker').use(BundleTracker, [ { filename: './webpack-stats.json' } ]) config.resolve.alias.set('__STATIC__', 'static') config.devServer .public('http://0.0.0.0:8080') .host('0.0.0.0') .port(8080) .hotOnly(true) .watchOptions({poll: 1000}) .https(false) .headers({'Access-Control-Allow-Origin': ['\*']}) } }; A: from django.shortcuts import redirect def login(request): # Verify the user's authentication data # and authenticate the user if the data is valid # ... # Redirect the user to the VueJS frontend return redirect('https://vuejs.app') In this example, when the user is successfully authenticated in your Django application, the redirect function is called to redirect the user to the specified URL (in this case, https://vuejs.app). This will allow the user to access the VueJS frontend without having to log in again.
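A minimal sketch of such a view, with the frontend address kept in settings so development and production can differ. FRONTEND_URL is an assumed setting name, and 5173 is the Vite dev server's default port:

# settings.py
FRONTEND_URL = "http://localhost:5173"

# views.py
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect

@login_required
def to_frontend(request):
    return redirect(settings.FRONTEND_URL)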
How to redirect an authenticated (using Django) user to VueJS frontend?
I have a simple VueJS setup that involves some use of a router. This runs on one local server. I also have a django backend project that runs on a second local server. I would like for a django view to take my user to my vue frontend. What approach could I take to reach this? I have not yet made an attempt but if there is a specific part of my code I would need to show to get support then do let me know. I have consider the below SO post however it does not address my issue relating to what a view function would need to consist of. Setup django social auth with vuejs frontend In a webpack project people make use of a vue.config.js file like below but I am using vite of course and so I do not know how to make a connection. const BundleTracker = require('webpack-bundle-tracker'); module.exports = { publicPath: "http://0.0.0.0:8080", outputDir: "./dist/", chainWebpack: config => { config.optimization.splitChunks(false) config.plugin('BundleTracker').use(BundleTracker, [ { filename: './webpack-stats.json' } ]) config.resolve.alias.set('__STATIC__', 'static') config.devServer .public('http://0.0.0.0:8080') .host('0.0.0.0') .port(8080) .hotOnly(true) .watchOptions({poll: 1000}) .https(false) .headers({'Access-Control-Allow-Origin': ['\*']}) } };
[ "\nfrom django.shortcuts import redirect\n\ndef login(request):\n # Verify the user's authentication data\n # and authenticate the user if the data is valid\n # ...\n\n # Redirect the user to the VueJS frontend\n return redirect('https://vuejs.app')\n\n\nIn this example, when the user is successfully authenticated in your Django application, the redirect function is called to redirect the user to the specified URL (in this case, https://vuejs.app). This will allow the user to access the VueJS frontend without having to log in again.\n" ]
[ 0 ]
[]
[]
[ "django", "python", "vite", "vue.js", "webpack" ]
stackoverflow_0074668331_django_python_vite_vue.js_webpack.txt
Q: Inserting cli options into virtualenv.cli_run within a python file [screenshot: problem code] I want to write a Python script that creates a new virtual environment with the following virtualenv CLI options: --app-data APP_DATA (a folder APP_DATA for the cache) --seeder {app-data,pip} If I give those two as strings in a list (see picture) I get: TypeError: options must be of type VirtualEnvOptions
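The error is hard to pin down without the screenshot, but one plausible cause is passing the list through cli_run's options parameter rather than as its first argument; another common mistake is fusing a flag and its value into one string. A sketch of the documented call shape, where every flag and value is a separate argv-style list item:

from virtualenv import cli_run

cli_run([
    "venv",                    # destination directory for the new environment
    "--app-data", "APP_DATA",  # cache folder
    "--seeder", "app-data",    # one of: app-data, pip
])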
Inserting cli options into virtualenv.cli_run within a python file
[screenshot: problem code] I want to write a Python script that creates a new virtual environment with the following virtualenv CLI options: --app-data APP_DATA (a folder APP_DATA for the cache) --seeder {app-data,pip} If I give those two as strings in a list (see picture) I get: TypeError: options must be of type VirtualEnvOptions
[ "When you call cli_run as part of virtualenv, you don't need to include the first argument, in this case \"venv\".\nthis should work:\nfrom virtualenv import cli_run\ncli_run([\"--app-data APP_DATA\", \"--seeder {app-data,pip}\"]);\n\n", "According to virtualenv's documentation section \"Programmatic API\", this seems like the following should work:\nfrom virtualenv import cli_run\n\ncli_run(['path/to/venv', '--app-data', 'path/to/app_data', '--seeder', 'app-data'])\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "virtualenv" ]
stackoverflow_0074662331_python_virtualenv.txt
Q: Get Text from SVG using Python Selenium My first time trying to extract data from an SVG element, following is the SVG element and the code I have tried to put up by reading stuff on the internet, I have absolutely no clue how wrong I am and why so. <svg class="rv-xy-plot__inner" width="282" height="348"> <g class="rv-xy-plot__series rv-xy-plot__series--bar " transform="rrr"> <rect y="rrr" height="rrr" x="0" width="rrr" style="rrr;"></rect> <rect y="rrr" height="rrr" x="0" width="rrr" style="rrr;"></rect> </g> <g class="rv-xy-plot__series rv-xy-plot__series--bar " transform="rrr"> <rect y="rrr" height="rrr" x="rrr" width="rrr" style="rrr;"></rect> <rect y="rrr" height="rrr" x="rrr" width="rrr" style="rrr;"></rect> </g> <g class="rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary" transform="rrr"> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">Category 1</text> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">Category 2</text> </g> <g class="rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary" transform="rrr"> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">44.83%</text> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">0.00%</text> </g> </svg> I am trying to get the Categories and corresponding Percentages from the last 2 blocks of the SVG, I've replaced all the values with the string 'rrr' just to make it more readable here. I'm trying, driver.find_element(By.XPATH,"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']//*[name()='text']").get_attribute('innerText') Like I said, I don't know what I'm doing here, what I've so far understood is svg elements need to be represented as a 'custom ?' XPATH which involves stacking all elements into an XPATH which is relative to each other, however I have no clue on how to extract the expected output like below. Category 1 - 44.83% Category 2 - 0.00% Any help is appreciated. Thanks. A: You can try something like : for sv in driver.find_elements(By.XPATH,"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']"): txt= sv.find_emlement(By.XPATH, './/text').text print(txt) #OR for sv in driver.find_elements(By.XPATH,"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']//text"): txt= sv.text print(txt) A: sv = driver.find_elements(By.XPATH,"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']//*[name()='text']") This gives me a list I can iterate through to get the values. Thanks to the idea from @Fazlul, the modification I've made is //*[name()='text'] at the end.
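To get the requested "Category - percent" pairing directly, one sketch is to take all label texts in document order and zip the two halves; this assumes the category labels come before the percentage labels, as in the posted markup:

labels = [
    el.text
    for el in driver.find_elements(
        By.XPATH,
        "//*[local-name()='svg' and @class='rv-xy-plot__inner']"
        "//*[name()='text' and @class='rv-xy-plot__series--label-text']",
    )
]
half = len(labels) // 2
for category, pct in zip(labels[:half], labels[half:]):
    print(f"{category} - {pct}")  # e.g. Category 1 - 44.83%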
Get Text from SVG using Python Selenium
My first time trying to extract data from an SVG element, following is the SVG element and the code I have tried to put up by reading stuff on the internet, I have absolutely no clue how wrong I am and why so. <svg class="rv-xy-plot__inner" width="282" height="348"> <g class="rv-xy-plot__series rv-xy-plot__series--bar " transform="rrr"> <rect y="rrr" height="rrr" x="0" width="rrr" style="rrr;"></rect> <rect y="rrr" height="rrr" x="0" width="rrr" style="rrr;"></rect> </g> <g class="rv-xy-plot__series rv-xy-plot__series--bar " transform="rrr"> <rect y="rrr" height="rrr" x="rrr" width="rrr" style="rrr;"></rect> <rect y="rrr" height="rrr" x="rrr" width="rrr" style="rrr;"></rect> </g> <g class="rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary" transform="rrr"> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">Category 1</text> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">Category 2</text> </g> <g class="rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary" transform="rrr"> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">44.83%</text> <text dominant-baseline="rrr" class="rv-xy-plot__series--label-text">0.00%</text> </g> </svg> I am trying to get the Categories and corresponding Percentages from the last 2 blocks of the SVG, I've replaced all the values with the string 'rrr' just to make it more readable here. I'm trying, driver.find_element(By.XPATH,"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']//*[name()='text']").get_attribute('innerText') Like I said, I don't know what I'm doing here, what I've so far understood is svg elements need to be represented as a 'custom ?' XPATH which involves stacking all elements into an XPATH which is relative to each other, however I have no clue on how to extract the expected output like below. Category 1 - 44.83% Category 2 - 0.00% Any help is appreciated. Thanks.
[ "You can try something like :\nfor sv in driver.find_elements(By.XPATH,\"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']\"):\n txt= sv.find_emlement(By.XPATH, './/text').text\n print(txt)\n\n#OR\n for sv in driver.find_elements(By.XPATH,\"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']//text\"):\n txt= sv.text\n print(txt)\n \n \n\n", "sv = driver.find_elements(By.XPATH,\"//*[local-name()='svg' and @class='rv-xy-plot__inner']//*[local-name()='g' and @class='rv-xy-plot__series rv-xy-plot__series--label typography-body-medium-xs text-primary']//*[name()='text']\")\n\nThis gives me a list I can iterate through to get the values.\nThanks to the idea from @Fazlul, the modification I've made is //*[name()='text'] at the end.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "selenium", "svg", "web_scraping" ]
stackoverflow_0074663657_python_selenium_svg_web_scraping.txt
Q: Why system path behaviour in pycharm seems to be different that using directly the conda env? this is actually my first question in stack overflow :D. As background: I started learning python by myself almost 1 year ago in parallel of my work (Industrial Engineer), so feel free to point any mistakes. Any feedback will be very appreciated (including the format of this question). I was trying to a have a project structure with multiple folders where to organize the scripts clearly. Eveything was going peachy until I wanted to schedulesome scripts using bat files. When running my scripts (with absolute imports) in Pycharm everything works without problems, but when I try to run same scripts via bat files the imports fails! For this question I created a new (simplified) project and created a new conda enviroment (both called test) with a example of the structure of folders where I can reproduce this error. Inside those folders I have the a script (main.py) calling a function from another script (library.py) main.py : from A.B.C import library library.Function_Alpha('hello world ') library.py: def Function_Alpha(txt): print(txt) main.bat "C:\Localdata\ANACONDA\envs\test\python.exe" "C:/Users/bpereira/PycharmProjects/test/X/main.py" pause When I run the script using pycharm everything goes as expected: C:\Localdata\ANACONDA\envs\test\python.exe C:/Users/bpereira/PycharmProjects/test/X/main.py hello world Process finished with exit code 0 But when I try running the bat file: cmd.exe /c main.bat C:\Users\bpereira\PycharmProjects\test\X>"C:\Localdata\ANACONDA\envs\test\python.exe" "C:/Users/bpereira/PycharmProjects/test/X/main.py" Traceback (most recent call last): File "C:/Users/bpereira/PycharmProjects/test/X/main.py", line 1, in <module> from A.B.C import library ModuleNotFoundError: No module named 'A' C:\Users\bpereira\PycharmProjects\test\X>pause Press any key to continue . . . Is Pycharm doing something with the system paths that I am not aware? How I can emulate the behaviour of pycharm using the bat files? I tried adding the system path manually in the script and it works: *main.py: import sys sys.path.append(r'C:/Users/bpereira/PycharmProjects/test') from A.B.C import library library.Function_Alpha('hello world ') main.bat execution: cmd.exe /c main.bat C:\Users\bpereira\PycharmProjects\test\X>"C:\Localdata\ANACONDA\envs\test\python.exe" "C:/Users/bpereira/PycharmProjects/test/X/main.py" hello world C:\Users\bpereira\PycharmProjects\test\X>pause Press any key to continue . . . But I am actually trying to understand how pycharm does this automatically and if I can reproduce that without having to append the sys.path on each script. In the actual project when I do this contaiment (sys.path.append) the scripts are able to run but I face other errors like SLL module missing while calling the request function. Again this works flawlessly within pycharm but from the bat files the request module seems to behave differently, which I think is realted to the system paths. (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.") For info: I am running this on the company laptop where I do not have admin rights and I am not able to edit the system paths. A: Solved. 
After some more investigation it was clear that I was facing 2 problems: Not declaring the env path in the system Not activating the virtual enviroment properly (hence the SSL error) Since I do not have the admin rights of the laptop (corporate one) I solved the path issue by defining the the project path in the .bat file, adding the path temporally each time the env is activated. The one-liner I wrote in the initial .bat is not activating the enviroment. The correct way is to call the 'activate.bat' in the conda folder. Hereby the solution in the .bat file: @echo off rem Define path to conda installation and env name. set CONDAPATH= #CondaPath set ENVNAME= #EnvName rem Activate the env if %ENVNAME%==base (set ENVPATH=%CONDAPATH%) else (set ENVPATH=%CONDAPATH%\envs\%ENVNAME%) call %CONDAPATH%\Scripts\activate.bat %ENVPATH% set PYTHONPATH= #ProjectPath rem Run a python script in that environment python #ScriptPath_1 python #ScriptPath_2 python #ScriptPath_3 rem Deactivate the environment call conda deactivate Hope this helps someone trying to automate python scripts using the windows task scheduler with .bat files
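Background on the difference: PyCharm run configurations add the project's content and source roots to PYTHONPATH by default (two checkboxes in the run configuration), which is why from A.B.C import library resolves there and nowhere else. A quick sketch to compare the two environments:

import os
import sys

print("PYTHONPATH =", os.environ.get("PYTHONPATH"))
for p in sys.path:
    print(p)

Run it once from PyCharm and once from the .bat file; the project root should appear only in the former, which is exactly what the set PYTHONPATH= line in the accepted fix compensates for.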
Why system path behaviour in pycharm seems to be different that using directly the conda env?
this is actually my first question in stack overflow :D. As background: I started learning python by myself almost 1 year ago in parallel of my work (Industrial Engineer), so feel free to point any mistakes. Any feedback will be very appreciated (including the format of this question). I was trying to a have a project structure with multiple folders where to organize the scripts clearly. Eveything was going peachy until I wanted to schedulesome scripts using bat files. When running my scripts (with absolute imports) in Pycharm everything works without problems, but when I try to run same scripts via bat files the imports fails! For this question I created a new (simplified) project and created a new conda enviroment (both called test) with a example of the structure of folders where I can reproduce this error. Inside those folders I have the a script (main.py) calling a function from another script (library.py) main.py : from A.B.C import library library.Function_Alpha('hello world ') library.py: def Function_Alpha(txt): print(txt) main.bat "C:\Localdata\ANACONDA\envs\test\python.exe" "C:/Users/bpereira/PycharmProjects/test/X/main.py" pause When I run the script using pycharm everything goes as expected: C:\Localdata\ANACONDA\envs\test\python.exe C:/Users/bpereira/PycharmProjects/test/X/main.py hello world Process finished with exit code 0 But when I try running the bat file: cmd.exe /c main.bat C:\Users\bpereira\PycharmProjects\test\X>"C:\Localdata\ANACONDA\envs\test\python.exe" "C:/Users/bpereira/PycharmProjects/test/X/main.py" Traceback (most recent call last): File "C:/Users/bpereira/PycharmProjects/test/X/main.py", line 1, in <module> from A.B.C import library ModuleNotFoundError: No module named 'A' C:\Users\bpereira\PycharmProjects\test\X>pause Press any key to continue . . . Is Pycharm doing something with the system paths that I am not aware? How I can emulate the behaviour of pycharm using the bat files? I tried adding the system path manually in the script and it works: *main.py: import sys sys.path.append(r'C:/Users/bpereira/PycharmProjects/test') from A.B.C import library library.Function_Alpha('hello world ') main.bat execution: cmd.exe /c main.bat C:\Users\bpereira\PycharmProjects\test\X>"C:\Localdata\ANACONDA\envs\test\python.exe" "C:/Users/bpereira/PycharmProjects/test/X/main.py" hello world C:\Users\bpereira\PycharmProjects\test\X>pause Press any key to continue . . . But I am actually trying to understand how pycharm does this automatically and if I can reproduce that without having to append the sys.path on each script. In the actual project when I do this contaiment (sys.path.append) the scripts are able to run but I face other errors like SLL module missing while calling the request function. Again this works flawlessly within pycharm but from the bat files the request module seems to behave differently, which I think is realted to the system paths. (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.") For info: I am running this on the company laptop where I do not have admin rights and I am not able to edit the system paths.
[ "Solved.\nAfter some more investigation it was clear that I was facing 2 problems:\n\nNot declaring the env path in the system\nNot activating the virtual enviroment properly (hence the SSL error)\n\nSince I do not have the admin rights of the laptop (corporate one) I solved the path issue by defining the the project path in the .bat file, adding the path temporally each time the env is activated.\nThe one-liner I wrote in the initial .bat is not activating the enviroment. The correct way is to call the 'activate.bat' in the conda folder.\nHereby the solution in the .bat file:\n@echo off\nrem Define path to conda installation and env name.\nset CONDAPATH= #CondaPath\nset ENVNAME= #EnvName\n\nrem Activate the env\nif %ENVNAME%==base (set ENVPATH=%CONDAPATH%) else (set ENVPATH=%CONDAPATH%\\envs\\%ENVNAME%)\ncall %CONDAPATH%\\Scripts\\activate.bat %ENVPATH%\n\nset PYTHONPATH= #ProjectPath\n\nrem Run a python script in that environment\npython #ScriptPath_1\npython #ScriptPath_2\npython #ScriptPath_3\n\nrem Deactivate the environment\ncall conda deactivate\n\nHope this helps someone trying to automate python scripts using the windows task scheduler with .bat files\n" ]
[ 0 ]
[]
[]
[ "import", "path", "pycharm", "python" ]
stackoverflow_0074352567_import_path_pycharm_python.txt
Q: Cutting an array into consistent pieces of any size, with recursion The problem is to, given an array, write a generator function that will yield all combinations of cutting the array into consistent pieces(arrays of elements that are consecutive in the given array) of any size and which together make up the whole given array. The elements in any one of the combinations don't have to be of the same size. For example given an array [1,2,3,4] I want to yield: [[1],[2],[3],[4]] [[1,2],[3],[4]] [[1],[2,3],[4]] [[1],[2],[3,4]] [[1,2],[3,4]] [[1],[2,3,4]] [[1,2,3],[4]] [[1,2,3,4]] def powerset_but_consistent(T): if len(T)==0: return for el in T: yield el for k in range(len(el)): yield ([el[:k],el[k:]]) #for l in range(k, len(el)): # yield ([el[:k],el[k:l],el[l:]]) powerset_but_consistent([T[:k],T[k:]]) T = [[1,2,3,4,5]] subsets = [x for x in powerset_but_consistent(T)] for i in subsets: print(i) And this prints only those combinations that are made of two arrays. If I uncomment what I commented then it will also print combinations consisting of 3 arrays. If I add another inner for loop, it will print those combinations consisting of 4 arrays and so on... How can I use recursion instead of infinite inner for loops? Is it time for using something like: for x in powerset_but_consistent(T[some_slicing] or something else) ? I find it difficult to understand this construction. Can anyone help? A: One of the algorithm commonly used for these type of questions (permutations and combinations) is using depth-first-search (DFS). Here's a link to a more similar but harder leetcode problem on palindrome partitioning that uses backtracking and DFS. My solution is based off of that leetcode post. Algorithm If my explanation is not enough go through the leetcode link provided above it may make sense. The general idea is iteration through the list and get all the combinations from current element by resursively traversing through the remaining elements after the current elements. Pseudocode function recursive (list) recurise condition yield child for element in remaining-elements: // Get the combinations from all elements start from 'element' partitions = recursive (...) // join the list of elements already explored with the provided combinations for every child from partitions: yield combination_before + child The major concept through this is using Depth-First-Search, and maybe figuring out the recursive condition as that really took me a while when. You can also optimize the code by storing the results of the deep recursive operations in a dictionary and access them when you revisit over in the next iterations. I'm also pretty sure there is some optimal dynamic programming solution for this somewhere out there. Goodluck, hope this helped Edit: My bad, I realised i had the actual solution before the edit, had no idea that may slighty conflict with individual community guidelines.
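A compact recursive generator for this, as a sketch: peel off a first piece of every possible length, then recurse on the remainder:

def partitions(lst):
    # every way to cut lst into consecutive, non-empty pieces
    if not lst:
        yield []
        return
    for i in range(1, len(lst) + 1):
        head = lst[:i]
        for rest in partitions(lst[i:]):
            yield [head] + rest

for p in partitions([1, 2, 3, 4]):
    print(p)  # 2**(n-1) == 8 partitions for n == 4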
Cutting an array into consistent pieces of any size, with recursion
The problem is to, given an array, write a generator function that will yield all combinations of cutting the array into consistent pieces(arrays of elements that are consecutive in the given array) of any size and which together make up the whole given array. The elements in any one of the combinations don't have to be of the same size. For example given an array [1,2,3,4] I want to yield: [[1],[2],[3],[4]] [[1,2],[3],[4]] [[1],[2,3],[4]] [[1],[2],[3,4]] [[1,2],[3,4]] [[1],[2,3,4]] [[1,2,3],[4]] [[1,2,3,4]] def powerset_but_consistent(T): if len(T)==0: return for el in T: yield el for k in range(len(el)): yield ([el[:k],el[k:]]) #for l in range(k, len(el)): # yield ([el[:k],el[k:l],el[l:]]) powerset_but_consistent([T[:k],T[k:]]) T = [[1,2,3,4,5]] subsets = [x for x in powerset_but_consistent(T)] for i in subsets: print(i) And this prints only those combinations that are made of two arrays. If I uncomment what I commented then it will also print combinations consisting of 3 arrays. If I add another inner for loop, it will print those combinations consisting of 4 arrays and so on... How can I use recursion instead of infinite inner for loops? Is it time for using something like: for x in powerset_but_consistent(T[some_slicing] or something else) ? I find it difficult to understand this construction. Can anyone help?
[ "One of the algorithm commonly used for these type of questions (permutations and combinations) is using depth-first-search (DFS). Here's a link to a more similar but harder leetcode problem on palindrome partitioning that uses backtracking and DFS. My solution is based off of that leetcode post.\nAlgorithm\nIf my explanation is not enough go through the leetcode link provided above it may make sense.\nThe general idea is iteration through the list and get all the combinations from current element by resursively traversing through the remaining elements after the current elements.\nPseudocode\nfunction recursive (list)\n recurise condition\n yield child\n \n for element in remaining-elements:\n // Get the combinations from all elements start from 'element'\n partitions = recursive (...)\n\n // join the list of elements already explored with the provided combinations\n for every child from partitions:\n yield combination_before + child\n\n\nThe major concept through this is using Depth-First-Search, and maybe figuring out the recursive condition as that really took me a while when.\nYou can also optimize the code by storing the results of the deep recursive operations in a dictionary and access them when you revisit over in the next iterations. I'm also pretty sure there is some optimal dynamic programming solution for this somewhere out there. Goodluck, hope this helped\nEdit: My bad, I realised i had the actual solution before the edit, had no idea that may slighty conflict with individual community guidelines.\n" ]
[ 0 ]
[]
[]
[ "generator", "multidimensional_array", "python", "recursion" ]
stackoverflow_0074667555_generator_multidimensional_array_python_recursion.txt
Q: Trying to run Jupyter-Dash and getting "an integer is required" error I am trying to get the first Dash example from https://dash.plotly.com/basic-callbacks running in a Jupyter Notebook with Jupyter Dash and the app runs fine as a standalone application, but errors out when implemented in the notebook and I can't figure this out. I get TypeError: an integer is required (got type NoneType) when I try to run the notebook. from jupyter_dash import JupyterDash import dash_html_components as html import dash_core_components as dcc import dash from dash.dependencies import Input, Output app = JupyterDash(__name__) app.layout = html.Div([ html.H6("Change the value in the text box to see callbacks in action!"), html.Div([ "Input: ", dcc.Input(id='my-input', value='initial value', type='text') ]), html.Br(), html.Div(id='my-output'), ]) @app.callback( Output(component_id='my-output', component_property='children'), [Input(component_id='my-input', component_property='value')] ) def update_output_div(input_value): return f'Output: {input_value}' if __name__ == '__main__': app.run_server(mode="external") The main difference is with the imports and app.run_server() In pycharm I just had from dash import Dash, html, dcc, Input, Output and app.run_server(debug=True). From what I've researched there's some issues with versioning and the updates I made should have fixed the issues and I can't seem to find anything about the error I am getting. EDIT: Traceback from error TypeError Traceback (most recent call last) ~\AppData\Local\Temp\1/ipykernel_31316/4067692673.py in <module> 28 29 if __name__ == '__main__': ---> 30 app.run_server(mode="external") ~\AppData\Local\Programs\Python\Python39\lib\site-packages\jupyter_dash\jupyter_app.py in run_server(self, mode, width, height, inline_exceptions, **kwargs) 220 self._terminate_server_for_port(host, port) 221 --> 222 # Configure pathname prefix 223 requests_pathname_prefix = self.config.get('requests_pathname_prefix', None) 224 if self._input_pathname_prefix is None: ~\AppData\Local\Programs\Python\Python39\lib\site-packages\jupyter_dash\_stoppable_thread.py in kill(self) TypeError: an integer is required (got type NoneType) A: I had the same problem, and I found your question while trying to find a solution. Try adding host as string and port as integer types inside run_server like this: app.run_server(mode='external', host='your_host', port=your_port) My host is 127.0.0.1, and the port is 8050. Hope it helps
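If upgrading is an option, Dash 2.11 and later folds the JupyterDash functionality into dash itself, so the separate jupyter_dash package (where the failing _stoppable_thread lives) is no longer involved. A sketch, worth verifying against the installed version:

from dash import Dash

app = Dash(__name__)
# ... layout and callbacks as before ...
app.run(jupyter_mode="external", host="127.0.0.1", port=8050)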
Trying to run Jupyter-Dash and getting "an integer is required" error
I am trying to get the first Dash example from https://dash.plotly.com/basic-callbacks running in a Jupyter Notebook with Jupyter Dash and the app runs fine as a standalone application, but errors out when implemented in the notebook and I can't figure this out. I get TypeError: an integer is required (got type NoneType) when I try to run the notebook. from jupyter_dash import JupyterDash import dash_html_components as html import dash_core_components as dcc import dash from dash.dependencies import Input, Output app = JupyterDash(__name__) app.layout = html.Div([ html.H6("Change the value in the text box to see callbacks in action!"), html.Div([ "Input: ", dcc.Input(id='my-input', value='initial value', type='text') ]), html.Br(), html.Div(id='my-output'), ]) @app.callback( Output(component_id='my-output', component_property='children'), [Input(component_id='my-input', component_property='value')] ) def update_output_div(input_value): return f'Output: {input_value}' if __name__ == '__main__': app.run_server(mode="external") The main difference is with the imports and app.run_server() In pycharm I just had from dash import Dash, html, dcc, Input, Output and app.run_server(debug=True). From what I've researched there's some issues with versioning and the updates I made should have fixed the issues and I can't seem to find anything about the error I am getting. EDIT: Traceback from error TypeError Traceback (most recent call last) ~\AppData\Local\Temp\1/ipykernel_31316/4067692673.py in <module> 28 29 if __name__ == '__main__': ---> 30 app.run_server(mode="external") ~\AppData\Local\Programs\Python\Python39\lib\site-packages\jupyter_dash\jupyter_app.py in run_server(self, mode, width, height, inline_exceptions, **kwargs) 220 self._terminate_server_for_port(host, port) 221 --> 222 # Configure pathname prefix 223 requests_pathname_prefix = self.config.get('requests_pathname_prefix', None) 224 if self._input_pathname_prefix is None: ~\AppData\Local\Programs\Python\Python39\lib\site-packages\jupyter_dash\_stoppable_thread.py in kill(self) TypeError: an integer is required (got type NoneType)
[ "I had the same problem, and I found your question while trying to find a solution.\nTry adding host as string and port as integer types inside run_server like this:\napp.run_server(mode='external', host='your_host', port=your_port)\nMy host is 127.0.0.1, and the port is 8050.\nHope it helps\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "jupyterdash", "plotly_dash", "python" ]
stackoverflow_0073421435_jupyter_notebook_jupyterdash_plotly_dash_python.txt
Q: Rolling difference in group and divivded by group sum in Pandas I am wondering if there's an easier/faster way (ideally in pipe method so it looks nicer!) I can work out the rolling difference divided by previous group sum. In the result outout, pc column is the column I am after. import pandas as pd df = pd.DataFrame( { "Date": ["2020-01-01", "2020-01-01", "2020-01-01", "2021-01-01", "2021-01-01", "2021-01-01", "2022-01-01", "2022-01-01", "2022-01-01"], "City": ["London", "New York", "Tokyo", "London", "New York", "Tokyo", "London", "New York", "Tokyo"], "Pop": [90, 70, 60, 85, 60, 45, 70, 40, 32], } ) Date City Pop 0 2020-01-01 London 90 1 2020-01-01 New York 70 2 2020-01-01 Tokyo 60 3 2021-01-01 London 85 4 2021-01-01 New York 60 5 2021-01-01 Tokyo 45 6 2022-01-01 London 70 7 2022-01-01 New York 40 8 2022-01-01 Tokyo 32 df['pop_diff'] = df.groupby(['City'])['Pop'].diff() df['total'] = df.groupby('Date').Pop.transform('sum') df['total_shift'] = df.groupby('City')['total'].shift() df['pc'] = df['pop_diff'] / df['total_shift'] Date City Pop pop_diff total total_shift pc 0 2020-01-01 London 90 NaN 220 NaN NaN 1 2020-01-01 New York 70 NaN 220 NaN NaN 2 2020-01-01 Tokyo 60 NaN 220 NaN NaN 3 2021-01-01 London 85 -5.0 190 220.0 -0.022727 4 2021-01-01 New York 60 -10.0 190 220.0 -0.045455 5 2021-01-01 Tokyo 45 -15.0 190 220.0 -0.068182 6 2022-01-01 London 70 -15.0 142 190.0 -0.078947 7 2022-01-01 New York 40 -20.0 142 190.0 -0.105263 8 2022-01-01 Tokyo 32 -13.0 142 190.0 -0.068421 A: Here is one way to do it with Pandas assign and pipe: df = ( df.assign(total=df.groupby("Date")["Pop"].transform("sum")) .pipe( lambda df_: df_.assign( pc=df_.groupby(["City"]) .agg({"Pop": "diff", "total": "shift"}) .pipe(lambda x: x["Pop"] / x["total"]) ) ) .drop(columns="total") ) Then: print(df) # Output Date City Pop pc 0 2020-01-01 London 90 NaN 1 2020-01-01 New York 70 NaN 2 2020-01-01 Tokyo 60 NaN 3 2021-01-01 London 85 -0.022727 4 2021-01-01 New York 60 -0.045455 5 2021-01-01 Tokyo 45 -0.068182 6 2022-01-01 London 70 -0.078947 7 2022-01-01 New York 40 -0.105263 8 2022-01-01 Tokyo 32 -0.068421
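The same numbers can also be had without the helper columns; a sketch using one intermediate Series and grouping it by the City column:

total = df.groupby("Date")["Pop"].transform("sum")
df["pc"] = (
    df.groupby("City")["Pop"].diff()
    / total.groupby(df["City"]).shift()
)

This reproduces pop_diff / total_shift row for row, since total.groupby(df["City"]).shift() is each city's previous-period total.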
Rolling difference in group and divided by group sum in Pandas
I am wondering if there's an easier/faster way (ideally in pipe method so it looks nicer!) I can work out the rolling difference divided by previous group sum. In the result outout, pc column is the column I am after. import pandas as pd df = pd.DataFrame( { "Date": ["2020-01-01", "2020-01-01", "2020-01-01", "2021-01-01", "2021-01-01", "2021-01-01", "2022-01-01", "2022-01-01", "2022-01-01"], "City": ["London", "New York", "Tokyo", "London", "New York", "Tokyo", "London", "New York", "Tokyo"], "Pop": [90, 70, 60, 85, 60, 45, 70, 40, 32], } ) Date City Pop 0 2020-01-01 London 90 1 2020-01-01 New York 70 2 2020-01-01 Tokyo 60 3 2021-01-01 London 85 4 2021-01-01 New York 60 5 2021-01-01 Tokyo 45 6 2022-01-01 London 70 7 2022-01-01 New York 40 8 2022-01-01 Tokyo 32 df['pop_diff'] = df.groupby(['City'])['Pop'].diff() df['total'] = df.groupby('Date').Pop.transform('sum') df['total_shift'] = df.groupby('City')['total'].shift() df['pc'] = df['pop_diff'] / df['total_shift'] Date City Pop pop_diff total total_shift pc 0 2020-01-01 London 90 NaN 220 NaN NaN 1 2020-01-01 New York 70 NaN 220 NaN NaN 2 2020-01-01 Tokyo 60 NaN 220 NaN NaN 3 2021-01-01 London 85 -5.0 190 220.0 -0.022727 4 2021-01-01 New York 60 -10.0 190 220.0 -0.045455 5 2021-01-01 Tokyo 45 -15.0 190 220.0 -0.068182 6 2022-01-01 London 70 -15.0 142 190.0 -0.078947 7 2022-01-01 New York 40 -20.0 142 190.0 -0.105263 8 2022-01-01 Tokyo 32 -13.0 142 190.0 -0.068421
[ "Here is one way to do it with Pandas assign and pipe:\ndf = (\n df.assign(total=df.groupby(\"Date\")[\"Pop\"].transform(\"sum\"))\n .pipe(\n lambda df_: df_.assign(\n pc=df_.groupby([\"City\"])\n .agg({\"Pop\": \"diff\", \"total\": \"shift\"})\n .pipe(lambda x: x[\"Pop\"] / x[\"total\"])\n )\n )\n .drop(columns=\"total\")\n)\n\nThen:\nprint(df)\n# Output\n Date City Pop pc\n0 2020-01-01 London 90 NaN\n1 2020-01-01 New York 70 NaN\n2 2020-01-01 Tokyo 60 NaN\n3 2021-01-01 London 85 -0.022727\n4 2021-01-01 New York 60 -0.045455\n5 2021-01-01 Tokyo 45 -0.068182\n6 2022-01-01 London 70 -0.078947\n7 2022-01-01 New York 40 -0.105263\n8 2022-01-01 Tokyo 32 -0.068421\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074654758_pandas_python.txt
Q: How do I get variables from a Python script that are visible in the sh script? I need to send notifications about new ssh connections. I was able to implement this through the sh script. But it is difficult to maintain, I would like to use a python script instead. notify-lo.py #!/usr/bin/env python3 .... .... I made the script an executable file. chmod +x notify-lo.py I added my script call to the pam_exec module calls. session optional pam_exec.so /usr/local/bin/notify-lo.py Is it even possible to implement this? Will I be able to have access from my script to variables such as $PAM_TYPE, $PAM_SERVICE, $PAM_RUSER and others? UPDATE. An example of what my shell script is doing now (I want to replace it with python). #!/bin/bash TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" ID="xxxxxxxxxxxxxx" URL="https://api.telegram.org/bot$TOKEN/sendMessage" if [ "$PAM_TYPE" != "open_session" ] then exit 0 else curl -s -X POST $URL -d chat_id=$ID -d text="$(echo -e "Host: `hostname`\nUser: $PAM_USER\nHost: $PAM_RHOST")" > /dev/null 2>&1 exit 0 fi A: These variables that are available to the shell script are called environment variables and are separate from Python variables. To get environment variables in python, you need to use the os.environ dictionary. You can do it like this: import os pam_type = os.environ['PAM_TYPE'] print(pam_type) pam_service = os.environ['PAM_SERVICE'] print(pam_service) pam_ruser = os.environ['PAM_RUSER'] print(pam_ruser) Note that you need to remove the leading dollar sign ($)
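For reference, a minimal sketch of what the full notify-lo.py replacement might look like, mirroring the shell script above (the token, chat id and the requests dependency are assumptions; any HTTP client would do):
#!/usr/bin/env python3
import os
import socket
import sys

import requests  # assumed installed; urllib.request works as well

TOKEN = "xxxxxxxx"   # placeholder, as in the shell script
CHAT_ID = "xxxxxxxx"
URL = f"https://api.telegram.org/bot{TOKEN}/sendMessage"

# pam_exec exports the PAM items as environment variables
if os.environ.get("PAM_TYPE") != "open_session":
    sys.exit(0)

text = "Host: {}\nUser: {}\nHost: {}".format(
    socket.gethostname(),
    os.environ.get("PAM_USER", "?"),
    os.environ.get("PAM_RHOST", "?"),
)
requests.post(URL, data={"chat_id": CHAT_ID, "text": text})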
How do I get variables from a Python script that are visible in the sh script?
I need to send notifications about new ssh connections. I was able to implement this through the sh script. But it is difficult to maintain, I would like to use a python script instead. notify-lo.py #!/usr/bin/env python3 .... .... I made the script an executable file. chmod +x notify-lo.py I added my script call to the pam_exec module calls. session optional pam_exec.so /usr/local/bin/notify-lo.py Is it even possible to implement this? Will I be able to have access from my script to variables such as $PAM_TYPE, $PAM_SERVICE, $PAM_RUSER and others? UPDATE. An example of what my shell script is doing now (I want to replace it with python). #!/bin/bash TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" ID="xxxxxxxxxxxxxx" URL="https://api.telegram.org/bot$TOKEN/sendMessage" if [ "$PAM_TYPE" != "open_session" ] then exit 0 else curl -s -X POST $URL -d chat_id=$ID -d text="$(echo -e "Host: `hostname`\nUser: $PAM_USER\nHost: $PAM_RHOST")" > /dev/null 2>&1 exit 0 fi
[ "These variables that are available to the shell script are called environment variables and are separate from Python variables. To get environment variables in python, you need to use the os.environ dictionary. You can do it like this:\nimport os\n\n\npam_type = os.environ['PAM_TYPE']\nprint(pam_type)\n\npam_service = os.environ['PAM_SERVICE']\nprint(pam_service)\n\npam_ruser = os.environ['PAM_RUSER']\nprint(pam_ruser)\n\nNote that you need to remove the leading dollar sign ($)\n" ]
[ 2 ]
[]
[]
[ "linux", "python", "shell" ]
stackoverflow_0074668478_linux_python_shell.txt
Q: Add a reaction to a message with an interaction Discord.py 2.0 I can't add a reaction to an interaction message
@bot.tree.command()
@app_commands.describe(question="Give a title")
async def poll(interaction: discord.Interaction, question: str):

    emb = discord.Embed(title=f":bar_chart: {question}\n", type="rich")

    message = await interaction.response.send_message(embed=emb)

    emoji = ("✅")
    await interaction.message.add_reaction(emoji)

Also getting error: discord.app_commands.errors.CommandInvokeError: Command 'poll' raised an exception: AttributeError: 'NoneType' object has no attribute 'add_reaction'
A: I think it is:
await message.add_reaction(emoji)
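One caveat for discord.py 2.0: interaction.response.send_message() returns None, which is exactly why the 'NoneType' error appears. The sent message has to be fetched afterwards; a sketch of the command body:
emb = discord.Embed(title=f":bar_chart: {question}\n", type="rich")
await interaction.response.send_message(embed=emb)
# send_message() returns None in discord.py 2.0, so fetch the message just sent:
message = await interaction.original_response()
await message.add_reaction("✅")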
Add a reaction to a message with an interaction
Discord.py 2.0 I can't add a reaction to an interaction message
@bot.tree.command()
@app_commands.describe(question="Give a title")
async def poll(interaction: discord.Interaction, question: str):

    emb = discord.Embed(title=f":bar_chart: {question}\n", type="rich")

    message = await interaction.response.send_message(embed=emb)

    emoji = ("✅")
    await interaction.message.add_reaction(emoji)

Also getting error: discord.app_commands.errors.CommandInvokeError: Command 'poll' raised an exception: AttributeError: 'NoneType' object has no attribute 'add_reaction'
[ "I think it is:\nawait message.add_reaction(emoji)\n\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074668456_discord_discord.py_python.txt
Q: DevOps: Building a production server I'm completely new to devops and I'm quickly becoming overwhelmed with all the options. I write python web applications as a solo developer, on my local machine. I have a "staging" server on DigitalOcean, I have multiple websites under different subdomains (eg. myapp.staging.mywebsite.dev). I use git on my local machine and use branches to create multiple versions of my apps and then I use git to push my code to this server so I can see how it looks on the web. When I'm happy with my web app I want to be able to deploy it to a separate production server on DigitalOcean so I can get real users using my apps. I could just use git to push my code to a new server but are there any other options that will help me create a live site?
A: In our systems, we handle this scenario with a dedicated publish branch.
The GitHub action workflow for publishing starts like this:
name: Publish

on:
  push:
    branches: [publish]

so it's only triggered by a push to publish and it deploys to the gh-pages branch in our case.
All the draft work and the review versions live in different branches, with the publish branch itself set to be protected so nothing can be pushed to it without proper review.
We also have a review branch where we stage things for pre-publication review, and we merge into publish from review. A colleague has set that part up, but the review branch gets deployed into a staging subdomain similar to yours so we can review it before publication.
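If the deployment step itself is the open question, one option that stays in Python is Fabric; a minimal sketch (host, user, path and service name are all assumptions, substitute your DigitalOcean details):
from fabric import Connection  # assumed dependency: pip install fabric

def deploy(host="prod.mywebsite.dev", user="deploy", app_dir="/srv/myapp"):
    c = Connection(host, user=user)
    with c.cd(app_dir):
        c.run("git pull origin publish")          # or whatever your release branch is
        c.run("pip install -r requirements.txt")
        c.run("sudo systemctl restart myapp")     # hypothetical service name

if __name__ == "__main__":
    deploy()

A script like this can be run by hand or wired into a workflow like the one above as the final step.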
DevOps: Building a production server
I'm completely new to devops and I'm quickly becoming overwhelmed with all the options. I write python web applications as a solo developer, on my local machine. I have a "staging" server on DigitalOcean, I have multiple websites under different subdomains (eg. myapp.staging.mywebsite.dev). I use git on my local machine and use branches to create multiple versions of my apps and then I use git to push my code to this server so I can see how it looks on the web. When I'm happy with my web app I want to be able to deploy it to a separate production server on DigitalOcean so I can get real users using my apps. I could just use git to push my code to a new server but are there any other options that will help me create a live site?
[ "In our systems, we handle this scenario with a dedicated publish branch.\nThe GitHub action workflow for publishing starts like this:\nname: Publish\n\non:\n  push:\n    branches: [publish]\n\nso it's only triggered by a push to publish and it deploys to the gh-pages branch in our case.\nAll the draft work and the review versions live in different branches, with the publish branch itself set to be protected so nothing can be pushed to it without proper review.\nWe also have a review branch where we stage things for pre-publication review, and we merge into publish from review. A colleague has set that part up, but the review branch gets deployed into a staging subdomain similar to yours so we can review it before publication.\n" ]
[ 0 ]
[]
[]
[ "digital_ocean", "git", "python" ]
stackoverflow_0074667043_digital_ocean_git_python.txt
Q: Dict to DataFrame: Value instead of list in DataFrame When I convert a dictionary to a DataFrame, how do I stop each value being wrapped in a list? I tried converting with the pandas from_dict.
A: You can use orient='records'.
all_kpi.to_dict(orient='records')

Check out the pandas documentation for the different orientations.
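Going the other direction (dict to DataFrame, which the question asks about), one-element lists in the cells usually mean the values themselves are lists; without seeing all_kpi this is a guess, but a sketch of the unwrap:
import pandas as pd

d = {"kpi": [[1.2]], "count": [[7]]}   # hypothetical: each cell value wrapped in a list
df = pd.DataFrame(d)

# unwrap one-element lists cell by cell
df = df.applymap(lambda v: v[0] if isinstance(v, list) and len(v) == 1 else v)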
Dict to DataFrame: Value instead of list in DataFrame
When I convert a dictionary to a DataFrame, how do I stop each value being wrapped in a list? I tried converting with the pandas from_dict.
[ "You can use orient='records'.\nall_kpi.to_dict(orient='records')\n\nCheck out the pandas documentation for the different orientations.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074668474_dataframe_pandas_python.txt
Q: Fail to overwrite a 2D numpy.ndarray in a loop I found my program failed to overwrite an np.ndarray (the X variable) in the for loop by assignment statement like "X[i] = another np.ndarray with matched shape". I have no idea how this could happen... Codes: import numpy as np def qr_tridiagonal(T: np.ndarray): m, n = T.shape X = T.copy() Qt = np.identity(m) for i in range(n-1): ai = X[i, i] ak = X[i+1, i] c = ai/(ai**2 + ak**2)**.5 s = ak/(ai**2 + ak**2)**.5 # Givens rotation tmp1 = c*X[i] + s*X[i+1] tmp2 = c*X[i+1] - s*X[i] print("tmp1 before:", tmp1) print("X[i] before:", X[i]) X[i] = tmp1 X[i+1] = tmp2 print("tmp1 after:", tmp1) print("X[i] after:", X[i]) print() print(X) return Qt.T, X A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]]) Q, R = qr_tridiagonal(A) Output (the first 4 lines): tmp1 before: [1.41421356 1.41421356 0.70710678 0. ] X[i] before: [1 1 0 0] tmp1 after: [1.41421356 1.41421356 0.70710678 0. ] X[i] after: [1 1 0 0] Though X[i] is assigned by tmp1, the values in the array X[i] or X[i, :] remain unchanged. Hope somebody help me out.... Other info: the above is a function to compute QR factorization for tridiagonal matrices using Givens Rotation. I did check that assigning constant values to X[i] work, e.g. X[i] = 10 then the printed results fit this statement. But if X[i] = someArray then in my codes it would fail. I am not sure whether this is a particular issue triggered by the algorithm I was implementing in the above codes, because such scenarios never happen before. I did try to install new environments using conda to make sure that my python is not problematic. The above strange outputs should be able to re-generate on other devices. A: Many thanks to @hpaulj It turns out to be a problem of datatype. The program is ok but the input datatype is int, which results in intermediate trancation errors. A lesson learned: be aware of the dtype of np.ndarray!
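A minimal reproduction of the dtype pitfall, plus the one-line fix for the code above:
import numpy as np

X = np.array([[1, 1], [1, 1]])        # integer dtype inferred from the literals
X[0] = [1.41421356, 1.41421356]       # silently truncated on assignment
print(X[0])                           # -> [1 1]

X = X.astype(float)                   # or build A with dtype=float up front
X[0] = [1.41421356, 1.41421356]
print(X[0])                           # -> [1.41421356 1.41421356]

In qr_tridiagonal the equivalent fix is X = T.astype(float) instead of X = T.copy().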
Fail to overwrite a 2D numpy.ndarray in a loop
I found my program failed to overwrite an np.ndarray (the X variable) in the for loop by assignment statement like "X[i] = another np.ndarray with matched shape". I have no idea how this could happen... Codes: import numpy as np def qr_tridiagonal(T: np.ndarray): m, n = T.shape X = T.copy() Qt = np.identity(m) for i in range(n-1): ai = X[i, i] ak = X[i+1, i] c = ai/(ai**2 + ak**2)**.5 s = ak/(ai**2 + ak**2)**.5 # Givens rotation tmp1 = c*X[i] + s*X[i+1] tmp2 = c*X[i+1] - s*X[i] print("tmp1 before:", tmp1) print("X[i] before:", X[i]) X[i] = tmp1 X[i+1] = tmp2 print("tmp1 after:", tmp1) print("X[i] after:", X[i]) print() print(X) return Qt.T, X A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]]) Q, R = qr_tridiagonal(A) Output (the first 4 lines): tmp1 before: [1.41421356 1.41421356 0.70710678 0. ] X[i] before: [1 1 0 0] tmp1 after: [1.41421356 1.41421356 0.70710678 0. ] X[i] after: [1 1 0 0] Though X[i] is assigned by tmp1, the values in the array X[i] or X[i, :] remain unchanged. Hope somebody help me out.... Other info: the above is a function to compute QR factorization for tridiagonal matrices using Givens Rotation. I did check that assigning constant values to X[i] work, e.g. X[i] = 10 then the printed results fit this statement. But if X[i] = someArray then in my codes it would fail. I am not sure whether this is a particular issue triggered by the algorithm I was implementing in the above codes, because such scenarios never happen before. I did try to install new environments using conda to make sure that my python is not problematic. The above strange outputs should be able to re-generate on other devices.
[ "Many thanks to @hpaulj\nIt turns out to be a problem of datatype. The program is ok but the input datatype is int, which results in intermediate truncation errors.\nA lesson learned: be aware of the dtype of np.ndarray!\n" ]
[ 0 ]
[]
[]
[ "multidimensional_array", "numpy", "python" ]
stackoverflow_0074668253_multidimensional_array_numpy_python.txt
Q: End conversation in chatbot def send(): send = "You: " + e.get() txt.insert(END, "\n" + send) user = e.get().lower() if (user == "hello" or user == "hi" or user == "hey" or user == "oi" or user == "halo"): txt.insert(END, "\n" + "Rob: Hi there, how can I help you? \n 0.Contact seller directly \n 1.Order Tracking \n 2.Refund and Return \n 3.HELP \n Please enter a number.") elif (user == "whats your name?" or user == "what is your name" or user == "name" or user == "you called" or user == "what is your name?"): txt.insert(END, "\n" + "Rob: My name is Rob.") elif (user == "0"): ** ** elif (user == "1"): txt.insert(END, "\n" + "Rob: Your bag has arrived. \n 00. Homepage \n 01. ** \n 0. ** **") elif (user == "4"): txt.insert(END, "\n" + "Rob: Damage and Broken \n Sorry for the inconvenient. Please take a photo of your item and send it to seller as soon as possible. Thank you.\n 00. Homepage \n 01. ** \n 0. ** **") I am learning to creating a simple Tkinter GUI chatbot Problems I am facing: ** ** = I want to end the conversation when user enters 0, and it will close this conversation and move to another python file. ** = when the user enter 01, chatbot will say goodbye and close the conversation. Thank you. A: If you want a method to end in python you can just use return. In your ** ** case just type return under the if statement and it will ends. And in your ** case you just print goodbye or so and then use return.
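A sketch of what those two branches could look like; root and seller_chat.py are assumptions, since the snippet does not show the Tk root or the name of the second file:
import subprocess
import sys
from tkinter import END

def handle_menu(user, txt, root):
    """Return True if this input ended the conversation."""
    if user == "0":
        txt.insert(END, "\nRob: Connecting you to the seller...")
        root.destroy()                                        # close this conversation
        subprocess.Popen([sys.executable, "seller_chat.py"])  # hypothetical next script
        return True
    if user == "01":
        txt.insert(END, "\nRob: Goodbye!")
        root.after(1500, root.destroy)                        # show goodbye, then close
        return True
    return False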
End conversation in chatbot
def send(): send = "You: " + e.get() txt.insert(END, "\n" + send) user = e.get().lower() if (user == "hello" or user == "hi" or user == "hey" or user == "oi" or user == "halo"): txt.insert(END, "\n" + "Rob: Hi there, how can I help you? \n 0.Contact seller directly \n 1.Order Tracking \n 2.Refund and Return \n 3.HELP \n Please enter a number.") elif (user == "whats your name?" or user == "what is your name" or user == "name" or user == "you called" or user == "what is your name?"): txt.insert(END, "\n" + "Rob: My name is Rob.") elif (user == "0"): ** ** elif (user == "1"): txt.insert(END, "\n" + "Rob: Your bag has arrived. \n 00. Homepage \n 01. ** \n 0. ** **") elif (user == "4"): txt.insert(END, "\n" + "Rob: Damage and Broken \n Sorry for the inconvenient. Please take a photo of your item and send it to seller as soon as possible. Thank you.\n 00. Homepage \n 01. ** \n 0. ** **") I am learning to creating a simple Tkinter GUI chatbot Problems I am facing: ** ** = I want to end the conversation when user enters 0, and it will close this conversation and move to another python file. ** = when the user enter 01, chatbot will say goodbye and close the conversation. Thank you.
[ "If you want a method to end in python you can just use return. In your ** ** case just type return under the if statement and it will ends. And in your ** case you just print goodbye or so and then use return.\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074668415_python_tkinter.txt
Q: django rest framework translation does not work for me I tried django rest framework internationalization. doc drf internationalization From official drf documentation a set this code in settings.py from django.utils.translation import gettext_lazy as _ MIDDLEWARE = [ ... 'django.middleware.locale.LocaleMiddleware' ] LANGUAGE_CODE = "it" LANGUAGES = ( ('en', _('English')), ('it', _('Italian')), ('fr', _('French')), ('es', _('Spanish')), ) TIME_ZONE = 'UTC' USE_I18N = True but when I try out POST api rest curl -X 'POST' \ 'http://172.18.0.1:7000/appjud/api/v1/reset-password/' \ -H 'accept: application/json' \ -H 'Authorization: Token 014cb7982f31767a8ce07c9f216653d4674baeaf' \ -H 'Content-Type: application/json' \ -d '{ "new_password": "", "confirm_password": "" }' Response body [ { "newpassword": [ "This field is required." ], "confirmpassword": [ "This field is required." ] } ] Response headers allow: POST,OPTIONS content-language: en content-length: 91 content-type: application/json cross-origin-opener-policy: same-origin date: Sat,03 Dec 2022 16:14:16 GMT referrer-policy: same-origin server: WSGIServer/0.2 CPython/3.9.15 vary: Accept,Accept-Language,Origin x-content-type-options: nosniff x-frame-options: DENY UPDATE MIDDLEWARE = [ 'corsheaders.middleware.CorsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', ... ] As we can see it print "This field is required." but I would like "Questo campo è obbligatorio." What Does I miss in config settings.py file? A: Looks like you added LocaleMiddleware to the end of middlewares list. But order is matter here. From the docs: Because middleware order matters, follow these guidelines: Make sure it’s one of the first middleware installed. It should come after SessionMiddleware, because LocaleMiddleware makes use of session data. And it should come before CommonMiddleware because CommonMiddleware needs an activated language in order to resolve the requested URL. If you use CacheMiddleware, put LocaleMiddleware after it. Try to change order according to this note.
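Since the UPDATE already shows LocaleMiddleware in the documented position, it is also worth ruling out content negotiation: the curl call sends no Accept-Language header, and the Vary: Accept-Language response header shows the language is negotiated per request. A diagnostic sketch (URL and token copied from the question):
import requests  # assumed available for testing

r = requests.post(
    "http://172.18.0.1:7000/appjud/api/v1/reset-password/",
    json={"new_password": "", "confirm_password": ""},
    headers={
        "Authorization": "Token 014cb7982f31767a8ce07c9f216653d4674baeaf",
        "Accept-Language": "it",   # force Italian explicitly
    },
)
print(r.headers.get("Content-Language"), r.json())

If this request comes back in Italian, the middleware is working and only the fallback to LANGUAGE_CODE needs investigating.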
django rest framework translation does not work for me
I tried django rest framework internationalization. doc drf internationalization From official drf documentation a set this code in settings.py from django.utils.translation import gettext_lazy as _ MIDDLEWARE = [ ... 'django.middleware.locale.LocaleMiddleware' ] LANGUAGE_CODE = "it" LANGUAGES = ( ('en', _('English')), ('it', _('Italian')), ('fr', _('French')), ('es', _('Spanish')), ) TIME_ZONE = 'UTC' USE_I18N = True but when I try out POST api rest curl -X 'POST' \ 'http://172.18.0.1:7000/appjud/api/v1/reset-password/' \ -H 'accept: application/json' \ -H 'Authorization: Token 014cb7982f31767a8ce07c9f216653d4674baeaf' \ -H 'Content-Type: application/json' \ -d '{ "new_password": "", "confirm_password": "" }' Response body [ { "newpassword": [ "This field is required." ], "confirmpassword": [ "This field is required." ] } ] Response headers allow: POST,OPTIONS content-language: en content-length: 91 content-type: application/json cross-origin-opener-policy: same-origin date: Sat,03 Dec 2022 16:14:16 GMT referrer-policy: same-origin server: WSGIServer/0.2 CPython/3.9.15 vary: Accept,Accept-Language,Origin x-content-type-options: nosniff x-frame-options: DENY UPDATE MIDDLEWARE = [ 'corsheaders.middleware.CorsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', ... ] As we can see it print "This field is required." but I would like "Questo campo è obbligatorio." What Does I miss in config settings.py file?
[ "Looks like you added LocaleMiddleware to the end of the middleware list. But order matters here. From the docs:\n\nBecause middleware order matters, follow these guidelines:\nMake sure it’s one of the first middleware installed.\nIt should come after SessionMiddleware, because LocaleMiddleware makes use of session data. And it should come before CommonMiddleware\nbecause CommonMiddleware needs an activated language in order to\nresolve the requested URL.\nIf you use CacheMiddleware, put LocaleMiddleware after it.\n\nTry to change the order according to this note.\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "internationalization", "python" ]
stackoverflow_0074668527_django_django_rest_framework_internationalization_python.txt
Q: Applying Riemann–Liouville derivative to Lorenz 3D equation I am using a module in python differint to solve the system of Lorenz 3D equation. After running my 3D system over differint ==> Riemann-Liouville operator for alpha value 1 the original equation and Riemann-Liouville results are not same . The code is mentioned below from scipy.integrate import odeint import numpy as np import matplotlib.pyplot as plt import differint.differint as df t = np.arange(1 , 50, 0.01) def Lorenz(state,t): # unpack the state vector x = state[0] y = state[1] z = state[2] a=10;b=8/3;c=28 xd = a*(y -x) yd = - y +c*x - x*z zd = -b*z + x*y return [xd,yd,zd] state0 = [1,1,1] state = odeint(Lorenz, state0, t) #Simple lorentz eqaution plot plt.subplot(2, 2, 1) plt.plot(state[:,0],state[:,1]) plt.subplot(2, 2, 2) plt.plot(state[:,0],state[:,2]) plt.subplot(2, 2, 3) plt.plot(state[:,1],state[:,2]) plt.show() DF = df.RL(1, state, 0, len(t), len(t)) # Riemann-Liouville plots state=DF plt.subplot(2, 2, 1) plt.plot(state[:,0],state[:,1]) plt.subplot(2,2, 2) plt.plot(state[:,0],state[:,2]) plt.subplot(2, 2, 3) plt.plot(state[:,1],state[:,2]) plt.show() Am i making mistake anywhere or this is the true result? as you can see in eqaution (2) when we put α = 1 we will get the results same as of non fractional system (1). Interested in calculating equation (3) for different values of alpha I think this perhaps the idea of mine is incorrect, because what i am doing is first calculating the system of differential equation using state = odeint(Lorenz, state0, t) followed by the differint module DF = df.RL(1, state, 0, len(t), len(t)) Graphs for lorenz eqaution Graphs for RL fractional for alpha = 1 in these graphs as u can see that the trajectories are absolutely same but the scaling become different A: The call to DF.RL requires a function of the differential equation, however, you used the solution to a differential equation - state (output of odeint call). From the package site https://pypi.org/project/differint/ def f(x): return x**0.5 DF = df.RL(0.5, f) print(DF) Can you try DF = df.RL(1, Lorenz, 0, len(t), len(t)) ?
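As written that call will probably still fail, because Lorenz takes two arguments and df.RL expects either a one-argument callable or a 1-D array of sampled function values (per the project README). Assuming the installed differint accepts arrays, one workaround is to apply the operator to each coordinate's time series separately:
import numpy as np
import differint.differint as df

alpha = 1  # change the order here, e.g. 0.9

# state has shape (len(t), 3); treat each column as sampled function values
DF = np.column_stack(
    [df.RL(alpha, state[:, k], t[0], t[-1], len(t)) for k in range(state.shape[1])]
)

Note also that fractionally differentiating the integer-order solution is not the same thing as solving the fractional system (3); that would need a fractional ODE solver such as a predictor-corrector scheme.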
Applying Riemann–Liouville derivative to Lorenz 3D equation
I am using a module in python differint to solve the system of Lorenz 3D equation. After running my 3D system over differint ==> Riemann-Liouville operator for alpha value 1 the original equation and Riemann-Liouville results are not same . The code is mentioned below from scipy.integrate import odeint import numpy as np import matplotlib.pyplot as plt import differint.differint as df t = np.arange(1 , 50, 0.01) def Lorenz(state,t): # unpack the state vector x = state[0] y = state[1] z = state[2] a=10;b=8/3;c=28 xd = a*(y -x) yd = - y +c*x - x*z zd = -b*z + x*y return [xd,yd,zd] state0 = [1,1,1] state = odeint(Lorenz, state0, t) #Simple lorentz eqaution plot plt.subplot(2, 2, 1) plt.plot(state[:,0],state[:,1]) plt.subplot(2, 2, 2) plt.plot(state[:,0],state[:,2]) plt.subplot(2, 2, 3) plt.plot(state[:,1],state[:,2]) plt.show() DF = df.RL(1, state, 0, len(t), len(t)) # Riemann-Liouville plots state=DF plt.subplot(2, 2, 1) plt.plot(state[:,0],state[:,1]) plt.subplot(2,2, 2) plt.plot(state[:,0],state[:,2]) plt.subplot(2, 2, 3) plt.plot(state[:,1],state[:,2]) plt.show() Am i making mistake anywhere or this is the true result? as you can see in eqaution (2) when we put α = 1 we will get the results same as of non fractional system (1). Interested in calculating equation (3) for different values of alpha I think this perhaps the idea of mine is incorrect, because what i am doing is first calculating the system of differential equation using state = odeint(Lorenz, state0, t) followed by the differint module DF = df.RL(1, state, 0, len(t), len(t)) Graphs for lorenz eqaution Graphs for RL fractional for alpha = 1 in these graphs as u can see that the trajectories are absolutely same but the scaling become different
[ "The call to DF.RL requires a function of the differential equation, however, you used the solution to a differential equation - state (output of odeint call). From the package site https://pypi.org/project/differint/\ndef f(x):\n return x**0.5\n\nDF = df.RL(0.5, f)\nprint(DF)\n\nCan you try DF = df.RL(1, Lorenz, 0, len(t), len(t)) ?\n" ]
[ 0 ]
[]
[]
[ "derivative", "differential_equations", "python", "python_3.x" ]
stackoverflow_0070625237_derivative_differential_equations_python_python_3.x.txt
Q: How to get JSON data from a post in grequests library (async-requests) python So I'm trying to make an async-requests thing and I can't get the JSON data from a post
import grequests as requests

headers = {SOME HEADERS}

data = {
SOME DATA...
}

r = requests.post(
    "some url (NOT A REAL URL)",
    headers=headers,
    data=data
)

var = r.json["SOME VALUE"]

NOTE: THE VALUES IN THIS CODE AREN'T REAL
I tried to get the JSON value from r and it didn't work; I expected a JSON value from r.json["SOME VALUE"] but instead I got an error: " 'builtin_function_or_method' object is not subscriptable "
A: r.json is a method. So you need to call it with parentheses first:
var = r.json() #type(var) -- > dictionary
var = var['SOME VALUE']

#or (shorter)
var = r.json()['SOME VALUE']
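Also worth noting for grequests specifically: grequests.post() only builds an AsyncRequest; it has to be sent (for example with grequests.map) before there is a response to call .json() on. A sketch reusing the headers and data from the question:
import grequests

reqs = [grequests.post("https://example.invalid/api",  # placeholder URL
                       headers=headers, data=data)]
responses = grequests.map(reqs)  # actually performs the requests
# each entry is a requests.Response, or None if that request failed
var = responses[0].json()["SOME VALUE"]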
How to get JSON data from a post in grequests library (async-requests) python
so im trying to make an async-requests thingy and i cant get the json data from a post import grequests as requests headers = {SOME HEADERS} data = { SOME DATA... } r = requests.post( "some url (NOT A REAL URL)", headers=headers, data=data ) var = r.json["SOME VALUE"] NOTE: THE VALUES IN TH IS CODE AREN'T REAL I tried to get the json value from r and it didnt work, i expected a json value from the r.json["SOME VALUE"] but instead i got an error: " 'builtin_function_or_method' object is not subscriptable "
[ "r.json is a method. So you need to call it with parentheses first:\nvar = r.json() #type(var) -- > dictionary\nvar = var['SOME VALUE']\n\n#or (shorter)\nvar = r.json()['SOME VALUE']\n\n" ]
[ 2 ]
[]
[]
[ "grequests", "python" ]
stackoverflow_0074664802_grequests_python.txt
Q: how to run selenium script from a bash file I have a Windows laptop on which I have written a Selenium script in Python that creates a GitHub repository. It works fine when I run the Python file, but it gives an error when I try to run the script from a bash file. What should I do?
my bash file:
python3 login.py

my python file which I am calling from bash:
def login():
    chr_options = Options()
    chr_options.add_experimental_option("detach", True)

    driver = webdriver.Chrome(ChromeDriverManager().install(), options=chr_options)
    driver.get('https://github.com/new')

This is what I get when I try to run the Python file with bash from my Ubuntu terminal (my Selenium script works fine when I run it from the Windows terminal on the same Windows 10 laptop):
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot find Chrome binary
Stacktrace:
#0 0x55de662442a3 <unknown>
#1 0x55de66002f77 <unknown>
#2 0x55de66029047 <unknown>
#3 0x55de660277d0 <unknown>
#4 0x55de660680b7 <unknown>
#5 0x55de66067a5f <unknown>
#6 0x55de6605f903 <unknown>
#7 0x55de66032ece <unknown>
#8 0x55de66033fde <unknown>
#9 0x55de6629463e <unknown>
#10 0x55de66297b79 <unknown>
#11 0x55de6627a89e <unknown>
#12 0x55de66298a83 <unknown>
#13 0x55de6626d505 <unknown>
#14 0x55de662b9ca8 <unknown>
#15 0x55de662b9e36 <unknown>
#16 0x55de662d5333 <unknown>
#17 0x7f87539e4b43 <unknown>
A: When you run the Python file in Windows it uses the Chrome installed on your Windows machine, but when you work in a bash terminal you do it on a Linux machine that runs as a virtual machine in your Windows.
So it is a separate machine and it cannot use Chrome from your Windows. It needs to have its own Chrome. Additionally, this virtual machine does not have a GUI, so you will need to run Selenium in headless mode.
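A sketch of the headless setup for the Linux side; Chrome itself still has to be installed inside the WSL/Ubuntu environment first (for example the google-chrome-stable package):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

chr_options = Options()
chr_options.add_argument("--headless=new")        # "--headless" on older Chrome builds
chr_options.add_argument("--no-sandbox")          # often needed under WSL/containers
chr_options.add_argument("--disable-dev-shm-usage")

driver = webdriver.Chrome(ChromeDriverManager().install(), options=chr_options)
driver.get("https://github.com/new")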
how to run selenium script from a bash file
I have a windows laptop in which i have written a selenium script written in python which creates a github repository. It works fine when a run the python file but it gives error when i try to run the script from a bash file. what should i do my bash file: python3 login.py my python file which i am calling from bash: def login(): chr_options = Options() chr_options.add_experimental_option("detach", True) driver = webdriver.Chrome(ChromeDriverManager().install(), options=chr_options) driver.get('https://github.com/new') this is what i get when i try to run the python file with bash from my ubuntu terminal and i have a windows machine and the python file works fine when i try to run it from the windows terminal selenium.common.exceptions.WebDriverException: Message: unknown error: cannot find Chrome binaryStacktrace:#0 0x55de662442a3 <unknown>#1 0x55de66002f77 <unknown>#2 0x55de66029047 <unknown>#3 0x55de660277d0 <unknown>#4 0x55de660680b7 <unknown>#5 0x55de66067a5f <unknown>#6 0x55de6605f903 <unknown>#7 0x55de66032ece <unknown>#8 0x55de66033fde <unknown>#9 0x55de6629463e <unknown>#10 0x55de66297b79 <unknown>#11 0x55de6627a89e <unknown>#12 0x55de66298a83 <unknown>#13 0x55de6626d505 <unknown>#14 0x55de662b9ca8 <unknown>#15 0x55de662b9e36 <unknown>#16 0x55de662d5333 <unknown>#17 0x7f87539e4b43 <unknown> i have a windows 10 laptop in which my selenium script works fine, but when i run the file through the bash terminal it gives this error selenium.common.exceptions.WebDriverException: Message: unknown error: cannot find Chrome binary Stacktrace: #0 0x55de662442a3 <unknown> #1 0x55de66002f77 <unknown> #2 0x55de66029047 <unknown> #3 0x55de660277d0 <unknown> #4 0x55de660680b7 <unknown>
[ "When you run the Python file in Windows it uses the Chrome installed on your Windows machine, but when you work in a bash terminal you do it on a Linux machine that runs as a virtual machine in your Windows.\nSo it is a separate machine and it cannot use Chrome from your Windows. It needs to have its own Chrome. Additionally, this virtual machine does not have a GUI, so you will need to run Selenium in headless mode.\n" ]
[ 0 ]
[]
[]
[ "automation", "bash", "python", "selenium", "selenium_chromedriver" ]
stackoverflow_0074666192_automation_bash_python_selenium_selenium_chromedriver.txt
Q: Incorrect Output to .csv file I am getting an error when I try to export the output to a .csv file. import csv import random header = ['Results'] file = open("populationModel5.csv", "w") import random startPopulation = 50 infantMortality = 25 agriculture = 5 disasterChance = 10 fertilityx = 18 fertilityy = 35 food = 0 peopleDictionary = [] class Person: def __init__(self, age): self.gender = random.randint(0,1) self.age = age def harvest(food, agriculture): ablePeople = 0 for person in peopleDictionary: if person.age > 8: ablePeople +=1 food += ablePeople * agriculture if food < len(peopleDictionary): del peopleDictionary[0:int(len(peopleDictionary)-food)] food = 0 else: food -= len(peopleDictionary) def reproduce(fertilityx, fertilityy): for person in peopleDictionary: if person.gender == 1: if person.age > fertilityx: if person.age < fertilityy: if random.randint(0,5)==1: peopleDictionary.append(Person(0)) def beginSim(): for x in range(startPopulation): peopleDictionary.append(Person(random.randint(18,50))) def runYear(food, agriculture, fertilityx, fertilityy): harvest(food, agriculture) reproduce(fertilityx, fertilityy) for person in peopleDictionary: if person.age > 80: peopleDictionary.remove(person) else: person.age +=1 print(len(peopleDictionary)) beginSim() while len(peopleDictionary)<100000 and len(peopleDictionary) > 1: runYear(food, agriculture, fertilityx, fertilityy) print(peopleDictionary) db = csv.writer(file) db.writerow(header) for person in peopleDictionary: db.writerow([person]) file.close() I expected the output to export to a .csv file. The code outputs perfectly in the interpreter but it gives the following error when I export it: [<main.Person object at 0x0000025895762278>, <main.Person object at 0x0000025895770C18>, <main.Person object at 0x0000025894F37940>, A: It looks like the error is happening because you're trying to write an object of the Person class to the CSV file, but the csv.writerow method expects a string or a list of strings as input. To fix the error, you can modify your code to convert the Person object to a string before writing it to the CSV file. One way to do this is to define a str method for the Person class, which will be called whenever you try to convert an instance of the class to a string. Here's an example of how you could define the str method for the Person class: class Person: def __init__(self, age): self.gender = random.randint(0,1) self.age = age def __str__(self): return "age: {}, gender: {}".format(self.age, self.gender) With this method defined, you can write the Person objects to the CSV file like this: for person in peopleDictionary: db.writerow([str(person)]) This will convert the Person objects to strings using the str method, and then write the strings to the CSV file.
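A small alternative worth considering: writing the attributes as separate columns keeps the CSV machine-readable instead of packing a formatted string into one cell. A sketch of the final block:
db = csv.writer(file)
db.writerow(["age", "gender"])                 # real column headers
for person in peopleDictionary:
    db.writerow([person.age, person.gender])   # one value per column
file.close()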
Incorrect Output to .csv file
I am getting an error when I try to export the output to a .csv file. import csv import random header = ['Results'] file = open("populationModel5.csv", "w") import random startPopulation = 50 infantMortality = 25 agriculture = 5 disasterChance = 10 fertilityx = 18 fertilityy = 35 food = 0 peopleDictionary = [] class Person: def __init__(self, age): self.gender = random.randint(0,1) self.age = age def harvest(food, agriculture): ablePeople = 0 for person in peopleDictionary: if person.age > 8: ablePeople +=1 food += ablePeople * agriculture if food < len(peopleDictionary): del peopleDictionary[0:int(len(peopleDictionary)-food)] food = 0 else: food -= len(peopleDictionary) def reproduce(fertilityx, fertilityy): for person in peopleDictionary: if person.gender == 1: if person.age > fertilityx: if person.age < fertilityy: if random.randint(0,5)==1: peopleDictionary.append(Person(0)) def beginSim(): for x in range(startPopulation): peopleDictionary.append(Person(random.randint(18,50))) def runYear(food, agriculture, fertilityx, fertilityy): harvest(food, agriculture) reproduce(fertilityx, fertilityy) for person in peopleDictionary: if person.age > 80: peopleDictionary.remove(person) else: person.age +=1 print(len(peopleDictionary)) beginSim() while len(peopleDictionary)<100000 and len(peopleDictionary) > 1: runYear(food, agriculture, fertilityx, fertilityy) print(peopleDictionary) db = csv.writer(file) db.writerow(header) for person in peopleDictionary: db.writerow([person]) file.close() I expected the output to export to a .csv file. The code outputs perfectly in the interpreter but it gives the following error when I export it: [<main.Person object at 0x0000025895762278>, <main.Person object at 0x0000025895770C18>, <main.Person object at 0x0000025894F37940>,
[ "It looks like the error is happening because you're trying to write an object of the Person class to the CSV file, but the csv.writerow method expects a string or a list of strings as input.\nTo fix the error, you can modify your code to convert the Person object to a string before writing it to the CSV file. One way to do this is to define a __str__ method for the Person class, which will be called whenever you try to convert an instance of the class to a string.\nHere's an example of how you could define the __str__ method for the Person class:\nclass Person:\n    def __init__(self, age):\n        self.gender = random.randint(0,1)\n        self.age = age\n\n    def __str__(self):\n        return \"age: {}, gender: {}\".format(self.age, self.gender)\n\nWith this method defined, you can write the Person objects to the CSV file like this:\nfor person in peopleDictionary:\n    db.writerow([str(person)])\n\nThis will convert the Person objects to strings using the __str__ method, and then write the strings to the CSV file.\n" ]
[ 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0074667718_csv_python.txt
Q: Loop through outlook mails with python I receive a daily mail with the subject “XYZ” containing a CSV file with some information. I also got this python code which goes into my outlook account, looks for the mail with the subject “XYZ” and extracts the attachment and finally extends a certain database with the new information from the attachment. However, this code only gets the most recent email so that I need to run that code daily to keep it updated. If I’m not able to do that daily, lets say because I am on vacation, my database is going to miss some information from previous days. outlook = Dispatch("Outlook.Application").GetNamespace("MAPI") inbox = outlook.GetDefaultFolder("6") all_inbox = inbox.Items val_date = date.date.today() sub_today = 'XYZ' att_today = 'XYZ.csv' """ Look for the email with the given subject and the attachment name """ for msg in all_inbox: if msg.Subject == sub_today: break for att in msg.Attachments: if att.FileName == att_today: break """ save the update and extend the database """ att.SaveAsFile('U:/' + 'XYZ.csv') read_path = "U:/" write_path = "V:/Special_Path/" df_Update = pd.read_csv(read_path + 'XYZ.csv',parse_dates=['Date'],skiprows=2) del df_Update['Unnamed: 14'] df_DB_Old = pd.read_csv(write_path + 'DataBase_XYZ.csv',parse_dates=['Date']) DB_DatumMax = df_DB_Old['Date'].max() Upd_DatumMax = df_Update['Date'].max() if df_DB_Old['Date'].isin(pd.Series(Upd_DatumMax)).sum()>0: print(' ') print('Date already exists!') print(' ') input("press enter to continue...") # exit() sys.exit() else: df_DB_New = pd.concat([df_DB_Old, df_Update]) df_DB_New.to_csv(write_path + 'XYZ.csv',index=False) Now I would like to extend that code, so that it checks when the last time was the database was updated and then it should extract the information from all the emails with subject “XYZ” starting from the day it was last updated. Example: I run the code on the 01.10.2022 Im on vacation for 2 days The database was last updated on 01.10 On 04.10 im back and I run the code again. The code will look for the email from 02.10 & from 03.10 and of course also for the latest mail 04.10 extract the csv and extend the database My first idea is to create a new folder where all the mails with subject “XYZ” automatically are moved. [Done!] Now I would check for the latest Date in my database. And now I have no clue how to proceed. I guess I need to loop though my new folder but only starting with mails which havent been extracted to the databse. A: Firstly, there is no reason to loop though all messages in a folder - that would be extremely slow in folders with thousands of messages, especially if the cached mode is off. Use Items.Restrict or Items.Find/FindNext - let the store provider do the heavy lifting for you. You will be able to specify a restriction on the Subject and/or ReceivedTime property". See the examples at https://learn.microsoft.com/en-us/office/vba/api/outlook.items.restrict
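A sketch of what that could look like for this script; the filter syntax follows the linked docs, and the three-day window is only an example (derive it from the database's max Date instead):
from datetime import datetime, timedelta
from win32com.client import Dispatch

outlook = Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)  # olFolderInbox

since = datetime.now() - timedelta(days=3)
flt = "[ReceivedTime] >= '{}' AND [Subject] = 'XYZ'".format(
    since.strftime("%m/%d/%Y %I:%M %p")
)
items = inbox.Items.Restrict(flt)
items.Sort("[ReceivedTime]", True)  # newest first

for i, msg in enumerate(items):
    for att in msg.Attachments:
        if att.FileName == "XYZ.csv":
            att.SaveAsFile("U:/XYZ_{}.csv".format(i))  # one file per matching mail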
Loop through outlook mails with python
I receive a daily mail with the subject “XYZ” containing a CSV file with some information. I also got this python code which goes into my outlook account, looks for the mail with the subject “XYZ” and extracts the attachment and finally extends a certain database with the new information from the attachment. However, this code only gets the most recent email so that I need to run that code daily to keep it updated. If I’m not able to do that daily, lets say because I am on vacation, my database is going to miss some information from previous days. outlook = Dispatch("Outlook.Application").GetNamespace("MAPI") inbox = outlook.GetDefaultFolder("6") all_inbox = inbox.Items val_date = date.date.today() sub_today = 'XYZ' att_today = 'XYZ.csv' """ Look for the email with the given subject and the attachment name """ for msg in all_inbox: if msg.Subject == sub_today: break for att in msg.Attachments: if att.FileName == att_today: break """ save the update and extend the database """ att.SaveAsFile('U:/' + 'XYZ.csv') read_path = "U:/" write_path = "V:/Special_Path/" df_Update = pd.read_csv(read_path + 'XYZ.csv',parse_dates=['Date'],skiprows=2) del df_Update['Unnamed: 14'] df_DB_Old = pd.read_csv(write_path + 'DataBase_XYZ.csv',parse_dates=['Date']) DB_DatumMax = df_DB_Old['Date'].max() Upd_DatumMax = df_Update['Date'].max() if df_DB_Old['Date'].isin(pd.Series(Upd_DatumMax)).sum()>0: print(' ') print('Date already exists!') print(' ') input("press enter to continue...") # exit() sys.exit() else: df_DB_New = pd.concat([df_DB_Old, df_Update]) df_DB_New.to_csv(write_path + 'XYZ.csv',index=False) Now I would like to extend that code, so that it checks when the last time was the database was updated and then it should extract the information from all the emails with subject “XYZ” starting from the day it was last updated. Example: I run the code on the 01.10.2022 Im on vacation for 2 days The database was last updated on 01.10 On 04.10 im back and I run the code again. The code will look for the email from 02.10 & from 03.10 and of course also for the latest mail 04.10 extract the csv and extend the database My first idea is to create a new folder where all the mails with subject “XYZ” automatically are moved. [Done!] Now I would check for the latest Date in my database. And now I have no clue how to proceed. I guess I need to loop though my new folder but only starting with mails which havent been extracted to the databse.
[ "Firstly, there is no reason to loop through all messages in a folder - that would be extremely slow in folders with thousands of messages, especially if the cached mode is off.\nUse Items.Restrict or Items.Find/FindNext - let the store provider do the heavy lifting for you. You will be able to specify a restriction on the Subject and/or ReceivedTime property. See the examples at https://learn.microsoft.com/en-us/office/vba/api/outlook.items.restrict\n" ]
[ 0 ]
[]
[]
[ "email", "office_automation", "outlook", "python", "python_3.x" ]
stackoverflow_0074640759_email_office_automation_outlook_python_python_3.x.txt
Q: Why do I get a TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' Sorry for the short essay but I think context is important here. This is for a course but I have struggled the entire semester with grasping this and the teacher hasn't been much help to me personally. I have a dataset with 30 categories and 500 images in each category (google maps stills of specific terrain). The goal is to process the image features (I'm using opencv SIFT) and conduct PCA on the features. I need to run the images through a deep learning model using fisher vectors and then plot some information based on the model. The problem is I keep getting random errors that I don't believe trace to the original problem. I know there is a crucial issue with my code, but I don't know what I don't know about it so I'm hoping the geniuses on stack can help identify my foible(s). Here is the snippet where I am currently getting stuck: #Ugly code, very sorry for ind, label in enumerate(os.listdir(img_direc)): #labels is storing the integer values of each category of the images ('swamp_lands', 'mountain', etc) labels.append(ind) #temporary list to store features desc_list = [] for i in os.listdir(f"{img_direc}\\{label}")[:400]: #process_image reads each file, converts to grayscale and resizes to a 224,224 image img = process_image(f"{img_direc}\\{label}\\{i}") _, desc = SIFT_Process_Keypoints(img) #first real point of confusion. I know there is a need to create either a 0's or 1's matrix #to fill in any none-type gaps but I'm struggling with the theory and code behind that feat_mtx = np.ones((224,224)) try: feat_mtx = np.zeros(desc.shape) for int, j in enumerate(desc): feat_mtx[int] = j except: pass #Do I need the mean? When trying to conduct PCA on the features I kept getting errors until #I reduced the values to a single number but it still wasn't giving me the right information desc_list.append(np.mean(feat_mtx)) desc_list = np.array(desc_list, dtype='object') desc_list = desc_list.flatten() train.append(desc_list) Does it just feel like my code is out of order? Or I'm missing a certain middle function somewhere. Any help with clarification would be greatly appreciated, I will be working actively on this code to try and gain some further understanding. Currently, the above code is yielding line 55, in <module> desc_list.append(np.mean(desc)) File "<__array_function__ internals>", line 180, in mean line 3432, in mean return _methods._mean(a, axis=axis, dtype=dtype, line 192, in _mean ret = ret / rcount TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' after processing like 10 categories of images without an error. A: One issue with your code is that you are using the mean function on the desc array, which is not a valid input for the mean function because it is not a numerical array. The desc array is a 2D array of shape (N, 128), where N is the number of keypoints detected by the SIFT algorithm, and 128 is the length of the feature vector for each keypoint. To compute the mean of the desc array, you can use the mean function along one of the axes, for example: desc_mean = np.mean(desc, axis=0) This will compute the mean of each column in the desc array, and return a 1D array of shape (128,) with the mean feature vector. Another issue with your code is that you are trying to create a feat_mtx array of shape (224, 224) and fill it with the feature vectors from the desc array. 
This will not work because the desc array has a different shape than the feat_mtx array ((N, 128) vs (224, 224)), and it is not possible to directly fill the feat_mtx array with the feature vectors from the desc array. Instead, you can create a feat_mtx array of shape (N, 128) and fill it with the feature vectors from the desc array, like this: feat_mtx = np.zeros((desc.shape[0], desc.shape[1])) for int, j in enumerate(desc): feat_mtx[int] = j This will create a feat_mtx array with the same shape as the desc array, and fill it with the feature vectors from the desc array. Once you have fixed these issues with your code, you should be able to compute the mean of the feat_mtx array and append it to the desc_list array, like this: # compute the mean of the feat_mtx array feat_mtx_mean = np.mean(feat_mtx, axis=0) # append the mean of the feat_mtx array to the desc_list array desc_list.append(feat_mtx_mean) With these changes, your code should be able to process all of the images in the dataset and compute the mean feature vectors for each category. You can then use these mean feature vectors as input for the PCA and deep learning model.
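One more detail the traceback points at: OpenCV's SIFT returns desc as None when it finds no keypoints in an image, and np.mean(None) is exactly what raises the NoneType error. A guard in the inner loop (hypothetical, since process_image and SIFT_Process_Keypoints are not shown):
_, desc = SIFT_Process_Keypoints(img)
if desc is None:                 # SIFT found no keypoints in this image
    continue                     # skip it, or append a zero placeholder instead
desc_list.append(np.mean(desc))  # now safe: desc is a real (N, 128) array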
Why do I get a TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
Sorry for the short essay but I think context is important here. This is for a course but I have struggled the entire semester with grasping this and the teacher hasn't been much help to me personally. I have a dataset with 30 categories and 500 images in each category (google maps stills of specific terrain). The goal is to process the image features (I'm using opencv SIFT) and conduct PCA on the features. I need to run the images through a deep learning model using fisher vectors and then plot some information based on the model. The problem is I keep getting random errors that I don't believe trace to the original problem. I know there is a crucial issue with my code, but I don't know what I don't know about it so I'm hoping the geniuses on stack can help identify my foible(s). Here is the snippet where I am currently getting stuck: #Ugly code, very sorry for ind, label in enumerate(os.listdir(img_direc)): #labels is storing the integer values of each category of the images ('swamp_lands', 'mountain', etc) labels.append(ind) #temporary list to store features desc_list = [] for i in os.listdir(f"{img_direc}\\{label}")[:400]: #process_image reads each file, converts to grayscale and resizes to a 224,224 image img = process_image(f"{img_direc}\\{label}\\{i}") _, desc = SIFT_Process_Keypoints(img) #first real point of confusion. I know there is a need to create either a 0's or 1's matrix #to fill in any none-type gaps but I'm struggling with the theory and code behind that feat_mtx = np.ones((224,224)) try: feat_mtx = np.zeros(desc.shape) for int, j in enumerate(desc): feat_mtx[int] = j except: pass #Do I need the mean? When trying to conduct PCA on the features I kept getting errors until #I reduced the values to a single number but it still wasn't giving me the right information desc_list.append(np.mean(feat_mtx)) desc_list = np.array(desc_list, dtype='object') desc_list = desc_list.flatten() train.append(desc_list) Does it just feel like my code is out of order? Or I'm missing a certain middle function somewhere. Any help with clarification would be greatly appreciated, I will be working actively on this code to try and gain some further understanding. Currently, the above code is yielding line 55, in <module> desc_list.append(np.mean(desc)) File "<__array_function__ internals>", line 180, in mean line 3432, in mean return _methods._mean(a, axis=axis, dtype=dtype, line 192, in _mean ret = ret / rcount TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' after processing like 10 categories of images without an error.
[ "One issue with your code is that you are using the mean function on the desc array, which is not a valid input for the mean function because it is not a numerical array. The desc array is a 2D array of shape (N, 128), where N is the number of keypoints detected by the SIFT algorithm, and 128 is the length of the feature vector for each keypoint.\nTo compute the mean of the desc array, you can use the mean function along one of the axes, for example:\ndesc_mean = np.mean(desc, axis=0)\n\nThis will compute the mean of each column in the desc array, and return a 1D array of shape (128,) with the mean feature vector.\nAnother issue with your code is that you are trying to create a feat_mtx array of shape (224, 224) and fill it with the feature vectors from the desc array. This will not work because the desc array has a different shape than the feat_mtx array ((N, 128) vs (224, 224)), and it is not possible to directly fill the feat_mtx array with the feature vectors from the desc array.\nInstead, you can create a feat_mtx array of shape (N, 128) and fill it with the feature vectors from the desc array, like this:\nfeat_mtx = np.zeros((desc.shape[0], desc.shape[1]))\nfor int, j in enumerate(desc):\n feat_mtx[int] = j\n\nThis will create a feat_mtx array with the same shape as the desc array, and fill it with the feature vectors from the desc array.\nOnce you have fixed these issues with your code, you should be able to compute the mean of the feat_mtx array and append it to the desc_list array, like this:\n# compute the mean of the feat_mtx array\nfeat_mtx_mean = np.mean(feat_mtx, axis=0)\n\n# append the mean of the feat_mtx array to the desc_list array\ndesc_list.append(feat_mtx_mean)\n\nWith these changes, your code should be able to process all of the images in the dataset and compute the mean feature vectors for each category. You can then use these mean feature vectors as input for the PCA and deep learning model.\n" ]
[ 1 ]
[]
[]
[ "python", "typeerror" ]
stackoverflow_0074668707_python_typeerror.txt
Q: Trying to write a user_defined function in python that multiplies a formula of constants and coeff by the median of dataframe columns? I am trying to write a user defined function that takes median col values from a dataframe and places those values in a formula of constants and coefficients. I need the median col values to be multiplied one by one by the constants and coefficients. Below is what I need the function to do. median = data[['col 1','col 2','col 3']].median() col 1: 31.65 col 2: 87 col 3: 21.55 const_coeff = [(-.5447 + .1712 * 31.65 + -.5447 + .9601 * 87 + -.5447 + .8474 * 21.55)] print(constants_coefficients) total sum of constants_coefficients ......................................................................................................... I have attempted many variations on the def function but unable to get the answer I get when plugging the values in manually. One example is below. def i(median): const_coeff = 1 for x in median: const_coeff = const_coeff * i return const_coeff print(i(median)) The answer I get is negative number, which is wrong. Obviously, I used generic variables to show what I need/have done rather than my actual data so please forgive if that complicates things. Fairly new to coding and first-time poster. Thanks in advance for any help. A: Separate the values and plug them into the formula. a,b,c = median.array calc = [(-.5447 + .1712 * a + -.5447 + .9601 * b + -.5447 + .8474 * c)] Add parenthesis around the terms to ensure order of operations. Or q = median.array * (.1712,.9601,.8474) # q = median.to_numpy() * (.1712,.9601,.8474) q = q + (-.5447,-.5447,-.5447) r = q.sum() Pandas Series
Trying to write a user_defined function in python that multiplies a formula of constants and coeff by the median of dataframe columns?
I am trying to write a user defined function that takes median col values from a dataframe and places those values in a formula of constants and coefficients. I need the median col values to be multiplied one by one by the constants and coefficients. Below is what I need the function to do. median = data[['col 1','col 2','col 3']].median() col 1: 31.65 col 2: 87 col 3: 21.55 const_coeff = [(-.5447 + .1712 * 31.65 + -.5447 + .9601 * 87 + -.5447 + .8474 * 21.55)] print(constants_coefficients) total sum of constants_coefficients ......................................................................................................... I have attempted many variations on the def function but unable to get the answer I get when plugging the values in manually. One example is below. def i(median): const_coeff = 1 for x in median: const_coeff = const_coeff * i return const_coeff print(i(median)) The answer I get is negative number, which is wrong. Obviously, I used generic variables to show what I need/have done rather than my actual data so please forgive if that complicates things. Fairly new to coding and first-time poster. Thanks in advance for any help.
[ "Separate the values and plug them into the formula.\na,b,c = median.array\ncalc = [(-.5447 + .1712 * a + -.5447 + .9601 * b + -.5447 + .8474 * c)]\n\n\nAdd parenthesis around the terms to ensure order of operations.\n\nOr\nq = median.array * (.1712,.9601,.8474)\n# q = median.to_numpy() * (.1712,.9601,.8474)\nq = q + (-.5447,-.5447,-.5447)\nr = q.sum()\n\n\nPandas Series\n" ]
[ 0 ]
[]
[]
[ "array_formulas", "for_loop", "function", "python" ]
stackoverflow_0074667892_array_formulas_for_loop_function_python.txt
Q: Optimizing Loop for memory def getWhiteLightLength(n, m, lights): lt_nv = [] ctd = 0 for clr, inic, fim in lights: for num in range(inic, fim+1): lt_nv.append(num) c = Counter(lt_nv) for ch, vl in c.items(): if vl == m: ctd += 1 return(ctd) I'm doing this HackerRank solution, it passed on half of the tests, but for the others, I get a memory usage error. I'm new to python so don`t know how to optimize these loops for minor memory usage. A: One way to optimize the memory usage in this code is to avoid using a list to store the numbers of the lightbulbs that are turned on. Instead, you can use a Python set to store the numbers of the lightbulbs, which is more memory-efficient. Here is an updated version of the code that uses a set to store the numbers of the lightbulbs: from collections import Counter def getWhiteLightLength(n, m, lights): lt_nv = set() ctd = 0 for clr, inic, fim in lights: for num in range(inic, fim+1): lt_nv.add(num) c = Counter(lt_nv) for ch, vl in c.items(): if vl == m: ctd += 1 return ctd In this code, the lt_nv set is used to store the numbers of the lightbulbs that are turned on. The add() method is used to add each number to the set, and the Counter() function is used to count the number of times each lightbulb number appears in the set. This updated code should be more memory-efficient and should be able to pass the tests on HackerRank without a memory usage error.
Optimizing Loop for memory
def getWhiteLightLength(n, m, lights): lt_nv = [] ctd = 0 for clr, inic, fim in lights: for num in range(inic, fim+1): lt_nv.append(num) c = Counter(lt_nv) for ch, vl in c.items(): if vl == m: ctd += 1 return(ctd) I'm doing this HackerRank solution, it passed on half of the tests, but for the others, I get a memory usage error. I'm new to python so don`t know how to optimize these loops for minor memory usage.
[ "One way to optimize the memory usage in this code is to avoid using a list to store the numbers of the lightbulbs that are turned on. Instead, you can use a Python set to store the numbers of the lightbulbs, which is more memory-efficient.\nHere is an updated version of the code that uses a set to store the numbers of the lightbulbs:\nfrom collections import Counter\n\ndef getWhiteLightLength(n, m, lights):\n lt_nv = set()\n ctd = 0\n for clr, inic, fim in lights:\n for num in range(inic, fim+1):\n lt_nv.add(num)\n c = Counter(lt_nv)\n for ch, vl in c.items():\n if vl == m:\n ctd += 1\n return ctd\n\nIn this code, the lt_nv set is used to store the numbers of the lightbulbs that are turned on. The add() method is used to add each number to the set, and the Counter() function is used to count the number of times each lightbulb number appears in the set.\nThis updated code should be more memory-efficient and should be able to pass the tests on HackerRank without a memory usage error.\n" ]
[ 0 ]
[]
[]
[ "loops", "memory", "memory_management", "python" ]
stackoverflow_0074668660_loops_memory_memory_management_python.txt
Q: How to make Titlebar height fit new Title Font size increase in WxPython? I've increased the custom AddPrivateFont point size to self.label_font.SetPointSize(27) for the Title bar of the sample_one.py script from this shared project:
https://wiki.wxpython.org/How%20to%20add%20a%20menu%20bar%20in%20the%20title%20bar%20%28Phoenix%29
From the script of my previous question here:
https://web.archive.org/web/20221202192613/https://paste.c-net.org/HondoPrairie
AddPrivateFont to App Title / Title bar in WxPython?
My problem is I can't figure out how to make the Title bar's height larger so the Title text displays its top part correctly. Currently the top of the Title text is truncated.
I tried adjusting the height and textHeight values from this statement:
textWidth, textHeight = gcdc.GetTextExtent(self.label)
tposx, tposy = ((width / 2) - (textWidth / 2), (height / 1) - (textHeight / 1))

from the previous ones (in the sample_one.py script):
textWidth, textHeight = gcdc.GetTextExtent(self.label)
tposx, tposy = ((width / 2) - (textWidth / 2), (height / 3) - (textHeight / 3))

because they truncated the bottom (now the bottom shows up correctly but not the top of the Title text).
There is also this method I'm not sure how to handle:
def DoGetBestSize(self):
    """ ... """
    dc = wx.ClientDC(self)
    dc.SetFont(self.GetFont())
    textWidth, textHeight = dc.GetTextExtent(self.label)
    spacing = 10
    totalWidth = textWidth + (spacing)
    totalHeight = textHeight + (spacing)
    best = wx.Size(totalWidth, totalHeight)
    self.CacheBestSize(best)
    return best

I tried tweaking it and printing the results, but to no avail.
Here's a preview of the truncated Title text:
What would be the correct approach to finding out what controls the height of the title bar object, to fix the truncated title text?
A: Thanks to @Rolf of Saxony's heads-up, I figured it out!
It took the following 3 steps:
1st Step:
Top Title Text Display from:
class MyTitleBarPnl(wx.Panel):
    def CreateCtrls(self):
        self.titleBar.SetSize((w, 54))

    def OnResize(self, event):
        self.titleBar.SetSize((w, 54))


2nd Step:
Vertical Spacing Below Title Text Without Text Display:
class MyFrame(wx.Frame):
    def CreateCtrls(self):
        self.titleBarPnl = MyTitleBarPnl(self, -1, (w, 54))

    def OnResize(self, event):
        self.titleBarPnl.SetSize((w, 24))


3rd Step:
Vertical Spacing Below Title Text With Text Display:
class MyFrame(wx.Frame):
    def CreateCtrls(self):
        self.titleBarPnl = MyTitleBarPnl(self, -1, (w, 54))

    def OnResize(self, event):
        self.titleBarPnl.SetSize((w, 54))


EDIT:
4th Step:
Status Bar Display:
class MyFrame(wx.Frame):
    def CreateCtrls(self):
        self.titleBarPnl = MyTitleBarPnl(self, -1, (w, 54))

    def OnResize(self, event):
        self.titleBarPnl.SetSize((w, 54))
        self.mainPnl.SetSize((w, h - 55)) # 25
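The hard-coded 54 above works, but it can drift if the font or point size changes again. A small helper that derives the bar height from the font metrics instead; wx.ClientDC and GetTextExtent are standard wxPython calls (and the script is assumed to already do import wx), while the padding value and function name are my own assumptions:

def title_bar_height(window, label, padding=10):
    # measure the label with the window's current font and leave some headroom
    dc = wx.ClientDC(window)
    dc.SetFont(window.GetFont())
    _width, text_height = dc.GetTextExtent(label)
    return text_height + padding

Calling self.titleBar.SetSize((w, title_bar_height(self.titleBar, self.label))) in CreateCtrls and OnResize would then track future font-size changes automatically.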
How to make Titlebar height fit new Title Font size increase in WxPython?
I've increased the custon AddPrivateFont pointsize to self.label_font.SetPointSize(27) of the Title bar of the sample_one.py script from this shared project: https://wiki.wxpython.org/How%20to%20add%20a%20menu%20bar%20in%20the%20title%20bar%20%28Phoenix%29 From the script of my previous question here: https://web.archive.org/web/20221202192613/https://paste.c-net.org/HondoPrairie AddPrivateFont to App Title / Title bar in WxPython? My problem is I can't figure out how to make the Title bar's height larger so the Title text displays its top part correctly. Currently the top of the Title text is truncated. I tried adjusting the height and textHeight values from this statement: textWidth, textHeight = gcdc.GetTextExtent(self.label) tposx, tposy = ((width / 2) - (textWidth / 2), (height / 1) - (textHeight / 1)) from previous ones (in the sample_one.py script): textWidth, textHeight = gcdc.GetTextExtent(self.label) tposx, tposy = ((width / 2) - (textWidth / 2), (height / 3) - (textHeight / 3)) Because it truncated the bottom (now the bottom shows up correctly but not the top of the Title text). There is also this method I'm not sure how to handle: def DoGetBestSize(self): """ ... """ dc = wx.ClientDC(self) dc.SetFont(self.GetFont()) textWidth, textHeight = dc.GetTextExtent(self.label) spacing = 10 totalWidth = textWidth + (spacing) totalHeight = textHeight + (spacing) best = wx.Size(totalWidth, totalHeight) self.CacheBestSize(best) return best I tried tweaking it and printing results but to no avail. Here's a preview of the Truncated Title text: What would be the correct approach to finding out what controls the height of the title bar object to fix the truncated title text?
[ "Thanks to @Rolf of Saxony headsup I figured it out!\nIt took the following 3 steps:\n1st Step:\nTop Title Text Display from:\nclass MyTitleBarPnl(wx.Panel):\n def CreateCtrls(self):\n self.titleBar.SetSize((w, 54))\n\n def OnResize(self, event):\n self.titleBar.SetSize((w, 54))\n\n\n2nd Step:\nVertical Spacing Below Title Text Without Text Display:\nclass MyFrame(wx.Frame):\n def CreateCtrls(self):\n self.titleBarPnl = MyTitleBarPnl(self, -1, (w, 54))\n\n def OnResize(self, event):\n self.titleBarPnl.SetSize((w, 24))\n\n\n3rd Step:\nVertical Spacing Below Title Text WithText Display:\nclass MyFrame(wx.Frame):\n def CreateCtrls(self):\n self.titleBarPnl = MyTitleBarPnl(self, -1, (w, 54))\n\n def OnResize(self, event):\n self.titleBarPnl.SetSize((w, 54))\n\n\nEDIT:\n4th Step:\nStatus Bar Display:\nclass MyFrame(wx.Frame):\n def CreateCtrls(self):\n self.titleBarPnl = MyTitleBarPnl(self, -1, (w, 54))\n\n def OnResize(self, event):\n self.titleBarPnl.SetSize((w, 54))\n self.mainPnl.SetSize((w, h - 55)) # 25\n\n\n" ]
[ 1 ]
[]
[]
[ "height", "python", "python_3.x", "wxpython", "wxwidgets" ]
stackoverflow_0074663982_height_python_python_3.x_wxpython_wxwidgets.txt
Q: Wagtail CMS (Django) - Display Inline Model Fields in Related Model I have two custom models (not inheriting from Page) that are specific to the admin in a Wagtail CMS website. I can get this working in regular Django, but in Wagtail I can't get the inline model fields to appear. I get a key error.
The code...
In model.py:
from django.db import models

from wagtail.admin.panels import (
    FieldPanel,
    MultiFieldPanel,
    FieldRowPanel,
    InlinePanel,
)

from author.models import Author


class Book(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    date = models.DateField("Date released")

    panels = [
        MultiFieldPanel([
            InlinePanel('book_review'),
        ], heading="Book reviews"),
    ]


class BookReview(models.Model):
    book = models.ForeignKey(
        Book, on_delete=models.CASCADE, related_name='book_review')
    title = models.CharField(max_length=250)
    content = models.TextField()

    panels = [
        FieldRowPanel([
            FieldPanel('title'),
            FieldPanel('content'),
        ])
    ]

and wagtail_hooks.py:
from wagtail.contrib.modeladmin.options import (
    ModelAdmin,
    modeladmin_register,
)

from .models import Book, BookReview


class BookAdmin(ModelAdmin):
    model = Book
    add_to_settings_menu = False
    add_to_admin_menu = True
    inlines = [BookReview]  # only added this after the key error, but it didn't help


modeladmin_register(BookAdmin)

How can I get the InlinePanel('book_review') line to show up in the admin? It all works until I try to add the inline model fields.
I looked around online and saw mentions of a third-party Django modelcluster package. Is this still required? Those posts were quite old (5 years or so). Or should I use ParentalKey instead of ForeignKey, though I thought that was only for models inheriting from the Page model?
A: Try changing your Book model to inherit from ClusterableModel (which itself inherits from models.Model)
from modelcluster.models import ClusterableModel

class Book(ClusterableModel):
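In practice the fix usually needs both halves: the parent inherits from ClusterableModel (as the answer above says) and the child's foreign key becomes a modelcluster ParentalKey so InlinePanel can track unsaved child edits; ParentalKey is not limited to Page models. modelcluster ships as a Wagtail dependency, so nothing extra is installed. A sketch of both models adjusted, keeping the imports from the question and trimming the panels for brevity:

from modelcluster.models import ClusterableModel
from modelcluster.fields import ParentalKey


class Book(ClusterableModel):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    date = models.DateField("Date released")

    panels = [
        MultiFieldPanel([InlinePanel('book_review')], heading="Book reviews"),
    ]


class BookReview(models.Model):
    # ParentalKey (not ForeignKey) is what lets InlinePanel manage these rows
    book = ParentalKey(Book, on_delete=models.CASCADE, related_name='book_review')
    title = models.CharField(max_length=250)
    content = models.TextField()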
Wagtail CMS(Django) - Display Inline Model Fields in Related Model
I have two custom models(not inheriting from Page) that are specific to the admin in a Wagtail CMS website. I can get this working in regular Django, but in Wagtail I can't get the inline model fields to appear. I get a key error. The code... On model.py: from django.db import models from wagtail.admin.panels import ( FieldPanel, MultiFieldPanel, FieldRowPanel, InlinePanel, ) from author.models import Author class Book(models.Model): author = models.ForeignKey(Author, on_delete=models.CASCADE) date = models.DateField("Date released") panels = [ MultiFieldPanel([ InlinePanel('book_review'), ], heading="Book reviews"), ] class BookReview(models.Model): book = models.ForeignKey( Book, on_delete=models.CASCADE, related_name='book_review') title = models.CharField(max_length=250) content = models.TextField() panels = [ FieldRowPanel([ FieldPanel('title'), FieldPanel('content'), ]) ] and wagtail_hooks.py: from wagtail.contrib.modeladmin.options import ( ModelAdmin, modeladmin_register, ) from .models import Book, BookReview class BookAdmin(ModelAdmin): model = Book add_to_settings_menu = False add_to_admin_menu = True inlines = [BookReview] # only added this after key error, but it didn't help modeladmin_register(BookAdmin) How can I get the InlinePanel('book_review'), line to show up in the admin. It all works until I try add the inline model fields. I looked around online and it was mentioning a third party Django modelcluster package. Is this still required? Those post were quite old(5 years or so). Or instead of using ForeignKey use ParentalKey, but that's only if inheriting from Page model.
[ "Try changing your Book model to inherit from ClusterableModel (which itself inherits from models.Model)\nfrom modelcluster.models import ClusterableModel\n\nclass Book(ClusterableModel):\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "python", "wagtail", "wagtail_admin" ]
stackoverflow_0074576306_django_django_models_python_wagtail_wagtail_admin.txt
Q: Override PageLinkHandler in wagtail I have a site that has custom JS for handling link clicks, so that only part of the page reloads and audio playback isn't interrupted. This requires each link to have an onclick attribute. All of the hard coded links in the site have this, but links in page content created in the wagtail CMS don't. I know there's a couple different ways to achieve this, but I felt like the most elegant would be with wagtail's rewrite handlers. I followed the steps and added a hook to register my rewrite handler, but it seems that since links with the identifier "page" are already handled by PageLinkHandler, this is ignored. I can't find a hook provided by wagtail to override this handler, so I've ended up monkeypatching in the functionality: from wagtail.models import Page from wagtail.rich_text.pages import PageLinkHandler from django.utils.html import escape @classmethod def custom_expand_db_attributes(cls, attrs): try: page = cls.get_instance(attrs) return '<a onclick="navigate(event, \'{s}\')" href="{s}">'.format(s=escape(page.localized.specific.url)) except Page.DoesNotExist: return "<a>" PageLinkHandler.expand_db_attributes = custom_expand_db_attributes This code is just placed in the views.py of my frontend. It works, but I want to know if there's an officially supported way to do this without monkeypatching A: I think since you want to replace rather than extend some functionality, monkey patching is the correct approach.
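Whichever way the patch is applied, it is worth asserting that it actually took effect, since a silent fall-through to the stock handler is exactly the failure mode described above. expand_db_html is Wagtail's public entry point that runs the registered rewrite handlers, so a quick check can look like this (a sketch; the page id is hypothetical):

from wagtail.rich_text import expand_db_html

html = expand_db_html('<a linktype="page" id="3">Home</a>')
assert 'onclick="navigate(' in html, html

Dropping that into a test (or a Django shell session) catches the case where the patching module was never imported before rendering.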
Override PageLinkHandler in wagtail
I have a site that has custom JS for handling link clicks, so that only part of the page reloads and audio playback isn't interrupted. This requires each link to have an onclick attribute. All of the hard coded links in the site have this, but links in page content created in the wagtail CMS don't. I know there's a couple different ways to achieve this, but I felt like the most elegant would be with wagtail's rewrite handlers. I followed the steps and added a hook to register my rewrite handler, but it seems that since links with the identifier "page" are already handled by PageLinkHandler, this is ignored. I can't find a hook provided by wagtail to override this handler, so I've ended up monkeypatching in the functionality: from wagtail.models import Page from wagtail.rich_text.pages import PageLinkHandler from django.utils.html import escape @classmethod def custom_expand_db_attributes(cls, attrs): try: page = cls.get_instance(attrs) return '<a onclick="navigate(event, \'{s}\')" href="{s}">'.format(s=escape(page.localized.specific.url)) except Page.DoesNotExist: return "<a>" PageLinkHandler.expand_db_attributes = custom_expand_db_attributes This code is just placed in the views.py of my frontend. It works, but I want to know if there's an officially supported way to do this without monkeypatching
[ "I think since you want to replace rather than extend some functionality, monkey patching is the correct approach.\n" ]
[ 0 ]
[]
[]
[ "python", "wagtail" ]
stackoverflow_0074553383_python_wagtail.txt
Q: How can I fix this simple Python recursion problem? I have a function that prints the first multiples of a number (n) starting with zero and stopping at num_multiples, but it keeps printing out one too many multiples. I'm hoping someone can explain what I'm doing wrong so I can understand recursion a bit more.
def print_first_multiples(n, num_multiples): 
    if num_multiples < 0:
        return
    else:
        print_first_multiples(n, num_multiples - 1)
        print(n * num_multiples, end=' ')

For example, passing 5 as n and 10 as num_multiples, it should print:
0 5 10 15 20 25 30 35 40 45

but it is instead printing an extra "50" at the end.
A: First, note that by one convention the first multiple of 5 is 5 (5*1) rather than 0. The problem with your code is that you only stop when num_multiples is negative (less than 0). Instead, you want to stop when it is zero. Like this:
def print_first_multiples(n, num_multiples): 
    if num_multiples == 0:
        return
    else:
        print_first_multiples(n, num_multiples - 1)
        print(n * num_multiples, end=' ')


print_first_multiples(5, 10)

If you do want to start at 0 and go up to 45, then you can subtract one from num_multiples. Like this:
def print_first_multiples(n, num_multiples): 
    if num_multiples == 0:
        return
    else:
        print_first_multiples(n, num_multiples - 1)
        print(n * (num_multiples-1), end=' ')


print_first_multiples(5, 10)

A: Try if num_multiples <= 0: instead
A: It prints until it doesn't enter if num_multiples < 0. So e.g. with the initial value num_multiples = 3 you print for num_multiples=3, num_multiples=2, num_multiples=1, and num_multiples=0, so 4 times. Replace if num_multiples < 0 with if num_multiples == 0 and you'd get what you expect.
How can I fix this simple Python recursion problem?
I have a function that prints the first multiples of a number (n) starting with zero and stopping at num_multiples, but it keeps printing out one too many multiples. I'm hoping someone can explain what I'm doing wrong so I can understand recursion a bit more. def print_first_multiples(n, num_multiples): if num_multiples < 0: return else: print_first_multiples(n, num_multiples - 1) print(n * num_multiples, end=' ') for example, passing 5 as n and 10 as num_multiples, it should print: 0 5 10 15 20 25 30 35 40 45 but is instead printing an extra "50" at the end.
[ "First, 0 is not a multiple of 5. The first multiple of 5 is 5 (5*1). The problem with your code is that you only stop when num_multiples is negative (less than 0). Instead, you want to stop when it is zero. Like this:\ndef print_first_multiples(n, num_multiples): \n if num_multiples == 0:\n return\n else:\n print_first_multiples(n, num_multiples - 1)\n print(n * num_multiples, end=' ')\n\n\nprint_first_multiples(5, 10)\n\nIf you do want to start at 0 and go up to 45, then you can subtract one from num_multiples. Like this:\ndef print_first_multiples(n, num_multiples): \n if num_multiples == 0:\n return\n else:\n print_first_multiples(n, num_multiples - 1)\n print(n * (num_multiples-1), end=' ')\n\n\nprint_first_multiples(5, 10)\n\n", "Try if num_multiples <= 0: instead\n", "It prints until it doesn't enter if num_multiples < 0. So e.x. with the initial value num_multiples = 3 you print for num_multiples=3, num_multiples=2, num_multiples=1, and num_multiples=0 so 4 times. Replace if num_multiples < 0 with if num_multiples == 0 and you'd get what you expect\n" ]
[ 1, 0, 0 ]
[]
[]
[ "function", "python", "recursion" ]
stackoverflow_0074668764_function_python_recursion.txt
Q: How to enter file path? What can I do to type something in the field of the image below?
I've tried this without success:
from threading import local
import pandas as pd
import pyautogui
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait.until(EC.presence_of_element_located((By.XPATH,"//input[@type='file']"))send_keys("C:/Users/my_user/Downloads/doch.jpeg")

for index, row in df.iterrows():
    actions.send_keys((row["message"]))
    actions.perform()

The only palliative solution was:
pyautogui.write((row["photo"]))
pyautogui.press("enter")

I don't want to use pyautogui, as it drives the keyboard and I can't do anything else on the computer while the code is running.
A: Selenium can't upload files using the Windows file-select dialog, so you'll have to do something else - you might be able to use the send_keys function, i.e.:
elem = driver.find_element(By.XPATH, "//input[@type='file']")
elem.send_keys('C:\\Path\\To\\File')

Note that this may not work, depending on the type of input, and you may be able to instead simulate a drag-and-drop operation if the website supports this.
See How to upload file ( picture ) with selenium, python for more info
A: For a Windows path you need double backslashes (and note the missing . before send_keys in your attempt). Try this:
wait.until(EC.presence_of_element_located((By.XPATH,"//input[@type='file']"))).send_keys("C:\\Users\\my_user\\Downloads\\doch.jpeg")
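Putting the pieces together, a short sketch of the send_keys route with an absolute path built via pathlib (the file location is hypothetical, and driver is assumed to be an already-created webdriver):

from pathlib import Path
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

file_path = Path.home() / "Downloads" / "doch.jpeg"
file_input = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//input[@type='file']"))
)
# the browser reads the path directly, so no native dialog (and no pyautogui) is involved
file_input.send_keys(str(file_path.resolve()))

Because the path never goes through the keyboard, the mouse and keyboard stay free while the script runs, which addresses the pyautogui objection.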
How to enter file path?
How can I do to type something in the field of the image below? I've tried without success: from threading import local import pandas as pd import pyautogui from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import Select from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC wait.until(EC.presence_of_element_located((By.XPATH,"//input[@type='file']"))send_keys("C:/Users/my_user/Downloads/doch.jpeg") for index, row in df.iterrows(): actions.send_keys((row["message"])) actions.perform() The only palliative solution was: pyautogui.write((row["photo"])) pyautogui.press("enter") I don't want to use pyautogui as it uses the keyboard command and I can't do anything on the computer while the code is running.
[ "Selenium can't upload files using the Windows select file option, so you'll have to do something else - you might be able to use the send_keys function, i.e.:\nelem = driver.find_element(By.XPATH, \"//input[@type='file']\")\nelem.send_keys('C:\\\\Path\\\\To\\\\File')\n\nNote that this may not work, depending on the type of input, and you may be able to instead simulate a drag-and-drop operation if the website supports this.\nSee How to upload file ( picture ) with selenium, python for more info\n", "For windows path you need double backslashes. Try this:\nwait.until(EC.presence_of_element_located((By.XPATH,\"//input[@type='file']\"))send_keys(\"C:\\\\Users\\\\my_user\\\\Downloads\\\\doch.jpeg\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074663050_python_selenium.txt
Q: \t didn't appear in the for loop I'm practicing the format function, and when I use this code with \t in it, the first 9 outputs don't have any big space:
import random

lst = [random.randint(1,100) for i in range(51)]
for index, val in enumerate(lst):
    print(f'{index=}\t{val=}')

It works with \n but not \t, and I don't know why. Can anyone explain it?
A: It's just the terminal following its tab (4 spaces in your case) convention. So add spaces before and after \t.
A: All results have \t
Run your code with python yourcode.py | cat -T
These are the results. ^I is the tab.
$ python yourcode.py | cat -T
index=0^Ival=14
index=1^Ival=46
index=2^Ival=87
index=3^Ival=83
index=4^Ival=28
index=5^Ival=77
index=6^Ival=34
index=7^Ival=66
index=8^Ival=64
index=9^Ival=80
index=10^Ival=28
index=11^Ival=21
index=12^Ival=64
index=13^Ival=79
index=14^Ival=92
index=15^Ival=67
index=16^Ival=69
index=17^Ival=50

The man page of cat says

 -T, --show-tabs
     display TAB characters as ^I

You may change the length of TAB. Check this article
A: The behavior you're seeing is because the \t escape sequence inserts a tab character, which is typically used for indenting text. By default, most text editors and terminals use a tab stop of 8 characters, which means that a tab character will move the cursor to the next multiple of 8 characters.
In your code, the first ten lines (index 0 through 9) show almost no gap because {index=} renders as index=0 through index=9, which is exactly 7 characters, so the tab only has to move the cursor one column to reach the next tab stop at column 8. From index=10 onward, the text before the tab is 8 characters long, so the tab jumps a full 8 columns to the stop at column 16, and a visibly larger gap appears.
If you want to insert a fixed number of spaces in your output, rather than using a tab character, you can use the {:<N} format specifier, where N is the total number of characters you want the output to occupy. For example, you could use the following code to insert 10 spaces between the index and val values in your output:
import random

lst = [random.randint(1,100) for i in range(51)]
for index, val in enumerate(lst):
    print(f'{index:<10}{val=}')

In this code, the {index:<10} format specifier tells the format function to left-align the index value within a field of 10 characters. This ensures that 10 characters are always reserved for index, regardless of its actual value.
Overall, the \t escape sequence is useful for indenting text, but it may not always produce the results you expect if the text before it doesn't line up with the tab stops.
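To make the tab-stop arithmetic visible without relying on a particular terminal, str.expandtabs reproduces it in pure Python (8 is the conventional stop width assumed here):

for index in (0, 9, 10):
    line = f"index={index}\tval=42"
    # expandtabs pads to the next multiple of 8, just like a classic terminal
    print(repr(line), "->", line.expandtabs(8))

The output shows index=0 and index=9 gaining a single space (the text is already 7 columns wide), while index=10 jumps a full 8 columns to the next stop, which is exactly the behaviour described in the question.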
\t didn't appear in the for loop
I'm practicing format fuction and when I use this code with \t in it, the first 9 output did not have any big space ` import random lst = [random.randint(1,100) for i in range(51)] for index, val in enumerate(lst): print(f'{index=}\t{val=}') ` It works with \n but not \t, I don't know why. Can anyone explain it?
[ "It's just the terminal following tab(4 spaces in your case) convention. So add spaces before and after \\t.\n", "All results has \\t\nRun your code with python yourcode.py | cat -T\nThis is the results. ^I is the tab.\n$ python yourcode.py | cat -T\nindex=0^Ival=14\nindex=1^Ival=46\nindex=2^Ival=87\nindex=3^Ival=83\nindex=4^Ival=28\nindex=5^Ival=77\nindex=6^Ival=34\nindex=7^Ival=66\nindex=8^Ival=64\nindex=9^Ival=80\nindex=10^Ival=28\nindex=11^Ival=21\nindex=12^Ival=64\nindex=13^Ival=79\nindex=14^Ival=92\nindex=15^Ival=67\nindex=16^Ival=69\nindex=17^Ival=50\n\nMan page of cat says\n\n -T, --show-tabs\n display TAB characters as ^I\n\n\nYou may change the length of TAB. Check this article\n", "The behavior you're seeing is because the \\t escape sequence inserts a tab character, which is typically used for indenting text. By default, most text editors and terminals use a tab stop of 8 characters, which means that a tab character will move the cursor to the next multiple of 8 characters.\nIn your code, the first nine values in the list don't have any big spaces because they are not aligned with a multiple of 8 characters. This is because the {index=} and {val=} expressions in the format string each add 3 characters to the output, and the = character adds an additional 1 character, for a total of 7 characters. This means that the first nine values are not aligned with a multiple of 8 characters, and the tab character does not insert any extra spaces.\nIf you want to insert a fixed number of spaces in your output, rather than using a tab character, you can use the {:<N} format specifier, where N is the total number of characters you want the output to occupy. For example, you could use the following code to insert 10 spaces between the index and val values in your output:\nimport random\n\nlst = [random.randint(1,100) for i in range(51)]\nfor index, val in enumerate(last):\n print(f'{index:<10}{val=}')\n\nIn this code, the {index:<10} format specifier tells the format function to left-align the index value within a field of 10 characters. This ensures that there are always 10 spaces between the index and val values, regardless of their actual values.\nOverall, the \\t escape sequence is useful for indenting text, but it may not always produce the results you expect if the text is not aligned\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074667173_python.txt
Q: Could not find a version that satisfies the requirement in python I am trying to create a virtual env with python2 in macOS from here. While running the pip install virtualenv command in the terminal I am getting the following error.
Could not find a version that satisfies the requirement virtualenv (from versions: ) No matching distribution found for virtualenv

A: If you are using python 3.x, please try these commands

sudo pip3 install --upgrade pip
sudo pip3 install virtualenv

A: Run this command and try again
curl https://bootstrap.pypa.io/get-pip.py | python

The detailed description can be found in the link shared by Anupam in the comments.
A: Please try the command below
pip install --upgrade virtualenv

A: We tried the above but they didn't work in our case because we had two versions of python3 on the system. One via a normal install a few months back and one via brew (on a Mac). When we discovered that, we downloaded and installed the latest version from python.org and as a result pip was updated too. Once pip was installed, the sudo pip3 install virtualenv command worked fine.
A: Try the commands below:

pip install --upgrade pip

pip install <'package-name'>

example: pip install locust_plugins
To check the list of packages installed, use the command below:

pip list

I tried the same and it worked for me
A: It is sometimes because of a connectivity issue. Re-running the same command when connectivity is okay solves it.
pip install virtualenv

Initial Output:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A91A94190>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922B3390>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922B0850>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922C5990>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922C64D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/
ERROR: Could not find a version that satisfies the requirement virtualenv (from versions: none)
ERROR: No matching distribution found for virtualenv
WARNING: There was an error checking the latest version of pip.
Output after re-run when connectivity is okay:
Collecting virtualenv
  Using cached virtualenv-20.17.0-py3-none-any.whl (8.8 MB)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/distlib/
Collecting distlib<1,>=0.3.6
  Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)
Collecting filelock<4,>=3.4.1
  Using cached filelock-3.8.0-py3-none-any.whl (10 kB)
Collecting platformdirs<3,>=2.4
  Using cached platformdirs-2.5.4-py3-none-any.whl (14 kB)
Installing collected packages: distlib, platformdirs, filelock, virtualenv
Successfully installed distlib-0.3.6 filelock-3.8.0 platformdirs-2.5.4 virtualenv-20.17.0

[notice] A new release of pip available: 22.3 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
Could not find a version that satisfies the requirement in python
I am trying to create virtual env with python2 in mac os from here. While running pip install virtualenv command in terminal I am getting following error. Could not find a version that satisfies the requirement virtualenv (from versions: ) No matching distribution found for virtualenv
[ "If you are using python 3.x, Please try this commands\n\nsudo pip3 install --upgrade pip\nsudo pip3 install virtualenv\n\n", "Run this command and try again\ncurl https://bootstrap.pypa.io/get-pip.py | python\n\nThe detailed description can be found in the link shared by Anupam in the comments.\n", "Please try below commands\npip install --upgrade virtualenv\n\n", "We tried the above but they didn't work in our case because we had two versions of python3 on the systems. One via a normal install a few months back and one via brew (on a Mac). When we discovered that, we downloaded and installed the latest version from python.org and as a result the pip was updated too. Once the pip was installed the sudo pip3 install virturaenv command worked fine. \n", "Try Below commands:\n\npip install --upgrade pip\n\n\npip install <'package-name'>\n\nexample: pip install locust_plugins\nTo check the list of packages installed, use below command:\n\npip list\n\nI tried the same and it worked for me\n", "It is sometimes because of connectivity issue. Re-running the same command when connectivity is okay solves it.\npip install virtualenv\n\nInitial Output:\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A91A94190>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/\nWARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922B3390>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/\nWARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922B0850>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/\nWARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922C5990>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/\nWARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000023A922C64D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /simple/virtualenv/\nERROR: Could not find a version that satisfies the requirement virtualenv (from versions: none)\nERROR: No matching distribution found for virtualenv\nWARNING: There was an error checking the latest version of pip.\n\nOutput after re-run when connectivity is okay:\nCollecting virtualenv\n Using cached virtualenv-20.17.0-py3-none-any.whl (8.8 MB)\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. 
(read timeout=15)\")': /simple/distlib/\nCollecting distlib<1,>=0.3.6\n Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)\nCollecting filelock<4,>=3.4.1\n Using cached filelock-3.8.0-py3-none-any.whl (10 kB)\nCollecting platformdirs<3,>=2.4\n Using cached platformdirs-2.5.4-py3-none-any.whl (14 kB)\nInstalling collected packages: distlib, platformdirs, filelock, virtualenv\nSuccessfully installed distlib-0.3.6 filelock-3.8.0 platformdirs-2.5.4 virtualenv-20.17.0\n\n[notice] A new release of pip available: 22.3 -> 22.3.1\n[notice] To update, run: python.exe -m pip install --upgrade pip\n\n" ]
[ 15, 14, 13, 1, 0, 0 ]
[ "pip install --upgrade virtualenv\nThis solution works for me in Centos8\n", "If you are using Windows, you have to run cmd as admin.\n" ]
[ -1, -5 ]
[ "pip", "python", "virtualenv" ]
stackoverflow_0049745105_pip_python_virtualenv.txt
Q: Pandas : Calculate the Mean of the value_counts() from row 0 to row n I am struggling to create a function that first calculates the number of occurrences of each string in a specific column (from row 0 to row n) and then reduces this to a single value by calculating the mean of those value counts from the first row to row n.
More precisely, what I would like to do is create a new column ['Mean'] where the value at each row n equals the mean of the value_counts() from the first row to the nth row of the column ['Name'].
import pandas as pd
import datetime as dt

data = [["2022-11-1", 'Tom'],
        ["2022-11-2", 'Mike'],
        ["2022-11-3", 'Paul'],
        ["2022-11-4", 'Pauline'],
        ["2022-11-5", 'Pauline'],
        ["2022-11-6", 'Mike'],
        ["2022-11-7", 'Tom'],
        ["2022-11-8", 'Louise'],
        ["2022-11-9", 'Tom'],
        ["2022-11-10", 'Mike'],
        ["2022-11-11", 'Paul'],
        ["2022-11-12", 'Pauline'],
        ["2022-11-13", 'Pauline'],
        ["2022-11-14", 'Mike'],
        ["2022-11-15", 'Tom'],
        ["2022-11-16", 'Louise']]

df = pd.DataFrame(data, columns=['Date', 'Name'])

So for example, the 5th row of ['Mean'] should have a value of 1.25, as Pauline has appeared twice by then, so the calculation is (1 + 1 + 1 + 2)/4 = 1.25.
Thank you,
A: The logic is unclear, but assuming you want the expanding average count of values, use:
df['mean'] = (
    pd.Series(pd.factorize(df['Name'])[0], index=df.index)
    .expanding()
    .apply(lambda s: s.value_counts().mean())
)

Output:
          Date     Name  mean
0    2022-11-1      Tom  1.00
1    2022-11-2     Mike  1.00
2    2022-11-3     Paul  1.00
3    2022-11-4  Pauline  1.00
4    2022-11-5  Pauline  1.25
5    2022-11-6     Mike  1.50
6    2022-11-7      Tom  1.75
7    2022-11-8   Louise  1.60
8    2022-11-9      Tom  1.80
9   2022-11-10     Mike  2.00
10  2022-11-11     Paul  2.20
11  2022-11-12  Pauline  2.40
12  2022-11-13  Pauline  2.60
13  2022-11-14     Mike  2.80
14  2022-11-15      Tom  3.00
15  2022-11-16   Louise  3.20
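As a side note, the expanding mean of value counts has a closed form: at each row it is simply the running row count divided by the running number of distinct names, so the quadratic expanding apply can be replaced with two vectorised operations (a sketch; numpy imported as np):

import numpy as np

# first occurrences are True, so the cumulative sum is "distinct names so far"
df['Mean'] = np.arange(1, len(df) + 1) / (~df['Name'].duplicated()).cumsum()

On the sample data this reproduces the same column (1.0, 1.0, 1.0, 1.0, 1.25, 1.5, ...) in O(n).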
Pandas : Calculate the Mean of the value_counts() from row 0 to row n
I am struggling to create a function that could first calculate the number of occurrences for each string in a specific column (from row 0 to row n) and then reduce this to one single value by calculating the mean of the value_counts from the first row to the row n. More precisely, what I would like to do is to create a new column ['Mean'] where the value of each row n equals to the mean of the value_counts() from the first row to the nth row of the column ['Name']. import pandas as pd import datetime as dt data = [["2022-11-1", 'Tom'], ["2022-11-2", 'Mike'], ["2022-11-3", 'Paul'], ["2022-11-4", 'Pauline'], ["2022-11-5", 'Pauline'], ["2022-11-6", 'Mike'], ["2022-11-7", 'Tom'], ["2022-11-8", 'Louise'], ["2022-11-9", 'Tom'], ["2022-11-10", 'Mike'], ["2022-11-11", 'Paul'], ["2022-11-12", 'Pauline'], ["2022-11-13", 'Pauline'], ["2022-11-14", 'Mike'], ["2022-11-15", 'Tom'], ["2022-11-16", 'Louise']] df = pd.DataFrame(data, columns=['Date', 'Name']) So for example, the 6th row of ['Mean'] should have a value of 1.25 as Pauline appeared twice, so the calcul should be (1 + 1 + 1 + 2 + 1)/5 = 1.25 . Thank you,
[ "The logic is unclear, but assuming you want the expanding average count of values, use:\ndf['mean'] = pd.Series(pd.factorize(df['Name'])[0], index=df.index)\n .expanding()\n .apply(lambda s: s.value_counts().mean())\n )\n\nOutput:\n Date Name mean\n0 2022-11-1 Tom 1.00\n1 2022-11-2 Mike 1.00\n2 2022-11-3 Paul 1.00\n3 2022-11-4 Pauline 1.00\n4 2022-11-5 Pauline 1.25\n5 2022-11-6 Mike 1.50\n6 2022-11-7 Tom 1.75\n7 2022-11-8 Louise 1.60\n8 2022-11-9 Tom 1.80\n9 2022-11-10 Mike 2.00\n10 2022-11-11 Paul 2.20\n11 2022-11-12 Pauline 2.40\n12 2022-11-13 Pauline 2.60\n13 2022-11-14 Mike 2.80\n14 2022-11-15 Tom 3.00\n15 2022-11-16 Louise 3.20\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074668825_pandas_python.txt
Q: h2o-pysparkling-2.4 and Glue Jobs with: {"error":"TypeError: 'JavaPackage' object is not callable","errorType":"EXECUTION_FAILURE"} I am trying to use pysparkling.ml.H2OMOJOModel to predict on a Spark dataframe using a MOJO model trained with h2o==3.32.0.2 in AWS Glue Jobs; however, I got the error: TypeError: 'JavaPackage' object is not callable.
I opened a ticket with AWS support and they confirmed that the Glue environment is OK and the problem is probably with sparkling-water (pysparkling). It seems that some dependency library is missing, but I have no idea which one. The simple code below works perfectly if I run it on my local computer (I only need to change the mojo path for GBM_grid__1_AutoML_20220323_233606_model_53.zip).
Has anyone ever run sparkling-water in Glue jobs successfully?
Job Details:
-Glue version 2.0
--additional-python-modules, h2o-pysparkling-2.4==3.36.0.2-1
-Worker type: G1.X
-Number of workers: 2
-Using script "createFromMojo.py"
createFromMojo.py:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import pandas as pd
from pysparkling.ml import H2OMOJOSettings
from pysparkling.ml import H2OMOJOModel
# from pysparkling.ml import *

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

#Job setup
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

caminho_modelo_mojo='s3://prod-lakehouse-stream/modeling/approaches/GBM_grid__1_AutoML_20220323_233606_model_53.zip'
print(caminho_modelo_mojo)
print(dir())

settings = H2OMOJOSettings(convertUnknownCategoricalLevelsToNa = True, convertInvalidNumbersToNa = True)
model = H2OMOJOModel.createFromMojo(caminho_modelo_mojo, settings)

data = {'days_since_last_application': [3, 2, 1, 0], 'job_area': ['a', 'b', 'c', 'd']}

base_escorada = model.transform(spark.createDataFrame(pd.DataFrame.from_dict(data)))

print(base_escorada.printSchema())

print(base_escorada.show())

job.commit()

A: I could run it successfully following these steps:

Downloaded the sparkling water distribution zip: http://h2o-release.s3.amazonaws.com/sparkling-water/spark-3.1/3.36.1.1-1-3.1/index.html
Dependent JARs path: s3://bucket_name/sparkling-water-assembly-scoring_2.12-3.36.1.1-1-3.1-all.jar
--additional-python-modules, h2o-pysparkling-3.1==3.36.1.1-1-3.1
h2o-pysparkling-2.4 and Glue Jobs with: {"error":"TypeError: 'JavaPackage' object is not callable","errorType":"EXECUTION_FAILURE"}
I am try to using pysparkling.ml.H2OMOJOModel for predict a spark dataframe using a MOJO model trained with h2o==3.32.0.2 in AWS Glue Jobs, how ever a got the error: TypeError: 'JavaPackage' object is not callable. I opened a ticket in AWS support and they confirmed that Glue environment is ok and the problem is probably with sparkling-water (pysparkling). It seems that some dependency library is missing, but I have no idea which one. The simple code bellow works perfectly if I run in my local computer (I only need to change the mojo path for GBM_grid__1_AutoML_20220323_233606_model_53.zip) Could anyone ever run sparkling-water in Glue jobs successfully? Job Details: -Glue version 2.0 --additional-python-modules, h2o-pysparkling-2.4==3.36.0.2-1 -Worker type: G1.X -Number of workers: 2 -Using script "createFromMojo.py" createFromMojo.py: import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job import pandas as pd from pysparkling.ml import H2OMOJOSettings from pysparkling.ml import H2OMOJOModel # from pysparkling.ml import * ## @params: [JOB_NAME] args = getResolvedOptions(sys.argv, ["JOB_NAME"]) #Job setup sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args["JOB_NAME"], args) caminho_modelo_mojo='s3://prod-lakehouse-stream/modeling/approaches/GBM_grid__1_AutoML_20220323_233606_model_53.zip' print(caminho_modelo_mojo) print(dir()) settings = H2OMOJOSettings(convertUnknownCategoricalLevelsToNa = True, convertInvalidNumbersToNa = True) model = H2OMOJOModel.createFromMojo(caminho_modelo_mojo, settings) data = {'days_since_last_application': [3, 2, 1, 0], 'job_area': ['a', 'b', 'c', 'd']} base_escorada = model.transform(spark.createDataFrame(pd.DataFrame.from_dict(data))) print(base_escorada.printSchema()) print(base_escorada.show()) job.commit()
[ "I could run successfully following the steps:\n\nDownloaded sparkling water distribution zip: http://h2o-release.s3.amazonaws.com/sparkling-water/spark-3.1/3.36.1.1-1-3.1/index.html\nDependent JARs path: s3://bucket_name/sparkling-water-assembly-scoring_2.12-3.36.1.1-1-3.1-all.jar\n--additional-python-modules, h2o-pysparkling-3.1==3.36.1.1-1-3.1\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "h2o", "python", "sparkling_water" ]
stackoverflow_0071928885_apache_spark_h2o_python_sparkling_water.txt
Q: How do I get the return value when using Python exec on the code object of a function? For testing purposes I want to directly execute a function defined inside of another function. I can get to the code object of the child function, through the code (func_code) of the parent function, but when I exec it, I get no return value. Is there a way to get the return value from the exec'ed code?
A: Yes, you need to have the assignment within the exec statement:
>>> def foo():
...     return 5
...
>>> exec("a = foo()")
>>> a
5

This probably isn't relevant for your case since it's being used in controlled testing, but be careful with using exec with user-defined input.
A: A few years later, but the following snippet helped me:
the_code = '''
a = 1
b = 2
return_me = a + b
'''

loc = {}
exec(the_code, globals(), loc)
return_workaround = loc['return_me']
print(return_workaround)  # 3

exec() doesn't return anything itself, but you can pass a dict which has all the local variables stored in it after execution. By accessing it you have something like a return.
I hope it helps someone.
A: While this is the ugliest beast ever seen by mankind, this is how you can do it by using a global variable inside your exec call:
def my_exec(code):
    exec('global i; i = %s' % code)
    global i
    return i

This is misusing global variables to get your data across the border.
>>> my_exec('1 + 2')
3

Needless to say, you should never allow any user input into this function, as it poses an extreme security risk.
A: Something like this can work:
def outer():
    def inner(i):
        return i + 10


for f in outer.func_code.co_consts:
    if getattr(f, 'co_name', None) == 'inner':

        inner = type(outer)(f, globals())

        # can also use `types` module for readability:
        # inner = types.FunctionType(f, globals())

        print inner(42)  # 52

The idea is to extract the code object from the inner function and create a new function based on it.
Additional work is required when an inner function can contain free variables. You'll have to extract them as well and pass them to the function constructor in the last argument (closure).
A: Here's a way to return a value from exec'd code:
def exec_and_return(expression):
    exec(f"""locals()['temp'] = {expression}""")
    return locals()['temp']

I'd advise you to give an example of the problem you're trying to solve. Because I would only ever use this as a last resort.
A: use eval() instead of exec(); it returns the result
A: This doesn't get the return value per se, but you can provide an empty dictionary when calling exec to retrieve any variables defined in the code.
# Python 3
ex_locals = {}
exec("a = 'Hello world!'", None, ex_locals)
print(ex_locals['a'])
# Output: Hello world!

From the Python 3 documentation on exec:

The default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function exec() returns.

For more information, see How does exec work with locals?
A: Here's a solution with simple code:
# -*- coding: utf-8 -*-
import math

x = [0]
exec("x[0] = 3*2")
print(x[0])  # 6

A: Since Python 3.7, dictionaries are ordered. So you no longer need to agree on a name, you can just say "last item that got created":
>>> d = {}
>>> exec("def addone(i): return i + 1", d, d)
>>> list(d)
['__builtins__', 'addone']
>>> thefunction = d[list(d)[-1]]
>>> thefunction
<function addone at 0x7fd03123fe50>
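Tying this back to the original question (running a nested function's code object and getting its return value), the "Something like this can work" answer above can be written in Python 3 syntax so the return value arrives through an ordinary call, with no exec or locals tricks. A sketch, valid when the inner function has no free variables:

import types

def outer():
    def inner(i):
        return i + 10

# nested function bodies live in the parent's co_consts as code objects
inner_code = next(
    const for const in outer.__code__.co_consts
    if isinstance(const, types.CodeType) and const.co_name == "inner"
)
inner = types.FunctionType(inner_code, globals())
print(inner(42))  # 52 - a plain call, so the return value needs no workaround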
How do I get the return value when using Python exec on the code object of a function?
For testing purposes I want to directly execute a function defined inside of another function. I can get to the code object of the child function, through the code (func_code) of the parent function, but when I exec it, i get no return value. Is there a way to get the return value from the exec'ed code?
[ "Yes, you need to have the assignment within the exec statement:\n>>> def foo():\n... return 5\n...\n>>> exec(\"a = foo()\")\n>>> a\n5\n\nThis probably isn't relevant for your case since its being used in controlled testing, but be careful with using exec with user defined input. \n", "A few years later, but the following snippet helped me:\nthe_code = '''\na = 1\nb = 2\nreturn_me = a + b\n'''\n\nloc = {}\nexec(the_code, globals(), loc)\nreturn_workaround = loc['return_me']\nprint(return_workaround) # 3\n\nexec() doesn't return anything itself, but you can pass a dict which has all the local variables stored in it after execution. By accessing it you have a something like a return.\nI hope it helps someone.\n", "While this is the ugliest beast ever seen by mankind, this is how you can do it by using a global variable inside your exec call:\ndef my_exec(code):\n exec('global i; i = %s' % code)\n global i\n return i\n\nThis is misusing global variables to get your data across the border.\n>>> my_exec('1 + 2')\n3\n\nNeedless to say that you should never allow any user inputs for the input of this function in there, as it poses an extreme security risk.\n", "Something like this can work:\ndef outer():\n def inner(i):\n return i + 10\n\n\nfor f in outer.func_code.co_consts:\n if getattr(f, 'co_name', None) == 'inner':\n\n inner = type(outer)(f, globals())\n\n # can also use `types` module for readability:\n # inner = types.FunctionType(f, globals())\n\n print inner(42) # 52\n\nThe idea is to extract the code object from the inner function and create a new function based on it.\nAdditional work is required when an inner function can contain free variables. You'll have to extract them as well and pass to the function constructor in the last argument (closure).\n", "Here's a way to return a value from exec'd code:\ndef exec_and_return(expression):\n exec(f\"\"\"locals()['temp'] = {expression}\"\"\")\n return locals()['temp']\n\nI'd advise you to give an example of the problem you're trying to solve. Because I would only ever use this as a last resort.\n", "use eval() instead of exec(), it returns result\n", "This doesn't get the return value per say, but you can provide an empty dictionary when calling exec to retrieve any variables defined in the code.\n# Python 3\nex_locals = {}\nexec(\"a = 'Hello world!'\", None, ex_locals)\nprint(ex_locals['a'])\n# Output: Hello world!\n\nFrom the Python 3 documentation on exec:\n\nThe default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function exec() returns.\n\nFor more information, see How does exec work with locals?\n", "Here's a solution with a simple code:\n# -*- coding: utf-8 -*-\nimport math\n\nx = [0]\nexec(\"x[0] = 3*2\")\nprint(x[0]) # 6\n\n", "Since Python 3.7, dictionary are ordered. So you no longer need to agree on a name, you can just say \"last item that got created\":\n>>> d = {}\n>>> exec(\"def addone(i): return i + 1\", d, d)\n>>> list(d)\n['__builtins__', 'addone']\n>>> thefunction = d[list(d)[-1]]\n>>> thefunction\n<function addone at 0x7fd03123fe50>\n\n" ]
[ 37, 29, 10, 4, 3, 3, 2, 0, 0 ]
[ "if we need a function that is in a file in another directory, eg\nwe need the function1 in file my_py_file.py \nlocated in /home/.../another_directory\n\nwe can use the following code:\n\n\ndef cl_import_function(a_func,py_file,in_Dir): \n... import sys\n... sys.path.insert(0, in_Dir)\n... ax='from %s import %s'%(py_file,a_func)\n... loc={}\n... exec(ax, globals(), loc)\n... getFx = loc[afunc]\n... return getFx\n\ntest = cl_import_function('function1',r'my_py_file',r'/home/.../another_directory/')\ntest()\n\n(a simple way for newbies...)\n", "program = 'a = 5\\nb=10\\nprint(\"Sum =\", a+b)'\nprogram = exec(program)\nprint(program)\n" ]
[ -1, -1 ]
[ "exec", "function", "python", "return" ]
stackoverflow_0023917776_exec_function_python_return.txt
Q: Create Sample Noisy signal in C I am trying to create a sample noisy signal that I will be filtering in C. I have written the code in Python but will be deploying it to a microcontroller, so I want to create it in C. Here is the Python code I am trying to replicate:
# 1000 samples per second
sample_rate = 1000

# frequency in Hz
center_freq = 20

# filter frequency in Hz
cutoff_freq = 10

test_signal = np.linspace(
    start=0.,
    stop=2. * pi * center_freq,
    num=sample_rate,
    endpoint=False
)
test_signal = np.cos(test_signal)

second_test_signal = np.random.randn(sample_rate)

I tried manually coding a linearly spaced array but I cannot seem to get it to work. I have looked into libraries to make it easier but can't find any. Does anyone have any ideas on how to translate this Python code into C in a simple and easy-to-use way? Here is the C code I have so far. I am also wondering if I need to do this in a completely different way?
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int sampleRate = 1000;
int center_freq = 20;
int cutoff_freq = 10;

A:

I have written the code in python but will be deploying it to a
microcontroller so I want to create it in C.

There exist MicroPython

MicroPython is a lean and efficient implementation of the Python 3
programming language that includes a small subset of the Python
standard library and is optimised to run on microcontrollers and in
constrained environments.

and there exist a numpy subset for the above in the form of micropython-numpy; if you manage to make it run on your microcontroller, then conversion to C is no longer required.
A: This is some code for the linearly spaced array:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

// ...

int sampleRate = 1000;
int center_freq = 20;
int cutoff_freq = 10;

float signal_1[1000];
float signal_2[1000];

float value = 0.0f;
float inc = (2.0f * M_PI * (float)center_freq) / (float)sampleRate;

for(int i = 0; i < sampleRate; ++i)
{
    signal_1[i] = (float)cos(value);
    value += inc;
}

for(int i = 0; i < sampleRate; ++i)
{
    /* uniform noise in [0, 1]; note np.random.randn draws Gaussian noise,
       so apply a Box-Muller transform here if the distribution matters */
    signal_2[i] = (float)rand() / (float)RAND_MAX;
}
Create Sample Noisy signal in C
I am trying to create a sample noisy signal that I will be filtering in C. I have written the code in python but will be deploying it to a microcotroller so I want to create it in C. Here is the python code I am trying to replicate # 1000 samples per second sample_rate = 1000 # frequency in Hz center_freq = 20 # filter frequency in Hz cutoff_freq = 10 test_signal = np.linspace( start=0., stop=2. * pi * center_freq, num=sample_rate, endpoint=False ) test_signal = np.cos(test_signal) second_test_signal = np.random.randn(sample_rate) I tried manually coding a linearlly spaced array but I cannot seem to get it to work. I have looked into libraries to make it easier but can't find any. Does anyone have any ideas on how to translate this python code into C a simple and easy to use way? Here is the C code I have so far. I am also wondering if I need to do this a completely different way? #include <stdlib.h> #include <stdio.h> #include <math.h> int sampleRate = 1000; int center_freq = 20; int cutoff_freq = 10;
[ "\nhave written the code in python but will be deploying it to a\nmicrocotroller so I want to create it in C.\n\nThere exist MicroPython\n\nMicroPython is a lean and efficient implementation of the Python 3\nprogramming language that includes a small subset of the Python\nstandard library and is optimised to run on microcontrollers and in\nconstrained environments.\n\nand there exist numpy subset for above in form of micropython-numpy, if you manage to make it run at your microcotroller then conversion to C becomes no longer required.\n", "This is some code for the linearly spaced array:\n#include <stdlib.h>\n#include <stdio.h>\n#include <math.h>\n\n// ...\n\nint sampleRate = 1000;\nint center_freq = 20;\nint cutoff_freq = 10;\n\nfloat signal_1[1000];\nfloat signal_2[1000];\n\nfloat value = 0.0f;\nfloat inc = (2.0f * M_PI * (float)center_freq) / (float)sampleRate;\n\nfor(int i = 0; i < sampleRate; ++i)\n{\n signal_1[i] = (float)cos(value);\n value += inc;\n}\n\nfor(int i = 0; i < sampleRate; ++i)\n{\n signal_2[i] = (float)rand() / (float)RAND_MAX;\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "arduino", "c", "python" ]
stackoverflow_0074632566_arduino_c_python.txt
Q: Converting pandas DataFrame to datacube? I have a DataFrame with four columns: X, Y, Z, and t. The values in the first three columns are discrete and represent a 3D index. The fourth column is a floating-point number. For example,
df = pd.DataFrame({'X':[1,2,3,2,3,1],
                   'Y':[1,1,2,2,3,3],
                   'Z':[1,2,1,2,1,2],
                   't':np.random.rand(6)})

#   X  Y  Z         t
#0  1  1  1  0.410462
#1  2  1  2  0.385973
#2  3  2  1  0.434947
#3  2  2  2  0.880702
#4  3  3  1  0.297190
#5  1  3  2  0.750949

How can I efficiently extend df into a 3D datacube? (With 18 vertices in this case.) The values of t in the new rows should be np.nan. In other words, I want to add all the "missing" rows, such as:
...
#6  1  1  2  nan
#7  1  1  3  nan
#8  1  2  1  nan
...

The extents of X, Y, and Z are large but not huge (say, 10, 200, and 1000 unique values). Numpy-based solutions are welcome, too!
A: Here is one way to do it with product from the Python standard library's itertools module:
from itertools import product

import pandas as pd


axis = ["X", "Y", "Z"]

df = (
    pd.concat(
        [
            df,
            pd.DataFrame(
                product(df["X"].unique(), repeat=df["X"].nunique()),
                columns=axis,
            ),
        ]
    )
    .drop_duplicates(subset=axis, keep="first")
    .sort_values(axis, ignore_index=True)
)

Then:
print(df)
# Output
    X  Y  Z         t
0   1  1  1  0.994531
1   1  1  2       NaN
2   1  1  3       NaN
3   1  2  1       NaN
4   1  2  2       NaN
5   1  2  3       NaN
6   1  3  1       NaN
7   1  3  2  0.937584
8   1  3  3       NaN
9   2  1  1       NaN
10  2  1  2  0.168245
11  2  1  3       NaN
12  2  2  1       NaN
13  2  2  2  0.362854
14  2  2  3       NaN
15  2  3  1       NaN
16  2  3  2       NaN
17  2  3  3       NaN
18  3  1  1       NaN
19  3  1  2       NaN
20  3  1  3       NaN
21  3  2  1  0.634389
22  3  2  2       NaN
23  3  2  3       NaN
24  3  3  1  0.953114
25  3  3  2       NaN
26  3  3  3       NaN
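A per-axis variant avoids the repeat=nunique coupling in the answer above, which products X's values over all three slots and so yields 27 rows including Z=3, even though Z only takes the values 1 and 2 (the question's 18 vertices). A sketch with pandas built-ins; swap in explicit ranges if the cube should also cover index values that never appear in the data:

full_index = pd.MultiIndex.from_product(
    [df['X'].unique(), df['Y'].unique(), df['Z'].unique()],
    names=['X', 'Y', 'Z'],
)
# reindex fills the missing (X, Y, Z) combinations with NaN in t automatically
cube = df.set_index(['X', 'Y', 'Z']).reindex(full_index).reset_index()

This also scales cleanly to the mentioned 10 x 200 x 1000 extents, since each axis keeps its own set of values.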
Converting pandas DataFrame to datacube?
I have a DataFrame with four columns: X, Y, Z, and t. The values in the first three columns are discrete and represent a 3D index. The fourth column is a floating-point number. For example, df = pd.DataFrame({'X':[1,2,3,2,3,1], 'Y':[1,1,2,2,3,3], 'Z':[1,2,1,2,1,2], 't':np.random.rand(6)}) # X Y Z t #0 1 1 1 0.410462 #1 2 1 2 0.385973 #2 3 2 1 0.434947 #3 2 2 2 0.880702 #4 3 3 1 0.297190 #5 1 3 2 0.750949 How can I efficiently extend df into a 3D datacube? (With 18 vertices in this case.) The values of t in the new rows should be np.nan. In other words, I want to add all the "missing" rows, such as: ... #6 1 1 2 nan #7 1 1 3 nan #8 1 2 1 nan ... The extents of X, Y, and Z are large but not huge (say, 10, 200, and 1000 unique values). Numpy-based solutions are welcome, too!
[ "Here is one way to do it with product from Python standard library's itertools module:\nfrom itertools import product\n\nimport pandas as pd\n\n\naxis = [\"X\", \"Y\", \"Z\"]\n\ndf = (\n pd.concat(\n [\n df,\n pd.DataFrame(\n product(df[\"X\"].unique(), repeat=df[\"X\"].nunique()),\n columns=axis,\n ),\n ]\n )\n .drop_duplicates(subset=axis, keep=\"first\")\n .sort_values(axis, ignore_index=True)\n)\n\nThen:\nprint(df)\n# Output\n X Y Z t\n0 1 1 1 0.994531\n1 1 1 2 NaN\n2 1 1 3 NaN\n3 1 2 1 NaN\n4 1 2 2 NaN\n5 1 2 3 NaN\n6 1 3 1 NaN\n7 1 3 2 0.937584\n8 1 3 3 NaN\n9 2 1 1 NaN\n10 2 1 2 0.168245\n11 2 1 3 NaN\n12 2 2 1 NaN\n13 2 2 2 0.362854\n14 2 2 3 NaN\n15 2 3 1 NaN\n16 2 3 2 NaN\n17 2 3 3 NaN\n18 3 1 1 NaN\n19 3 1 2 NaN\n20 3 1 3 NaN\n21 3 2 1 0.634389\n22 3 2 2 NaN\n23 3 2 3 NaN\n24 3 3 1 0.953114\n25 3 3 2 NaN\n26 3 3 3 NaN\n\n" ]
[ 1 ]
[]
[]
[ "data_cube", "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074636925_data_cube_dataframe_numpy_pandas_python.txt
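As a hedged alternative to the product-and-concat approach in the record above: building the full Cartesian index with pd.MultiIndex.from_product and reindexing avoids materialising duplicate rows first, and it uses each column's own unique values, which gives exactly the 18 vertices the question asks for (3 x 3 x 2). A minimal sketch, assuming the unique values already present in each column define the desired axis extents:

import numpy as np
import pandas as pd

df = pd.DataFrame({'X': [1, 2, 3, 2, 3, 1],
                   'Y': [1, 1, 2, 2, 3, 3],
                   'Z': [1, 2, 1, 2, 1, 2],
                   't': np.random.rand(6)})

# Full grid of (X, Y, Z); combinations absent from df get t = NaN on reindex.
full = pd.MultiIndex.from_product(
    [np.sort(df['X'].unique()),
     np.sort(df['Y'].unique()),
     np.sort(df['Z'].unique())],
    names=['X', 'Y', 'Z'])

cube = df.set_index(['X', 'Y', 'Z']).reindex(full).reset_index()
print(cube)  # 18 rows: 6 with the original t values, 12 with NaN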
Q: How to print Docstring of python function from inside the function itself? I want to print the docstring of a python function from inside the function itself. for eg. def my_function(self): """Doc string for my function.""" # print the Docstring here. At the moment I am doing this directly after my_function has been defined. print my_function.__doc__ But would rather let the function do this itself. I have tried calling print self.__doc__ print self.my_function.__doc__ and print this.__doc__ inside my_function but this did not work. A: def my_func(): """Docstring goes here.""" print my_func.__doc__ This will work as long as you don't change the object bound to the name my_func. new_func_name = my_func my_func = None new_func_name() # doesn't print anything because my_func is None and None has no docstring Situations in which you'd do this are rather rare, but they do happen. However, if you write a decorator like this: def passmein(func): def wrapper(*args, **kwargs): return func(func, *args, **kwargs) return wrapper Now you can do this: @passmein def my_func(me): print me.__doc__ And this will ensure that your function gets a reference to itself (similar to self) as its first argument, so it can always get the docstring of the right function. If used on a method, the usual self becomes the second argument. A: This should work (in my tests it does, also included output). You could probably use __doc__ instead of getdoc, but I like it, so thats just what i used. Also, this doesn't require you to know the names of the class/method/function. Examples both for a class, a method and a function. Tell me if it's not what you were looking for :) from inspect import * class MySelfExplaningClass: """This is my class document string""" def __init__(self): print getdoc(self) def my_selfexplaining_method(self): """This is my method document string""" print getdoc(getattr(self, getframeinfo(currentframe()).function)) explain = MySelfExplaningClass() # Output: This is my class document string explain.my_selfexplaining_method() # Output: This is my method document string def my_selfexplaining_function(): """This is my function document string""" print getdoc(globals()[getframeinfo(currentframe()).function]) my_selfexplaining_function() # Output: This is my function document string A: This works: def my_function(): """Docstring for my function""" #print the Docstring here. print my_function.__doc__ my_function() in Python 2.7.1 This also works: class MyClass(object): def my_function(self): """Docstring for my function""" #print the Docstring here, either way works. print MyClass.my_function.__doc__ print self.my_function.__doc__ foo = MyClass() foo.my_function() This however, will not work on its own: class MyClass(object): def my_function(self): """Docstring for my function""" #print the Docstring here. print my_function.__doc__ foo = MyClass() foo.my_function() NameError: global name 'my_function' is not defined A: There's quite a simple method for doing this that nobody has mentioned yet: import inspect def func(): """Doc string""" print inspect.getdoc(func) And this does what you want. There's nothing fancy going on here. All that's happening is that by doing func.__doc__ in a function defers attribute resolution long enough to have looking up __doc__ on it work as you'd expect. I use this with docopt for console script entry points. A: You've posed your question like a class method rather than a function. Namespaces are important here. 
For a function, print my_function.__doc__ is fine, as my_function is in the global namespace. For a class method, then print self.my_method.__doc__ would be the way to go. If you don't want to specify the name of the method, but rather pass a variable to it, you can use the built-in functions hasattr(object,attribute) and getattr(obj,attr), which do as they say, allowing you to pass variables in with strings being the name of a method. e.g. class MyClass: def fn(self): """A docstring""" print self.fn.__doc__ def print_docstrings(object): for method in dir( object ): if method[:2] == '__': # A protected function continue meth = getattr( object, method ) if hasattr( meth , '__doc__' ): print getattr( meth , '__doc__' ) x = MyClass() print_docstrings( x ) A: As noted many times, using the function name is a dynamic lookup in the globals() directory. It only works in the module of the definition and only for a global function. If you want to find out the doc string of a member function, you would need to also lookup the path from the class name - which is quite cumbersome as these names can get quite long: def foo(): """ this is foo """ doc = foo.__doc__ class Foo: def bar(self): """ this is bar """ doc = Foo.bar.__doc__ is equivalent to def foo(): """ this is foo """ doc = globals()["foo"].__doc__ class Foo: def bar(self): """ this is bar """ doc = globals()["Foo"].bar.__doc__ If you want to look up the doc string of the caller, that won't work anyway as your print-helper might live in a completely different module with a completely different globals() dictionary. The only correct choice is to look into the stack frame - but Python does not give you the function object being executed, it only has a reference to the "f_code" code object. But keep going, as there is also a reference to the "f_globals" of that function. So you can write a function to get the caller's doc like this, and as a variation from it, you get your own doc string. import inspect def get_caller_doc(): frame = inspect.currentframe().f_back.f_back for objref in frame.f_globals.values(): if inspect.isfunction(objref): if objref.func_code == frame.f_code: return objref.__doc__ elif inspect.isclass(objref): for name, member in inspect.getmembers(objref): if inspect.ismethod(member): if member.im_func.func_code == frame.f_code: return member.__doc__ and let's go to test it: def print_doc(): print get_caller_doc() def foo(): """ this is foo """ print_doc() class Foo: def bar(self): """ this is bar """ print_doc() def nothing(): print_doc() class Nothing: def nothing(self): print_doc() foo() Foo().bar() nothing() Nothing().nothing() # and my doc def get_my_doc(): return get_caller_doc() def print_my_doc(): """ showing my doc """ print get_my_doc() print_my_doc() results in this output this is foo this is bar None None showing my doc Actually, most people want their own doc string only to hand it down as an argument, but the called helper function can look it up all on its own. I'm using this in my unittest code where this is sometimes handy to fill some logs or to use the doc string as test data. That's the reason why the presented get_caller_doc() only looks for global test functions and member functions of a test class, but I guess that is enough for most people who want to find out about the doc string. 
class FooTest(TestCase): def get_caller_doc(self): # as seen above def test_extra_stuff(self): """ testing extra stuff """ self.createProject("A") def createProject(self, name): description = self.get_caller_doc() self.server.createProject(name, description) To define a proper get_frame_doc(frame) with sys._getframe(1) is left to the reader(). A: Try: class MyClass(): # ... def my_function(self): """Docstring for my function""" print MyClass.my_function.__doc__ # ... (*) There was a colon (:) missing after my_function() A: If you're using Test class to make sure that doc string will appear in each test, then the efficient approach would be this. def setup_method(self, method): print(getattr(self, method.__name__).__doc__) This will print doc string of each method before it gets executed or you can past the same script on teardown_method to print it in the end of each test case.
How to print Docstring of python function from inside the function itself?
I want to print the docstring of a python function from inside the function itself. For example, def my_function(self): """Doc string for my function.""" # print the Docstring here. At the moment I am doing this directly after my_function has been defined. print my_function.__doc__ But I would rather let the function do this itself. I have tried calling print self.__doc__ print self.my_function.__doc__ and print this.__doc__ inside my_function but this did not work.
[ "def my_func():\n \"\"\"Docstring goes here.\"\"\"\n print my_func.__doc__\n\nThis will work as long as you don't change the object bound to the name my_func. \nnew_func_name = my_func\nmy_func = None\n\nnew_func_name()\n# doesn't print anything because my_func is None and None has no docstring\n\nSituations in which you'd do this are rather rare, but they do happen.\nHowever, if you write a decorator like this:\ndef passmein(func):\n def wrapper(*args, **kwargs):\n return func(func, *args, **kwargs)\n return wrapper\n\nNow you can do this:\n@passmein\ndef my_func(me):\n print me.__doc__\n\nAnd this will ensure that your function gets a reference to itself (similar to self) as its first argument, so it can always get the docstring of the right function. If used on a method, the usual self becomes the second argument.\n", "This should work (in my tests it does, also included output). You could probably use __doc__ instead of getdoc, but I like it, so thats just what i used. Also, this doesn't require you to know the names of the class/method/function.\nExamples both for a class, a method and a function. Tell me if it's not what you were looking for :)\nfrom inspect import *\n\nclass MySelfExplaningClass:\n \"\"\"This is my class document string\"\"\"\n\n def __init__(self):\n print getdoc(self)\n\n def my_selfexplaining_method(self):\n \"\"\"This is my method document string\"\"\"\n print getdoc(getattr(self, getframeinfo(currentframe()).function))\n\n\nexplain = MySelfExplaningClass()\n\n# Output: This is my class document string\n\nexplain.my_selfexplaining_method()\n\n# Output: This is my method document string\n\ndef my_selfexplaining_function():\n \"\"\"This is my function document string\"\"\"\n print getdoc(globals()[getframeinfo(currentframe()).function])\n\nmy_selfexplaining_function()\n\n# Output: This is my function document string\n\n", "This works:\ndef my_function():\n \"\"\"Docstring for my function\"\"\"\n #print the Docstring here.\n print my_function.__doc__\n\nmy_function()\n\nin Python 2.7.1\nThis also works:\nclass MyClass(object):\n def my_function(self):\n \"\"\"Docstring for my function\"\"\"\n #print the Docstring here, either way works.\n print MyClass.my_function.__doc__\n print self.my_function.__doc__\n\n\nfoo = MyClass()\n\nfoo.my_function()\n\nThis however, will not work on its own:\nclass MyClass(object):\n def my_function(self):\n \"\"\"Docstring for my function\"\"\"\n #print the Docstring here.\n print my_function.__doc__\n\n\nfoo = MyClass()\n\nfoo.my_function()\n\nNameError: global name 'my_function' is not defined\n", "There's quite a simple method for doing this that nobody has mentioned yet:\nimport inspect\n\ndef func():\n \"\"\"Doc string\"\"\"\n print inspect.getdoc(func)\n\nAnd this does what you want.\nThere's nothing fancy going on here. All that's happening is that by doing func.__doc__ in a function defers attribute resolution long enough to have looking up __doc__ on it work as you'd expect.\nI use this with docopt for console script entry points.\n", "You've posed your question like a class method rather than a function. Namespaces are important here. 
For a function, print my_function.__doc__ is fine, as my_function is in the global namespace.\nFor a class method, then print self.my_method.__doc__ would be the way to go.\nIf you don't want to specify the name of the method, but rather pass a variable to it, you can use the built-in functions hasattr(object,attribute) and getattr(obj,attr), which do as they say, allowing you to pass variables in with strings being the name of a method. e.g.\nclass MyClass:\n def fn(self):\n \"\"\"A docstring\"\"\"\n print self.fn.__doc__ \n\ndef print_docstrings(object):\n for method in dir( object ):\n if method[:2] == '__': # A protected function\n continue\n meth = getattr( object, method )\n if hasattr( meth , '__doc__' ):\n print getattr( meth , '__doc__' )\n\nx = MyClass()\nprint_docstrings( x )\n\n", "As noted many times, using the function name is a dynamic lookup in the globals() directory. It only works in the module of the definition and only for a global function. If you want to find out the doc string of a member function, you would need to also lookup the path from the class name - which is quite cumbersome as these names can get quite long:\ndef foo():\n \"\"\" this is foo \"\"\"\n doc = foo.__doc__\nclass Foo:\n def bar(self):\n \"\"\" this is bar \"\"\"\n doc = Foo.bar.__doc__\n\nis equivalent to\ndef foo():\n \"\"\" this is foo \"\"\"\n doc = globals()[\"foo\"].__doc__\nclass Foo:\n def bar(self):\n \"\"\" this is bar \"\"\"\n doc = globals()[\"Foo\"].bar.__doc__\n\nIf you want to look up the doc string of the caller, that won't work anyway as your print-helper might live in a completely different module with a completely different globals() dictionary. The only correct choice is to look into the stack frame - but Python does not give you the function object being executed, it only has a reference to the \"f_code\" code object. But keep going, as there is also a reference to the \"f_globals\" of that function. So you can write a function to get the caller's doc like this, and as a variation from it, you get your own doc string.\nimport inspect\n\ndef get_caller_doc():\n frame = inspect.currentframe().f_back.f_back\n for objref in frame.f_globals.values():\n if inspect.isfunction(objref):\n if objref.func_code == frame.f_code:\n return objref.__doc__\n elif inspect.isclass(objref):\n for name, member in inspect.getmembers(objref):\n if inspect.ismethod(member):\n if member.im_func.func_code == frame.f_code:\n return member.__doc__\n\nand let's go to test it:\ndef print_doc():\n print get_caller_doc()\n\ndef foo():\n \"\"\" this is foo \"\"\"\n print_doc()\n\nclass Foo:\n def bar(self):\n \"\"\" this is bar \"\"\"\n print_doc()\n\ndef nothing():\n print_doc()\n\nclass Nothing:\n def nothing(self):\n print_doc()\n\nfoo()\nFoo().bar()\n\nnothing()\nNothing().nothing()\n\n# and my doc\n\ndef get_my_doc():\n return get_caller_doc()\n\ndef print_my_doc():\n \"\"\" showing my doc \"\"\"\n print get_my_doc()\n\nprint_my_doc()\n\nresults in this output\n this is foo \n this is bar \nNone\nNone\n showing my doc \n\nActually, most people want their own doc string only to hand it down as an argument, but the called helper function can look it up all on its own. I'm using this in my unittest code where this is sometimes handy to fill some logs or to use the doc string as test data. That's the reason why the presented get_caller_doc() only looks for global test functions and member functions of a test class, but I guess that is enough for most people who want to find out about the doc string. 
\nclass FooTest(TestCase):\n def get_caller_doc(self):\n # as seen above\n def test_extra_stuff(self):\n \"\"\" testing extra stuff \"\"\"\n self.createProject(\"A\")\n def createProject(self, name):\n description = self.get_caller_doc()\n self.server.createProject(name, description)\n\nTo define a proper get_frame_doc(frame) with sys._getframe(1) is left to the reader().\n", "Try: \nclass MyClass():\n # ...\n def my_function(self):\n \"\"\"Docstring for my function\"\"\"\n print MyClass.my_function.__doc__\n # ...\n\n(*) There was a colon (:) missing after my_function()\n", "If you're using Test class to make sure that doc string will appear in each test, then the efficient approach would be this.\n\n\n def setup_method(self, method):\n print(getattr(self, method.__name__).__doc__)\n \n \n\n\r\n\nThis will print doc string of each method before it gets executed or you can past the same script on teardown_method to print it in the end of each test case.\n" ]
[ 89, 10, 6, 6, 2, 2, 1, 0 ]
[ "inserting \nprint __doc__\njust after the class declaration,, before the def __init__, will print the doc string to the console every time you initiate an object with the class\n" ]
[ -1 ]
[ "docstring", "function", "printing", "python" ]
stackoverflow_0008822701_docstring_function_printing_python.txt
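Since most of the answers in the record above use Python 2 print statements, here is a minimal Python 3 sketch of the same self-referencing decorator idea; with_self is an illustrative name, not a standard-library decorator:

import functools

def with_self(func):
    # Pass the wrapped function itself as the first argument.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(func, *args, **kwargs)
    return wrapper

@with_self
def my_function(me):
    """Doc string for my function."""
    print(me.__doc__)

my_function()  # prints: Doc string for my function.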
Q: How to split the data in a group of N lines and find intersection character I have a dataset like below: data="""vJrwpWtwJgWrhcsFMMfFFhFp jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL PmmdzqPrVvPwwTWBwg wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn ttgJtRGJQctTZtZT CrZsJsPPZsGzwwsLwLmpwMDw""" These are separate lines. Now, I want to group the data in a set of 3 rows and find the intersecting character in those lines. For example, r is the common character in the first group and Z is the common character in the second group. So, below is my code: lines = [] for i in range(len(data.splitlines())): lines.append(data[i]) for j in lines: new_line = [k for k in j[i] if k in j[i + 1]] print(new_line) It gives me a string index out-of-range error. new_line = [k for k in j[i] if k in j[i + 1]] IndexError: string index out of range A: For the record: this was the Advent of Code 2022 Day 3 Part 2 challenge. I kept my data in a file called input.txt and just read line by line, but this solution can be applied to a string too. I converted every line into a set and used the & intersection operator. From there, I converted it to a list and removed the new line character. s[0] is therefore the only repeated character. Like this: with open('input.txt') as f: lines = f.readlines() for i in range(0, len(lines), 3): s = list(set(lines[i]) & set(lines[i + 1]) & set(lines[i + 2])) s.remove('\n') print(s[0]) Here's an example using your data string. In this case, I'd split by the new line character and no longer need to remove it from the list. I'd also extract the element from the set without converting to a list: data = """vJrwpWtwJgWrhcsFMMfFFhFp jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL PmmdzqPrVvPwwTWBwg wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn ttgJtRGJQctTZtZT CrZsJsPPZsGzwwsLwLmpwMDw""" lines = data.split('\n') for i in range(0, len(lines), 3): (ch,) = set(lines[i]) & set(lines[i + 1]) & set(lines[i + 2]) print(ch) A: If I understand your question correctly: Just solved it this morning coincidentally. ;-) # ordering = ascii_lowercase + ascii_uppercase # with open('day03.in') as fin: # data = fin.read().strip() # b = 0 lines = data.split('\n') # assuming some data read-in already # go through 3 chunks: for i in range(0, len(lines), 3): chunk = lines[i: i+3] print(chunk) #for i, c in enumerate(ordering): # if all(c in ll for ll in chunk): #b += ordering.index(c) + 1 # answer.
How to split the data in a group of N lines and find intersection character
I have a dataset like below: data="""vJrwpWtwJgWrhcsFMMfFFhFp jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL PmmdzqPrVvPwwTWBwg wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn ttgJtRGJQctTZtZT CrZsJsPPZsGzwwsLwLmpwMDw""" These are separate lines. Now, I want to group the data in a set of 3 rows and find the intersecting character in those lines. For example, r is the common character in the first group and Z is the common character in the second group. So, below is my code: lines = [] for i in range(len(data.splitlines())): lines.append(data[i]) for j in lines: new_line = [k for k in j[i] if k in j[i + 1]] print(new_line) It gives me a string index out-of-range error. new_line = [k for k in j[i] if k in j[i + 1]] IndexError: string index out of range
[ "For the record: this was the Advent of Code 2022 Day 3 Part 2 challenge. I kept my data in a file called input.txt and just read line by line, but this solution can be applied to a string too.\nI converted every line into a set and used the & intersection operator. From there, I converted it to a list and removed the new line character. s[0] is therefore the only repeated character. Like this:\nwith open('input.txt') as f:\n lines = f.readlines()\n for i in range(0, len(lines), 3):\n s = list(set(lines[i]) & set(lines[i + 1]) & set(lines[i + 2]))\n s.remove('\\n')\n print(s[0])\n\nHere's an example using your data string. In this case, I'd split by the new line character and no longer need to remove it from the list. I'd also extract the element from the set without converting to a list:\ndata = \"\"\"vJrwpWtwJgWrhcsFMMfFFhFp\njqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL\nPmmdzqPrVvPwwTWBwg\nwMqvLMZHhHMvwLHjbvcjnnSBnvTQFn\nttgJtRGJQctTZtZT\nCrZsJsPPZsGzwwsLwLmpwMDw\"\"\"\n\n\nlines = data.split('\\n')\nfor i in range(0, len(lines), 3):\n (ch,) = set(lines[i]) & set(lines[i + 1]) & set(lines[i + 2])\n print(ch)\n\n", "If I understand your question correctly:\nJust solved it this morning coincidentally. ;-)\n# ordering = ascii_lowercase + ascii_uppercase\n\n# with open('day03.in') as fin:\n# data = fin.read().strip()\n \n# b = 0\nlines = data.split('\\n') # assuming some data read-in already\n\n# go through 3 chunks:\nfor i in range(0, len(lines), 3):\n chunk = lines[i: i+3]\n print(chunk)\n \n #for i, c in enumerate(ordering):\n # if all(c in ll for ll in chunk):\n #b += ordering.index(c) + 1 # answer.\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074668908_python_python_3.x.txt
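Since the title of the record above asks about groups of N lines, here is a generalised sketch for arbitrary group sizes, assuming every group of n lines shares at least one character:

from functools import reduce

data = """vJrwpWtwJgWrhcsFMMfFFhFp
jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
PmmdzqPrVvPwwTWBwg
wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
ttgJtRGJQctTZtZT
CrZsJsPPZsGzwwsLwLmpwMDw"""

def common_chars(lines, n=3):
    # Intersect the character sets of each consecutive group of n lines.
    for i in range(0, len(lines), n):
        yield reduce(set.intersection, map(set, lines[i:i + n]))

for chars in common_chars(data.split('\n')):
    print(chars)  # {'r'} then {'Z'}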
Q: Finding the most frequent character in a string I found this programming problem while looking at a job posting on SO. I thought it was pretty interesting and as a beginner Python programmer I attempted to tackle it. However I feel my solution is quite...messy...can anyone make any suggestions to optimize it or make it cleaner? I know it's pretty trivial, but I had fun writing it. Note: Python 2.6 The problem: Write pseudo-code (or actual code) for a function that takes in a string and returns the letter that appears the most in that string. My attempt: import string def find_max_letter_count(word): alphabet = string.ascii_lowercase dictionary = {} for letters in alphabet: dictionary[letters] = 0 for letters in word: dictionary[letters] += 1 dictionary = sorted(dictionary.items(), reverse=True, key=lambda x: x[1]) for position in range(0, 26): print dictionary[position] if position != len(dictionary) - 1: if dictionary[position + 1][1] < dictionary[position][1]: break find_max_letter_count("helloworld") Output: >>> ('l', 3) Updated example: find_max_letter_count("balloon") >>> ('l', 2) ('o', 2) A: There are many ways to do this shorter. For example, you can use the Counter class (in Python 2.7 or later): import collections s = "helloworld" print(collections.Counter(s).most_common(1)[0]) If you don't have that, you can do the tally manually (2.5 or later has defaultdict): d = collections.defaultdict(int) for c in s: d[c] += 1 print(sorted(d.items(), key=lambda x: x[1], reverse=True)[0]) Having said that, there's nothing too terribly wrong with your implementation. A: If you are using Python 2.7, you can quickly do this by using collections module. collections is a hight performance data structures module. Read more at http://docs.python.org/library/collections.html#counter-objects >>> from collections import Counter >>> x = Counter("balloon") >>> x Counter({'o': 2, 'a': 1, 'b': 1, 'l': 2, 'n': 1}) >>> x['o'] 2 A: Here is way to find the most common character using a dictionary message = "hello world" d = {} letters = set(message) for l in letters: d[message.count(l)] = l print d[d.keys()[-1]], d.keys()[-1] A: Here's a way using FOR LOOP AND COUNT() w = input() r = 1 for i in w: p = w.count(i) if p > r: r = p s = i print(s) A: The way I did uses no built-in functions from Python itself, only for-loops and if-statements. def most_common_letter(): string = str(input()) letters = set(string) if " " in letters: # If you want to count spaces too, ignore this if-statement letters.remove(" ") max_count = 0 freq_letter = [] for letter in letters: count = 0 for char in string: if char == letter: count += 1 if count == max_count: max_count = count freq_letter.append(letter) if count > max_count: max_count = count freq_letter.clear() freq_letter.append(letter) return freq_letter, max_count This ensures you get every letter/character that gets used the most, and not just one. It also returns how often it occurs. Hope this helps :) A: If you want to have all the characters with the maximum number of counts, then you can do a variation on one of the two ideas proposed so far: import heapq # Helps finding the n largest counts import collections def find_max_counts(sequence): """ Returns an iterator that produces the (element, count)s with the highest number of occurrences in the given sequence. In addition, the elements are sorted. 
""" if len(sequence) == 0: raise StopIteration counter = collections.defaultdict(int) for elmt in sequence: counter[elmt] += 1 counts_heap = [ (-count, elmt) # The largest elmt counts are the smallest elmts for (elmt, count) in counter.iteritems()] heapq.heapify(counts_heap) highest_count = counts_heap[0][0] while True: try: (opp_count, elmt) = heapq.heappop(counts_heap) except IndexError: raise StopIteration if opp_count != highest_count: raise StopIteration yield (elmt, -opp_count) for (letter, count) in find_max_counts('balloon'): print (letter, count) for (word, count) in find_max_counts(['he', 'lkj', 'he', 'll', 'll']): print (word, count) This yields, for instance: lebigot@weinberg /tmp % python count.py ('l', 2) ('o', 2) ('he', 2) ('ll', 2) This works with any sequence: words, but also ['hello', 'hello', 'bonjour'], for instance. The heapq structure is very efficient at finding the smallest elements of a sequence without sorting it completely. On the other hand, since there are not so many letter in the alphabet, you can probably also run through the sorted list of counts until the maximum count is not found anymore, without this incurring any serious speed loss. A: def most_frequent(text): frequencies = [(c, text.count(c)) for c in set(text)] return max(frequencies, key=lambda x: x[1])[0] s = 'ABBCCCDDDD' print(most_frequent(s)) frequencies is a list of tuples that count the characters as (character, count). We apply max to the tuples using count's and return that tuple's character. In the event of a tie, this solution will pick only one. A: Question : Most frequent character in a string The maximum occurring character in an input string Method 1 : a = "GiniGinaProtijayi" d ={} chh = '' max = 0 for ch in a : d[ch] = d.get(ch,0) +1 for val in sorted(d.items(),reverse=True , key = lambda ch : ch[1]): chh = ch max = d.get(ch) print(chh) print(max) Method 2 : a = "GiniGinaProtijayi" max = 0 chh = '' count = [0] * 256 for ch in a : count[ord(ch)] += 1 for ch in a : if(count[ord(ch)] > max): max = count[ord(ch)] chh = ch print(chh) Method 3 : import collections line ='North Calcutta Shyambazaar Soudipta Tabu Roopa Roopi Gina Gini Protijayi Sovabazaar Paikpara Baghbazaar Roopa' bb = collections.Counter(line).most_common(1)[0][0] print(bb) Method 4 : line =' North Calcutta Shyambazaar Soudipta Tabu Roopa Roopi Gina Gini Protijayi Sovabazaar Paikpara Baghbazaar Roopa' def mostcommonletter(sentence): letters = list(sentence) return (max(set(letters),key = letters.count)) print(mostcommonletter(line)) A: I noticed that most of the answers only come back with one item even if there is an equal amount of characters most commonly used. For example "iii 444 yyy 999". There are an equal amount of spaces, i's, 4's, y's, and 9's. The solution should come back with everything, not just the letter i: sentence = "iii 444 yyy 999" # Returns the first items value in the list of tuples (i.e) the largest number # from Counter().most_common() largest_count: int = Counter(sentence).most_common()[0][1] # If the tuples value is equal to the largest value, append it to the list most_common_list: list = [(x, y) for x, y in Counter(sentence).items() if y == largest_count] print(most_common_count) # RETURNS [('i', 3), (' ', 3), ('4', 3), ('y', 3), ('9', 3)] A: Here are a few things I'd do: Use collections.defaultdict instead of the dict you initialise manually. Use inbuilt sorting and max functions like max instead of working it out yourself - it's easier. 
Here's my final result: from collections import defaultdict def find_max_letter_count(word): matches = defaultdict(int) # makes the default value 0 for char in word: matches[char] += 1 return max(matches.iteritems(), key=lambda x: x[1]) find_max_letter_count('helloworld') == ('l', 3) A: If you could not use collections for any reason, I would suggest the following implementation: s = input() d = {} # We iterate through a string and if we find the element, that # is already in the dict, than we are just incrementing its counter. for ch in s: if ch in d: d[ch] += 1 else: d[ch] = 1 # If there is a case, that we are given empty string, then we just # print a message, which says about it. print(max(d, key=d.get, default='Empty string was given.')) A: sentence = "This is a great question made me wanna watch matrix again!" char_frequency = {} for char in sentence: if char == " ": #to skip spaces continue elif char in char_frequency: char_frequency[char] += 1 else: char_frequency[char] = 1 char_frequency_sorted = sorted( char_frequency.items(), key=lambda ky: ky[1], reverse=True ) print(char_frequency_sorted[0]) #output -->('a', 9) A: # return the letter with the max frequency. def maxletter(word:str) -> tuple: ''' return the letter with the max occurance ''' v = 1 dic = {} for letter in word: if letter in dic: dic[letter] += 1 else: dic[letter] = v for k in dic: if dic[k] == max(dic.values()): return k, dic[k] l, n = maxletter("Hello World") print(l, n) output: l 3 A: you may also try something below. from pprint import pprint sentence = "this is a common interview question" char_frequency = {} for char in sentence: if char in char_frequency: char_frequency[char] += 1 else: char_frequency[char] = 1 pprint(char_frequency, width = 1) out = sorted(char_frequency.items(), key = lambda kv : kv[1], reverse = True) print(out) print(out[0]) A: statistics.mode(data) Return the single most common data point from discrete or nominal data. The mode (when it exists) is the most typical value and serves as a measure of central location. If there are multiple modes with the same frequency, returns the first one encountered in the data. If the smallest or largest of those is desired instead, use min(multimode(data)) or max(multimode(data)). If the input data is empty, StatisticsError is raised. import statistics as stat test = 'This is a test of the fantastic mode super special function ssssssssssssss' test2 = ['block', 'cheese', 'block'] val = stat.mode(test) val2 = stat.mode(test2) print(val, val2) mode assumes discrete data and returns a single value. This is the standard treatment of the mode as commonly taught in schools: mode([1, 1, 2, 3, 3, 3, 3, 4]) 3 The mode is unique in that it is the only statistic in this package that also applies to nominal (non-numeric) data: mode(["red", "blue", "blue", "red", "green", "red", "red"]) 'red'
Finding the most frequent character in a string
I found this programming problem while looking at a job posting on SO. I thought it was pretty interesting and as a beginner Python programmer I attempted to tackle it. However I feel my solution is quite...messy...can anyone make any suggestions to optimize it or make it cleaner? I know it's pretty trivial, but I had fun writing it. Note: Python 2.6 The problem: Write pseudo-code (or actual code) for a function that takes in a string and returns the letter that appears the most in that string. My attempt: import string def find_max_letter_count(word): alphabet = string.ascii_lowercase dictionary = {} for letters in alphabet: dictionary[letters] = 0 for letters in word: dictionary[letters] += 1 dictionary = sorted(dictionary.items(), reverse=True, key=lambda x: x[1]) for position in range(0, 26): print dictionary[position] if position != len(dictionary) - 1: if dictionary[position + 1][1] < dictionary[position][1]: break find_max_letter_count("helloworld") Output: >>> ('l', 3) Updated example: find_max_letter_count("balloon") >>> ('l', 2) ('o', 2)
[ "There are many ways to do this shorter. For example, you can use the Counter class (in Python 2.7 or later):\nimport collections\ns = \"helloworld\"\nprint(collections.Counter(s).most_common(1)[0])\n\nIf you don't have that, you can do the tally manually (2.5 or later has defaultdict):\nd = collections.defaultdict(int)\nfor c in s:\n d[c] += 1\nprint(sorted(d.items(), key=lambda x: x[1], reverse=True)[0])\n\nHaving said that, there's nothing too terribly wrong with your implementation.\n", "If you are using Python 2.7, you can quickly do this by using collections module.\ncollections is a hight performance data structures module. Read more at\nhttp://docs.python.org/library/collections.html#counter-objects\n>>> from collections import Counter\n>>> x = Counter(\"balloon\")\n>>> x\nCounter({'o': 2, 'a': 1, 'b': 1, 'l': 2, 'n': 1})\n>>> x['o']\n2\n\n", "Here is way to find the most common character using a dictionary\nmessage = \"hello world\"\nd = {}\nletters = set(message)\nfor l in letters:\n d[message.count(l)] = l\n\nprint d[d.keys()[-1]], d.keys()[-1]\n\n", "Here's a way using FOR LOOP AND COUNT()\nw = input()\nr = 1\nfor i in w:\n p = w.count(i)\n if p > r:\n r = p\n s = i\nprint(s)\n\n", "The way I did uses no built-in functions from Python itself, only for-loops and if-statements.\ndef most_common_letter():\n string = str(input())\n letters = set(string)\n if \" \" in letters: # If you want to count spaces too, ignore this if-statement\n letters.remove(\" \")\n max_count = 0\n freq_letter = []\n for letter in letters:\n count = 0\n for char in string:\n if char == letter:\n count += 1\n if count == max_count:\n max_count = count\n freq_letter.append(letter)\n if count > max_count:\n max_count = count\n freq_letter.clear()\n freq_letter.append(letter)\n return freq_letter, max_count\n\nThis ensures you get every letter/character that gets used the most, and not just one. It also returns how often it occurs. Hope this helps :)\n", "If you want to have all the characters with the maximum number of counts, then you can do a variation on one of the two ideas proposed so far:\nimport heapq # Helps finding the n largest counts\nimport collections\n\ndef find_max_counts(sequence):\n \"\"\"\n Returns an iterator that produces the (element, count)s with the\n highest number of occurrences in the given sequence.\n\n In addition, the elements are sorted.\n \"\"\"\n\n if len(sequence) == 0:\n raise StopIteration\n\n counter = collections.defaultdict(int)\n for elmt in sequence:\n counter[elmt] += 1\n\n counts_heap = [\n (-count, elmt) # The largest elmt counts are the smallest elmts\n for (elmt, count) in counter.iteritems()]\n\n heapq.heapify(counts_heap)\n\n highest_count = counts_heap[0][0]\n\n while True:\n\n try:\n (opp_count, elmt) = heapq.heappop(counts_heap)\n except IndexError:\n raise StopIteration\n\n if opp_count != highest_count:\n raise StopIteration\n\n yield (elmt, -opp_count)\n\nfor (letter, count) in find_max_counts('balloon'):\n print (letter, count)\n\nfor (word, count) in find_max_counts(['he', 'lkj', 'he', 'll', 'll']):\n print (word, count)\n\nThis yields, for instance:\nlebigot@weinberg /tmp % python count.py\n('l', 2)\n('o', 2)\n('he', 2)\n('ll', 2)\n\nThis works with any sequence: words, but also ['hello', 'hello', 'bonjour'], for instance.\nThe heapq structure is very efficient at finding the smallest elements of a sequence without sorting it completely. 
On the other hand, since there are not so many letter in the alphabet, you can probably also run through the sorted list of counts until the maximum count is not found anymore, without this incurring any serious speed loss.\n", "def most_frequent(text):\n frequencies = [(c, text.count(c)) for c in set(text)]\n return max(frequencies, key=lambda x: x[1])[0]\n\ns = 'ABBCCCDDDD'\nprint(most_frequent(s))\n\nfrequencies is a list of tuples that count the characters as (character, count). We apply max to the tuples using count's and return that tuple's character. In the event of a tie, this solution will pick only one.\n", "Question :\nMost frequent character in a string\nThe maximum occurring character in an input string\nMethod 1 :\na = \"GiniGinaProtijayi\"\n\nd ={}\nchh = ''\nmax = 0 \nfor ch in a : d[ch] = d.get(ch,0) +1 \nfor val in sorted(d.items(),reverse=True , key = lambda ch : ch[1]):\n chh = ch\n max = d.get(ch)\n \n \nprint(chh) \nprint(max) \n\nMethod 2 :\na = \"GiniGinaProtijayi\"\n\nmax = 0 \nchh = ''\ncount = [0] * 256 \nfor ch in a : count[ord(ch)] += 1\nfor ch in a :\n if(count[ord(ch)] > max):\n max = count[ord(ch)] \n chh = ch\n \nprint(chh) \n\nMethod 3 :\n import collections\n \n line ='North Calcutta Shyambazaar Soudipta Tabu Roopa Roopi Gina Gini Protijayi Sovabazaar Paikpara Baghbazaar Roopa'\n \nbb = collections.Counter(line).most_common(1)[0][0]\nprint(bb)\n\nMethod 4 :\nline =' North Calcutta Shyambazaar Soudipta Tabu Roopa Roopi Gina Gini Protijayi Sovabazaar Paikpara Baghbazaar Roopa'\n\n\ndef mostcommonletter(sentence):\n letters = list(sentence)\n return (max(set(letters),key = letters.count))\n\n\nprint(mostcommonletter(line)) \n\n \n\n", "I noticed that most of the answers only come back with one item even if there is an equal amount of characters most commonly used. For example \"iii 444 yyy 999\". There are an equal amount of spaces, i's, 4's, y's, and 9's. 
The solution should come back with everything, not just the letter i:\nsentence = \"iii 444 yyy 999\"\n\n# Returns the first items value in the list of tuples (i.e) the largest number\n# from Counter().most_common()\nlargest_count: int = Counter(sentence).most_common()[0][1]\n\n# If the tuples value is equal to the largest value, append it to the list\nmost_common_list: list = [(x, y)\n for x, y in Counter(sentence).items() if y == largest_count]\n\nprint(most_common_count)\n\n# RETURNS\n[('i', 3), (' ', 3), ('4', 3), ('y', 3), ('9', 3)]\n\n", "Here are a few things I'd do:\n\nUse collections.defaultdict instead of the dict you initialise manually.\nUse inbuilt sorting and max functions like max instead of working it out yourself - it's easier.\n\nHere's my final result:\nfrom collections import defaultdict\n\ndef find_max_letter_count(word):\n matches = defaultdict(int) # makes the default value 0\n\n for char in word:\n matches[char] += 1\n\n return max(matches.iteritems(), key=lambda x: x[1])\n\nfind_max_letter_count('helloworld') == ('l', 3)\n\n", "If you could not use collections for any reason, I would suggest the following implementation:\ns = input()\nd = {}\n\n# We iterate through a string and if we find the element, that\n# is already in the dict, than we are just incrementing its counter.\nfor ch in s:\n if ch in d:\n d[ch] += 1\n else:\n d[ch] = 1\n\n# If there is a case, that we are given empty string, then we just\n# print a message, which says about it.\nprint(max(d, key=d.get, default='Empty string was given.'))\n\n", "sentence = \"This is a great question made me wanna watch matrix again!\"\n\nchar_frequency = {}\n\nfor char in sentence:\n if char == \" \": #to skip spaces\n continue\n elif char in char_frequency:\n char_frequency[char] += 1 \n else:\n char_frequency[char] = 1\n\n\nchar_frequency_sorted = sorted(\n char_frequency.items(), key=lambda ky: ky[1], reverse=True\n)\nprint(char_frequency_sorted[0]) #output -->('a', 9)\n\n", "# return the letter with the max frequency.\n\ndef maxletter(word:str) -> tuple:\n ''' return the letter with the max occurance '''\n v = 1\n dic = {}\n for letter in word:\n if letter in dic:\n dic[letter] += 1\n else:\n dic[letter] = v\n\n for k in dic:\n if dic[k] == max(dic.values()):\n return k, dic[k]\n\nl, n = maxletter(\"Hello World\")\nprint(l, n)\n\noutput: l 3\n", "you may also try something below.\nfrom pprint import pprint \n sentence = \"this is a common interview question\" \n \n char_frequency = {} \n for char in sentence: \n if char in char_frequency: \n char_frequency[char] += 1 \n else: \n char_frequency[char] = 1 \n pprint(char_frequency, width = 1) \n out = sorted(char_frequency.items(), \n key = lambda kv : kv[1], reverse = True) \n print(out) \n print(out[0]) \n\n", "statistics.mode(data)\nReturn the single most common data point from discrete or nominal data. The mode (when it exists) is the most typical value and serves as a measure of central location.\nIf there are multiple modes with the same frequency, returns the first one encountered in the data. If the smallest or largest of those is desired instead, use min(multimode(data)) or max(multimode(data)). If the input data is empty, StatisticsError is raised.\nimport statistics as stat\n\ntest = 'This is a test of the fantastic mode super special function ssssssssssssss'\ntest2 = ['block', 'cheese', 'block']\nval = stat.mode(test)\nval2 = stat.mode(test2)\nprint(val, val2)\n\nmode assumes discrete data and returns a single value. 
This is the standard treatment of the mode as commonly taught in schools:\n\n\n\n\n\n\nmode([1, 1, 2, 3, 3, 3, 3, 4])\n3\nThe mode is unique in that it is the only statistic in this package that also applies to nominal (non-numeric) data:\n\n\n\n\n\n\nmode([\"red\", \"blue\", \"blue\", \"red\", \"green\", \"red\", \"red\"])\n'red'\n" ]
[ 36, 5, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[ "#file:filename\n#quant:no of frequent words you want\n\ndef frequent_letters(file,quant):\n file = open(file)\n file = file.read()\n cnt = Counter\n op = cnt(file).most_common(quant)\n return op \n\n", "# This code is to print all characters in a string which have highest frequency\n \ndef find(str):\n \n y = sorted([[a.count(i),i] for i in set(str)])\n # here,the count of unique character and the character are taken as a list \n # inside y(which is a list). And they are sorted according to the \n # count of each character in the list y. (ascending)\n # Eg : for \"pradeep\", y = [[1,'r'],[1,'a'],[1,'d'],[2,'p'],[2,'e']]\n\n most_freq= y[len(y)-1][0] \n # the count of the most freq character is assigned to the variable 'r'\n # ie, most_freq= 2\n\n x= []\n\n for j in range(len(y)):\n \n if y[j][0] == most_freq:\n x.append(y[j])\n # if the 1st element in the list of list == most frequent \n # character's count, then all the characters which have the \n # highest frequency will be appended to list x.\n # eg :\"pradeep\"\n # x = [['p',2],['e',2]] O/P as expected\n return x\n\nfind(\"pradeep\")\n\n" ]
[ -1, -1 ]
[ "algorithm", "optimization", "python", "time_complexity" ]
stackoverflow_0004131123_algorithm_optimization_python_time_complexity.txt
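Pulling the thread above together: several of the answers return a single character even when the top count is tied, while the updated 'balloon' example wants both l and o. A short tie-aware sketch with collections.Counter:

from collections import Counter

def most_frequent_chars(word):
    counts = Counter(word)
    top = max(counts.values())
    return [(ch, n) for ch, n in counts.items() if n == top]

print(most_frequent_chars("helloworld"))  # [('l', 3)]
print(most_frequent_chars("balloon"))     # [('l', 2), ('o', 2)]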
Q: Why is there an incorrect display of objects in the Pygame? I'm creating a game on Pygame and faced with the problem that the objects are displayed incorrectly I want the objects in a row but I wrote it diagonally My game: import pygame, controls from gun import Gun from pygame.sprite import Group def run(): pygame.init() screen = pygame.display.set_mode((700, 600)) pygame.display.set_caption('Game1') bg_color = (0, 0, 0) gun = Gun(screen) bullets = Group() inos = Group() controls.create_army(screen, inos) while True: controls.events(screen, gun, bullets) gun.update_gun() controls.update(bg_color, screen, gun, inos, bullets) controls.update_bullets(bullets) controls.update_inos(inos) run() controls: import pygame import sys from bullet import Bullet from ino import Ino def events(screen, gun, bullets): for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() elif event.type == pygame.KEYDOWN: if event.key == pygame.K_d: gun.mright = True elif event.key == pygame.K_a: gun.mleft = True elif event.key == pygame.K_SPACE: new_bullet = Bullet(screen, gun) bullets.add(new_bullet) elif event.type == pygame.KEYUP: if event.key == pygame.K_d: gun.mright = False elif event.key == pygame.K_a: gun.mleft = False def update(bg_color, screen, gun, inos, bullets): screen.fill(bg_color) for bullet in bullets.sprites(): bullet.draw_bullet() gun.output() inos.draw(screen) pygame.display.flip() def update_bullets(bullets): bullets.update() for bullet in bullets.copy(): if bullet.rect.bottom <= 0: bullets.remove(bullet) def update_inos(inos): inos.update() def create_army(screen, inos): ino = Ino(screen) ino_width = ino.rect.width number_ino_x = int((700 - 2 * ino_width) / ino_width) ino_height = ino.rect.height number_ino_y = int((800 - 100 - 2 * ino_height) / ino_height) for row_number in range(number_ino_y - 1): for ino_number in range(number_ino_x): ino = Ino(screen) ino.x = ino_width + (ino_width * ino_number) ino.y = ino_height + (ino_height * ino_number) ino.rect.x = ino.x ino.rect.y = ino.rect.height + (ino.rect.height * row_number) inos.add(ino) soldiers: import pygame class Ino(pygame.sprite.Sprite): def __init__(self, screen): super(Ino, self).__init__() self.screen = screen self.image = pygame.image.load(r'C:\Users\ralph\Downloads\Python\Game\Image\r.soldier.png') self.rect = self.image.get_rect() self.rect.x = self.rect.width self.rect.y = self.rect.height self.x = float(self.rect.x) self.y = float(self.rect.y) def draw(self): self.screen.blit(self.image, self.rect) def update(self): self.y += 0.05 self.rect.y = self.y I got this result But I need another How can I fix this? In my humble opinion the problem is in "controls" or "soldiers" update A: The images are diagonal because you calculate the y-coordinate depending on the ino_number, so that the y-coordinate increases with increasing ino_number. The y-coordinate must be the same for all objects. Only the coordinated x must increase with the number ino_number: def create_army(screen, inos): # [...] for row_number in range(number_ino_y - 1): for ino_number in range(number_ino_x): ino = Ino(screen) ino.x = ino_width + (ino_width * ino_number) ino.y = ino_height ino.rect.x = ino.x ino.rect.y = ino.y inos.add(ino)
Why is there an incorrect display of objects in the Pygame?
I'm creating a game in Pygame and am faced with the problem that the objects are displayed incorrectly: I want the objects in a row, but they come out diagonally. My game: import pygame, controls from gun import Gun from pygame.sprite import Group def run(): pygame.init() screen = pygame.display.set_mode((700, 600)) pygame.display.set_caption('Game1') bg_color = (0, 0, 0) gun = Gun(screen) bullets = Group() inos = Group() controls.create_army(screen, inos) while True: controls.events(screen, gun, bullets) gun.update_gun() controls.update(bg_color, screen, gun, inos, bullets) controls.update_bullets(bullets) controls.update_inos(inos) run() controls: import pygame import sys from bullet import Bullet from ino import Ino def events(screen, gun, bullets): for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() elif event.type == pygame.KEYDOWN: if event.key == pygame.K_d: gun.mright = True elif event.key == pygame.K_a: gun.mleft = True elif event.key == pygame.K_SPACE: new_bullet = Bullet(screen, gun) bullets.add(new_bullet) elif event.type == pygame.KEYUP: if event.key == pygame.K_d: gun.mright = False elif event.key == pygame.K_a: gun.mleft = False def update(bg_color, screen, gun, inos, bullets): screen.fill(bg_color) for bullet in bullets.sprites(): bullet.draw_bullet() gun.output() inos.draw(screen) pygame.display.flip() def update_bullets(bullets): bullets.update() for bullet in bullets.copy(): if bullet.rect.bottom <= 0: bullets.remove(bullet) def update_inos(inos): inos.update() def create_army(screen, inos): ino = Ino(screen) ino_width = ino.rect.width number_ino_x = int((700 - 2 * ino_width) / ino_width) ino_height = ino.rect.height number_ino_y = int((800 - 100 - 2 * ino_height) / ino_height) for row_number in range(number_ino_y - 1): for ino_number in range(number_ino_x): ino = Ino(screen) ino.x = ino_width + (ino_width * ino_number) ino.y = ino_height + (ino_height * ino_number) ino.rect.x = ino.x ino.rect.y = ino.rect.height + (ino.rect.height * row_number) inos.add(ino) soldiers: import pygame class Ino(pygame.sprite.Sprite): def __init__(self, screen): super(Ino, self).__init__() self.screen = screen self.image = pygame.image.load(r'C:\Users\ralph\Downloads\Python\Game\Image\r.soldier.png') self.rect = self.image.get_rect() self.rect.x = self.rect.width self.rect.y = self.rect.height self.x = float(self.rect.x) self.y = float(self.rect.y) def draw(self): self.screen.blit(self.image, self.rect) def update(self): self.y += 0.05 self.rect.y = self.y I got this result but I need another. How can I fix this? In my humble opinion the problem is in the "controls" or "soldiers" update
[ "The images are diagonal because you calculate the y-coordinate depending on the ino_number, so that the y-coordinate increases with increasing ino_number. The y-coordinate must be the same for all objects in a row and depend only on row_number. Only the x-coordinate must increase with ino_number:\ndef create_army(screen, inos):\n # [...]\n\n for row_number in range(number_ino_y - 1):\n for ino_number in range(number_ino_x):\n ino = Ino(screen)\n ino.x = ino_width + (ino_width * ino_number)\n ino.y = ino_height + (ino_height * row_number)\n ino.rect.x = ino.x\n ino.rect.y = ino.y\n inos.add(ino)\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074668817_pygame_python.txt
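To make the fix above concrete, a tiny standalone sketch of the corrected placement: x depends only on the column index and y only on the row index. The 40-pixel sprite size and the 3x5 grid are assumed stand-ins, not values taken from the question:

ino_width, ino_height = 40, 40  # assumed sprite size

positions = []
for row_number in range(3):          # rows
    for ino_number in range(5):      # columns
        x = ino_width + ino_width * ino_number    # advances across the row
        y = ino_height + ino_height * row_number  # constant within a row
        positions.append((x, y))

print(positions[:5])  # first row: y stays at 40 while x advances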
Q: Clearing an animated plot in tkinter to reload animation I have an arduino connected to a pressure sensor collecting data. I am able to connect to the sensor and plot the data with an animation via the code below in a tkinter window/frame. When I open the tkinter window, I click the graph button and the animation loads as expected. I want to click the clear button to delete the animation and then click graph again get new data. Ive played around with plt clear, forgetting the pack and other options but still have not found a solution. Any help would be appreciated... import tkinter as tk from tkinter import * import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import matplotlib.animation as animation import serial import datetime root = tk.Tk() root.state('zoomed') width, height = root.winfo_screenwidth(), root.winfo_screenheight() root.geometry('%dx%d+0+0' % (width, height)) root.configure(background='white') root.title("CMJ") global data, times times = [] data = [] fig, ax = plt.subplots() ax.set_ylim(0, 40) plt.xticks(rotation=45, ha='right') def animate(i): ser = serial.Serial('/dev/cu.usbserial-1420', 115200, timeout=1) for i in range(20): raw = ser.readline() dec = raw.decode() strline = dec.split(" ") lines = float(strline[0]) now = datetime.datetime.now() now1 = now.strftime('%M:%S.%f') now = now1[:-5] data.append(lines) times.append(now) ser.close() line, = ax.plot(times, data, label="Force", color="blue") line.set_ydata(data) x = max(data) most.configure(text=x) return line, def graph(): graphframe.pack(pady=20) canvas = FigureCanvasTkAgg(fig, master=graphframe) canvas.get_tk_widget().pack() ani = animation.FuncAnimation(fig, animate, frames=19, repeat=FALSE, interval=20, cache_frame_data=FALSE) canvas.draw() def clear(): plt.close("all") buttonframe = Frame(root) buttonframe.pack() graphframe = Frame(root) graphframe.pack() button = tk.Button(buttonframe, text='graph', command=graph, width=15) button.pack() button2 = tk.Button(buttonframe, text='clear', command=clear, width=15) button2.pack() most = tk.Label(buttonframe, text="0") most.pack() root.mainloop() Ive tried to forget the frame, but when I repack, the frame pops up with he old completed animation and not with new values. In the animate function, when I print the data and times, it actually saves new values into he variables but I cannot get them to reanimate the plot with the new data. A: Well it took me 3 days and A LOT of searching, trial and error but ive been able to get the desired result. It may not be the most efficient but it's a start. Ive created a new function to create the subplots setup() I call that initially when the program opens. Then in the clear() function I am able to forget the canvas widget, reset the variables and call the setup() function again. Now when I click graph() function it creates a new graph. 
import tkinter as tk from tkinter import * import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import matplotlib.animation as animation import serial import datetime root = tk.Tk() root.state('zoomed') width, height = root.winfo_screenwidth(), root.winfo_screenheight() root.geometry('%dx%d+0+0' % (width, height)) root.configure(background='white') root.title("CMJ") times = [] data = [] def setup(): global fig, ax fig, ax = plt.subplots() ax.set_ylim(0, 40) plt.xticks(rotation=45, ha='right') def animate(i): ser = serial.Serial('/dev/cu.usbserial-1420', 115200, timeout=1) for i in range(20): raw = ser.readline() dec = raw.decode() strline = dec.split(" ") lines = float(strline[0]) now = datetime.datetime.now() now1 = now.strftime('%M:%S.%f') now = now1[:-5] data.append(lines) times.append(now) ser.close() line, = ax.plot(times, data, label="Force", color="blue") line.set_ydata(data) x = max(data) most.configure(text=x) print(times, data) return line, graph() def graph(): global canvas graphframe.pack(pady=20) canvas = FigureCanvasTkAgg(fig, master=graphframe) canvas.get_tk_widget().pack() ani = animation.FuncAnimation(fig, animate, frames=19, repeat=FALSE, interval=20, cache_frame_data=TRUE) canvas.draw() def clear(): global times, data times = [] data = [] canvas.get_tk_widget().forget() setup() buttonframe = Frame(root) buttonframe.pack() graphframe = Frame(root) graphframe.pack() button = tk.Button(buttonframe, text='graph', command=graph, width=15) button.pack() button2 = tk.Button(buttonframe, text='clear', command=clear, width=15) button2.pack() most = tk.Label(buttonframe, text="0") most.pack() setup() root.mainloop() If you notice any thing I am overlooking, anything that is unnecessary or overkill please let me know. Thank you for your time, hope this helps if you arr encountering something similar.
Clearing an animated plot in tkinter to reload animation
I have an arduino connected to a pressure sensor collecting data. I am able to connect to the sensor and plot the data with an animation via the code below in a tkinter window/frame. When I open the tkinter window, I click the graph button and the animation loads as expected. I want to click the clear button to delete the animation and then click graph again to get new data. I've played around with plt clear, forgetting the pack and other options but still have not found a solution. Any help would be appreciated... import tkinter as tk from tkinter import * import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import matplotlib.animation as animation import serial import datetime root = tk.Tk() root.state('zoomed') width, height = root.winfo_screenwidth(), root.winfo_screenheight() root.geometry('%dx%d+0+0' % (width, height)) root.configure(background='white') root.title("CMJ") global data, times times = [] data = [] fig, ax = plt.subplots() ax.set_ylim(0, 40) plt.xticks(rotation=45, ha='right') def animate(i): ser = serial.Serial('/dev/cu.usbserial-1420', 115200, timeout=1) for i in range(20): raw = ser.readline() dec = raw.decode() strline = dec.split(" ") lines = float(strline[0]) now = datetime.datetime.now() now1 = now.strftime('%M:%S.%f') now = now1[:-5] data.append(lines) times.append(now) ser.close() line, = ax.plot(times, data, label="Force", color="blue") line.set_ydata(data) x = max(data) most.configure(text=x) return line, def graph(): graphframe.pack(pady=20) canvas = FigureCanvasTkAgg(fig, master=graphframe) canvas.get_tk_widget().pack() ani = animation.FuncAnimation(fig, animate, frames=19, repeat=FALSE, interval=20, cache_frame_data=FALSE) canvas.draw() def clear(): plt.close("all") buttonframe = Frame(root) buttonframe.pack() graphframe = Frame(root) graphframe.pack() button = tk.Button(buttonframe, text='graph', command=graph, width=15) button.pack() button2 = tk.Button(buttonframe, text='clear', command=clear, width=15) button2.pack() most = tk.Label(buttonframe, text="0") most.pack() root.mainloop() I've tried to forget the frame, but when I repack, the frame pops up with the old completed animation and not with new values. In the animate function, when I print the data and times, it actually saves new values into the variables but I cannot get them to reanimate the plot with the new data.
[ "Well it took me 3 days and A LOT of searching, trial and error but ive been able to get the desired result. It may not be the most efficient but it's a start.\nIve created a new function to create the subplots setup() I call that initially when the program opens. Then in the clear() function I am able to forget the canvas widget, reset the variables and call the setup() function again. Now when I click graph() function it creates a new graph.\nimport tkinter as tk\nfrom tkinter import *\nimport matplotlib.pyplot as plt\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg\nimport matplotlib.animation as animation\nimport serial\nimport datetime\n\nroot = tk.Tk()\nroot.state('zoomed')\nwidth, height = root.winfo_screenwidth(), root.winfo_screenheight()\nroot.geometry('%dx%d+0+0' % (width, height))\nroot.configure(background='white')\nroot.title(\"CMJ\")\n\ntimes = []\ndata = []\n\n\ndef setup():\n global fig, ax\n fig, ax = plt.subplots()\n ax.set_ylim(0, 40)\n plt.xticks(rotation=45, ha='right')\n\ndef animate(i):\n ser = serial.Serial('/dev/cu.usbserial-1420', 115200, timeout=1)\n for i in range(20):\n raw = ser.readline()\n dec = raw.decode()\n strline = dec.split(\" \")\n lines = float(strline[0])\n now = datetime.datetime.now()\n now1 = now.strftime('%M:%S.%f')\n now = now1[:-5]\n data.append(lines)\n times.append(now)\n ser.close()\n line, = ax.plot(times, data, label=\"Force\", color=\"blue\")\n line.set_ydata(data)\n x = max(data)\n most.configure(text=x)\n print(times, data)\n return line,\n graph()\n\ndef graph():\n global canvas\n graphframe.pack(pady=20)\n canvas = FigureCanvasTkAgg(fig, master=graphframe)\n canvas.get_tk_widget().pack()\n ani = animation.FuncAnimation(fig, animate, frames=19, repeat=FALSE, interval=20, cache_frame_data=TRUE)\n canvas.draw()\n\ndef clear():\n global times, data\n times = []\n data = []\n canvas.get_tk_widget().forget()\n setup()\n\n\nbuttonframe = Frame(root)\nbuttonframe.pack()\ngraphframe = Frame(root)\ngraphframe.pack()\nbutton = tk.Button(buttonframe, text='graph', command=graph, width=15)\nbutton.pack()\nbutton2 = tk.Button(buttonframe, text='clear', command=clear, width=15)\nbutton2.pack()\nmost = tk.Label(buttonframe, text=\"0\")\nmost.pack()\n\nsetup()\n\nroot.mainloop()\n\nIf you notice any thing I am overlooking, anything that is unnecessary or overkill please let me know.\nThank you for your time, hope this helps if you arr encountering something similar.\n" ]
[ 0 ]
[]
[]
[ "arduino", "matplotlib", "python", "tkinter" ]
stackoverflow_0074658736_arduino_matplotlib_python_tkinter.txt
Q: How to print "\'" in python I would like to print in python

print("\'/")

expected output: \'/
Thanks for helping me! I just need to know how to print a backslash with a ' after it, thanks!
A: You can use either

print("\\'/")

or

print(r"\'/")

The escape character in Python is \ and the r prefix before the string represents the string as a literal, without the need for escape chars.
A: Is this what you're looking for?

print("\\\'/")

A: >>> print(r"\'/")
\'/
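To make the escaping rules concrete, here is a small demonstration (a hedged sketch with made-up variable names; all three spellings build the same three-character string):

# Three equivalent ways to build the string \'/
a = "\\'/"      # backslash escaped with another backslash
b = '\\\'/'     # backslash escaped, then an escaped quote
c = r"\'/"      # raw string: backslashes are kept literally

print(a, b, c)   # \'/ \'/ \'/
print(len(a))    # 3 characters: \  '  /
assert a == b == c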
How to print "\'" in python
I would like to print in python

print("\'/")

expected output: \'/
Thanks for helping me! I just need to know how to print a backslash with a ' after it, thanks!
[ "You can use either\nprint(\"\\\\'/\") \n\nor \nprint(r\"\\'/\")\n\nThe escape character in Python is \\ and the r prefix before the string represents the string as a literal without the need for escape chars. \n", "Is this what you're looking for?\nprint(\"\\\\\\'/\")\n", ">>> print(r\"\\'/\")\n\\'/\n\n" ]
[ 2, 2, 2 ]
[ "The Best way to do this is to Add a Front Slash() and then inverted commas.\nprint('a'')\na='That's a Sample Case'\nprint(a)\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0062311463_python.txt
Q: ValueError: could not broadcast input array from shape (200,200,3) into shape (200,200) ValueError: could not broadcast input array from shape (200,200,3) into shape (200,200)

img_000 = np.array(img_00)

A: use

np.asarray(img_00)

your image needs 3 channels: (width, height, colorchannels)
A: A small example that displays a similar error

In [72]: alist = [np.ones((3,3,3)), np.zeros((3,3))]

In [73]: np.array(alist)
C:\Users\paul\AppData\Local\Temp\ipykernel_7196\2629805649.py:1: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  np.array(alist)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [73], in <cell line: 1>()
----> 1 np.array(alist)

ValueError: could not broadcast input array from shape (3,3,3) into shape (3,3)
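To illustrate the channel-count point with a hedged sketch (array names are hypothetical): either give the destination array a third dimension so the shapes match, or collapse the RGB image to a single channel before assigning.

import numpy as np

img_00 = np.zeros((200, 200, 3), dtype=np.uint8)   # stand-in RGB image

# Option 1: make the destination 3-channel so the shapes match
target = np.empty((200, 200, 3), dtype=np.uint8)
target[:] = img_00                                  # broadcasts cleanly

# Option 2: reduce the image to one channel first (simple mean over RGB)
gray = img_00.mean(axis=2)                          # shape (200, 200)
target2 = np.empty((200, 200))
target2[:] = gray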
ValueError: could not broadcast input array from shape (200,200,3) into shape (200,200)
ValueError: could not broadcast input array from shape (200,200,3) into shape (200,200)

img_000 = np.array(img_00)
[ "use\nnp.asarray(img_00)\n\nyour image needs 3 channels: (width, height,colorchannels)\n", "A small example that displays a similar error\nIn [72]: alist = [np.ones((3,3,3)), np.zeros((3,3))]\n\nIn [73]: np.array(alist)\nC:\\Users\\paul\\AppData\\Local\\Temp\\ipykernel_7196\\2629805649.py:1: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.\n np.array(alist)\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nInput In [73], in <cell line: 1>()\n----> 1 np.array(alist)\n\nValueError: could not broadcast input array from shape (3,3,3) into shape (3,3)\n\n" ]
[ 1, 0 ]
[]
[]
[ "numpy", "python", "tensorflow" ]
stackoverflow_0074667144_numpy_python_tensorflow.txt
Q: Load very large pickle file? so for my bachelor's thesis I am supposed to train a classifier on a very large dataset. I'm gonna get access to my Uni's deep learning cluster at some point, but for now I was told to do a bit of data exploration on the data on my own device. I was told to only use 10% of the data.
Thing is, the pickle file is absolutely massive (8.3 GB on my 16 GB system) and when I try to load it straight up, the system crashes.
I have an Excel file which contains sample data, so I figured I could pickle the sample data and write a script which only reads the first 10% or so of the file. However, when I pickled it and looked at it using pickletools.dis() I realized that I can't just read the top 10% of the file, since it essentially doesn't go row by row, but column by column. So if I were to take the first 10%, I would have data which is entirely useless.
I am not sure if this is the case for every pickle file, because in some thread I have seen one that goes row by row, but I can't check what is the case for my main file, because I can't inspect it at all.
How could I approach this issue (besides buying more RAM lol)?
A: Instead of sampling your file, I would recommend working with the entire file using cloud computing.
You can create a free account in AWS or AZURE using these links:
https://aws.amazon.com/pt/free
https://azure.microsoft.com/pt-br/free/
I would suggest you use AZURE because you will receive 200 dollars of credit and will have more flexibility to use it as you want.
After creating your account, you can create a powerful virtual machine with enough memory to read your entire file.
You can check the link below to see how to use Azure azure-machine-learning to create your VM:
https://github.com/maxreis86/FIEP-Machine-Learning-e-Computacao-em-Nuvem#azure-machine-learning
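If the pickled object is a pandas DataFrame, a short, hedged sketch of the "load everything on the big machine, then keep 10%" workflow could look like this (the file names are hypothetical):

import pandas as pd

# Needs enough RAM for the whole object, i.e. run this on the large VM
df = pd.read_pickle("big_dataset.pkl")

# Keep a reproducible 10% sample for local exploration
sample = df.sample(frac=0.10, random_state=0)
sample.to_pickle("sample_10pct.pkl")   # small enough to open on a laptop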
Load very large pickle file?
so for my bachelor's thesis I am supposed to train a classifier on a very large dataset. I'm gonna get access to my Uni's deep learning cluster at some point, but for now I was told to do a bit of data exploration on the data on my own device. I was told to only use 10% of the data.
Thing is, the pickle file is absolutely massive (8.3 GB on my 16 GB system) and when I try to load it straight up, the system crashes.
I have an Excel file which contains sample data, so I figured I could pickle the sample data and write a script which only reads the first 10% or so of the file. However, when I pickled it and looked at it using pickletools.dis() I realized that I can't just read the top 10% of the file, since it essentially doesn't go row by row, but column by column. So if I were to take the first 10%, I would have data which is entirely useless.
I am not sure if this is the case for every pickle file, because in some thread I have seen one that goes row by row, but I can't check what is the case for my main file, because I can't inspect it at all.
How could I approach this issue (besides buying more RAM lol)?
[ "Instead of sampling your file, I would recommend working with the entire file using cloud computing.\nYou can create a free account in AWS or AZURE using these links\nhttps://aws.amazon.com/pt/free \nhttps://azure.microsoft.com/pt-br/free/\nI would suggest you use AZURE because you will receive 200 dollars of credit and will have more flexibility to use it as you want.\nAfter creating your account, you can create a powerful virtual machine with enough memory to read your entire file.\nYou can check the link below to see how to use Azure azure-machine-learning to create your VM:\nhttps://github.com/maxreis86/FIEP-Machine-Learning-e-Computacao-em-Nuvem#azure-machine-learning\n" ]
[ 0 ]
[]
[]
[ "deserialization", "machine_learning", "pickle", "python" ]
stackoverflow_0074668920_deserialization_machine_learning_pickle_python.txt
Q: Is it possible to use Python as scripts for Linux PAM? I want to use a python script and call it via the pam_exec module. The first answer in this question says that I can't use a python script and a PAM module together.

First off - you cannot use python code as a PAM module, it has to be compiled code that satisfies certain interface requirements. See here for more info.

Here we are clearly given to understand that pam_exec is a PAM module.

pam_exec - PAM module which calls an external command

So is it possible to use python or not? (This also applies to my previous question.)
A: The difference between the two answers you cite is because of how the script is used.
In the negative answer, the python script was listed directly as the PAM module. This will not work. PAM modules need to be shared objects, i.e. compiled binary code. They are directly linked into the running process that uses PAM as needed. A Python script isn't compiled code.
In the positive answer, the PAM module used is pam_exec. pam_exec is a shared object:

/usr/lib64/security/pam_exec.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=d0c1dbb05c0689e3645193b45d3125d3b27b32ce, stripped

pam_exec then runs a program, which CAN be a Python script. Because it runs a program rather than dynamically linking to a shared object, it doesn't have the same limitation. This is the whole point of pam_exec, really.
So yes, you can use Python, but you must pam_exec the script. Do be aware of this note from pam_exec, it's important:

Commands called by pam_exec need to be aware of that the user can have control over the environment.

A: You can use the pam-python library, which provides bindings and helper functions for working with PAM in Python.
Once your PAM module is written and compiled, you can configure it to be used by the PAM system by modifying the appropriate PAM configuration file. For example, if you want to use your PAM module for password authentication, you would add it to the /etc/pam.d/common-password file.
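As a concrete, hedged illustration of the pam_exec route (the service file, script path, and policy below are all hypothetical — adapt them to your system): add a line such as

auth    required    pam_exec.so /usr/local/bin/pam_check.py

to a file under /etc/pam.d/, and make the script executable (chmod +x). pam_exec passes context to the program through environment variables such as PAM_USER and PAM_TYPE, and maps exit status 0 to success:

#!/usr/bin/env python3
# Minimal sketch of a script run by pam_exec (it is not a PAM module itself).
import os
import sys

user = os.environ.get("PAM_USER", "")    # set by pam_exec
phase = os.environ.get("PAM_TYPE", "")   # e.g. "auth", "account", ...

# Toy policy: refuse one hypothetical blocked user, allow everyone else.
if phase == "auth" and user == "blocked":
    sys.exit(1)   # non-zero exit -> PAM failure

sys.exit(0)       # 0 -> PAM success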
Is it possible to use Python as scripts for Linux PAM?
I want to use a python script and call it via the pam_exec module. The first answer in this question says that I can't use a python script and a PAM module together.

First off - you cannot use python code as a PAM module, it has to be compiled code that satisfies certain interface requirements. See here for more info.

Here we are clearly given to understand that pam_exec is a PAM module.

pam_exec - PAM module which calls an external command

So is it possible to use python or not? (This also applies to my previous question.)
[ "The difference between the two answers you cite is because of how the script is used.\nIn the negative answer, the python script was listed directly as the PAM module. This will not work. PAM modules need to be shared objects, e.g. binary compiled code. The are directly linked into the running process that is uses PAM as needed. A Python script isn't compiled code.\nIn the positive answer, the PAM module used is pam_exec. pam_exec is a shared object:\n/usr/lib64/security/pam_exec.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=d0c1dbb05c0689e3645193b45d3125d3b27b32ce, stripped\n\npam_exec then runs a program, which CAN be a Python script. Because it runs a program rather than dynamically linking to an shared object, it doesn't have the same limitation. This is the whole point of pam_exec really.\nSo yes, you can use Python, but you must pam_exec the script. Do be aware of this note from pam_exec, it's important:\n\nCommands called by pam_exec need to be aware of that the user can have control over the environment.\n\n", "You can use the pam-python library, which provides bindings and helper functions for working with PAM in Python.\nOnce your PAM module is written and compiled, you can configure it to be used by the PAM system by modifying the appropriate PAM configuration file. For example, if you want to use your PAM module for password authentication, you would add it to the\n/etc/pam.d/common-password file.\n" ]
[ "The difference between the two answers you cite is because of how the script is used.\nIn the negative answer, the python script was listed directly as the PAM module. This will not work. PAM modules need to be shared objects, i.e. compiled binary code. They are directly linked into the running process that uses PAM as needed. A Python script isn't compiled code.\nIn the positive answer, the PAM module used is pam_exec. pam_exec is a shared object:\n/usr/lib64/security/pam_exec.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=d0c1dbb05c0689e3645193b45d3125d3b27b32ce, stripped\n\npam_exec then runs a program, which CAN be a Python script. Because it runs a program rather than dynamically linking to a shared object, it doesn't have the same limitation. This is the whole point of pam_exec, really.\nSo yes, you can use Python, but you must pam_exec the script. Do be aware of this note from pam_exec, it's important:\n\nCommands called by pam_exec need to be aware of that the user can have control over the environment.\n\n", "You can use the pam-python library, which provides bindings and helper functions for working with PAM in Python.\nOnce your PAM module is written and compiled, you can configure it to be used by the PAM system by modifying the appropriate PAM configuration file. For example, if you want to use your PAM module for password authentication, you would add it to the\n/etc/pam.d/common-password file.\n" ]
[]
[]
[ "linux", "python" ]
stackoverflow_0074668675_linux_python.txt
Q: How to change text color and font in px.timeline Not really a specific code issue; it's just that I can't find how to edit text size and font in a px.timeline, nor for go.bar.

import pandas as pd
import plotly.express as px
import plotly.subplots as sp

df1 = pd.DataFrame([
    dict(unit='MVT', Task="Job A", Start='2009-01-01', Finish='2009-02-28'),
    dict(unit='MVT', Task="Job B", Start='2009-02-28', Finish='2009-04-15'),
    dict(unit='MVT', Task="Job A", Start='2009-04-15', Finish='2009-05-30')
])

fig1 = px.timeline(df1, x_start="Start", x_end="Finish", y="unit", color="Task", text="unit")

How can I do it if I have to have bold text, for example?
A: You can style the text inside each bar through insidetextfont:

fig1.update_traces(insidetextfont=dict(color='white', size=16, family='Times New Roman'))
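For the bold-text part of the question: Plotly renders a small subset of HTML tags (<b>, <i>, <br>) inside on-figure text, so one hedged option is to wrap the label column in <b> tags before passing it as text (the label column name below is made up):

df1["label"] = "<b>" + df1["unit"] + "</b>"   # bold via Plotly's HTML subset

fig1 = px.timeline(df1, x_start="Start", x_end="Finish", y="unit",
                   color="Task", text="label")
fig1.update_traces(insidetextfont=dict(color='white', size=16,
                                       family='Times New Roman'))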
How to change text color and font in px.timeline
Not really a specific code issue; it's just that I can't find how to edit text size and font in a px.timeline, nor for go.bar.

import pandas as pd
import plotly.express as px
import plotly.subplots as sp

df1 = pd.DataFrame([
    dict(unit='MVT', Task="Job A", Start='2009-01-01', Finish='2009-02-28'),
    dict(unit='MVT', Task="Job B", Start='2009-02-28', Finish='2009-04-15'),
    dict(unit='MVT', Task="Job A", Start='2009-04-15', Finish='2009-05-30')
])

fig1 = px.timeline(df1, x_start="Start", x_end="Finish", y="unit", color="Task", text="unit")

How can I do it if I have to have bold text, for example?
[ "You can style the text inside each bar through insidetextfont:\nfig1.update_traces(insidetextfont=dict(color='white', size=16,family='Times New Roman'))\n\n\n" ]
[ 0 ]
[]
[]
[ "plotly", "python" ]
stackoverflow_0074668596_plotly_python.txt
Q: I get AttributeError: 'WebDriver' object has no attribute 'find_element_by_class_name' (The below code is not mine) Ive been trying to get this ixl math bot to work but everytime i run it i get AttributeError: 'WebDriver' object has no attribute 'find_element_by_class_name' Im using selenium 4.3 and the latest python version, if no one can help then at least an explanation of what this error means and how i could fix it would be appreciated, https://github.com/debaet/IXLMultiBot?adlt=strict&toWww=1&redig=1D778E48B58B4E39B6F7082C77F7F797 this is the original GitHub post (not mine) I'm fairly new to python so I only tried a few basic things like restated PATH or double "\" not a lot its supposed to ask for username password grade and lesson link which work but after the selenium chrome windows opens and it gets gives: AttributeError: 'WebDriver' object has no attribute 'find_element_by_class_name' ` from selenium import webdriver import os, time from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.options import Options from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.common.action_chains import ActionChains import sys import colorama from colorama import Fore, Back, Style colorama.init() # config #if trying yourself replace this path with your own path of chromium downloaded (with double back slashes) PATH = ("C:\\Users\\aashu\Downloads\\chromedriver_win32\\chromedriver.exe") driver = webdriver.Chrome(PATH) def main4(argv): lesson = input('Enter lesson link for algerba') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) #change this answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div[2]') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() def main1(argv): lesson = input('Enter an 8th grade lesson link') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) #change this answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div/div/div[9]') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() def main2(argv): lesson = input('Please Enter A 7th Grade Lesson Link: ') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() 
time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div/div/div') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() # 6th Grade def main3(argv): lesson = input('Please Enter A 6th Grade Lesson Link: ') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div/div') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() def op1(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') print('do not touch the window that has just popped up. ') print('smart score goes up and down alot, just go afk or do something in background!') driver.get('https://www.ixl.com/math/grade-7/add-and-subtract-integers') time.sleep(3) driver.refresh() driver.find_element_by_xpath('//*[@id="qlusername"]').send_keys(username) driver.find_element_by_xpath('//*[@id="qlpassword"]').send_keys(password) driver.find_element_by_xpath('//*[@id="qlsubmit"]').click() driver.execute_script('''window.open('',"_blank");''') driver.switch_to.window(driver.window_handles[-1]) driver.get('https://www.meta-calculator.com/scientific-calculator.php?panel-203-simple-calculator') driver.switch_to.window(driver.window_handles[0]) while True: time.sleep(3) variable = driver.find_element_by_class_name('old-space-indent').text print(variable) driver.switch_to.window(driver.window_handles[-1]) box = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/input') time.sleep(3) driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/input').send_keys(variable) answer = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/input') box.send_keys(Keys.BACKSPACE) print('the bot will now pause for 150 seconds to generate some time.') time.sleep(150) box.send_keys(Keys.ENTER) answer = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[1]/span[2]').text a = (answer) c = "=" for char in c: a = a.replace(char, "") print(a) driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/button[2]').click() driver.switch_to.window(driver.window_handles[0]) driver.find_element_by_class_name('fillIn').click() driver.find_element_by_class_name('fillIn').send_keys(a) time.sleep(3) driver.find_element_by_class_name('fillIn').send_keys(Keys.ENTER) driver.refresh() def op2(): 
os.system('cls' if os.name == 'nt' else 'clear') grade = input('Are you in 8th Grade? (Y/N): ') if grade == "Y": main1(sys.argv) print('You are In 8th Grade') print('You will be asked to Enter Your Lesson Link') else: grade2 = input('Are you in 7th Grade? (Y/N): ') if grade2 == "Y": main2(sys.argv) print('You are In 7th Grade') print('You will be asked to Enter Your Lesson Link') else: grade3 = input('Are you in 6th Grade? (Y/N): ') if grade3 == "Y": print('You are In 6th Grade') print('You will be asked to Enter Your Lesson Link') main3(sys.argv) else: print('algerba') main4(sys.argv) def op4(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') def op3(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') def op5(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') def main(): print('Menu: ') print('1. Add more time to your account') print('2. Scrape Answers (get answers)') print('3. Get Teacher Accounts') print('4. Auto Answer (some lessons work)') print('5. Credits') var = input('Enter an Option: ') # goes to the specified option if var==('1'): op1() elif var==('2'): op2() elif var==('3'): op3() elif var==('4'): op4() elif var==('5'): op5() else: print('Please enter numbers only. If you did and still got an error, please enter a number which is listed above.') time.sleep(4) os.system('cls' if os.name == 'nt' else 'clear') main() # end code print('Welcome to The First Functional IXL BOT.') print('DO NOT CLOSE THE CHROME WINDOW THAT IS ABOUT TO POP UP.') print('Please wait..') time.sleep(3) os.system('cls' if os.name == 'nt' else 'clear') main() ` Thank you to anyone that helps, i will try to respond to any answers within 2-3 days :D - Aashu, A: All the find_element_by_* and find_elements_by_* methods are deprecated in current Selenium versions. You need to use driver.find_element(By.CLASS_NAME, " "), driver.find_element(By.XPATH, " ") etc. methods.
I get AttributeError: 'WebDriver' object has no attribute 'find_element_by_class_name'
(The below code is not mine) Ive been trying to get this ixl math bot to work but everytime i run it i get AttributeError: 'WebDriver' object has no attribute 'find_element_by_class_name' Im using selenium 4.3 and the latest python version, if no one can help then at least an explanation of what this error means and how i could fix it would be appreciated, https://github.com/debaet/IXLMultiBot?adlt=strict&toWww=1&redig=1D778E48B58B4E39B6F7082C77F7F797 this is the original GitHub post (not mine) I'm fairly new to python so I only tried a few basic things like restated PATH or double "\" not a lot its supposed to ask for username password grade and lesson link which work but after the selenium chrome windows opens and it gets gives: AttributeError: 'WebDriver' object has no attribute 'find_element_by_class_name' ` from selenium import webdriver import os, time from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.options import Options from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.common.action_chains import ActionChains import sys import colorama from colorama import Fore, Back, Style colorama.init() # config #if trying yourself replace this path with your own path of chromium downloaded (with double back slashes) PATH = ("C:\\Users\\aashu\Downloads\\chromedriver_win32\\chromedriver.exe") driver = webdriver.Chrome(PATH) def main4(argv): lesson = input('Enter lesson link for algerba') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) #change this answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div[2]') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() def main1(argv): lesson = input('Enter an 8th grade lesson link') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) #change this answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div/div/div[9]') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() def main2(argv): lesson = input('Please Enter A 7th Grade Lesson Link: ') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) 
driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div/div/div') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() # 6th Grade def main3(argv): lesson = input('Please Enter A 6th Grade Lesson Link: ') while True: driver.get(lesson) time.sleep(4) driver.refresh() time.sleep(4) q1 = driver.find_element_by_class_name('yui3-practiceagent-content') q2 = q1.text print(repr(q2)) driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[1]/div/div[2]/button').click() time.sleep(4) driver.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/button[2]').click() time.sleep(4) answer = driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[4]/div[2]/div/div/div/div') ans = answer.text print(ans) driver.delete_all_cookies() driver.find_element_by_xpath('/html/body/div[9]/section/div[1]/div[1]/div[6]/div/div[8]/div/div[1]/div[1]/div[2]/button').click() def op1(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') print('do not touch the window that has just popped up. ') print('smart score goes up and down alot, just go afk or do something in background!') driver.get('https://www.ixl.com/math/grade-7/add-and-subtract-integers') time.sleep(3) driver.refresh() driver.find_element_by_xpath('//*[@id="qlusername"]').send_keys(username) driver.find_element_by_xpath('//*[@id="qlpassword"]').send_keys(password) driver.find_element_by_xpath('//*[@id="qlsubmit"]').click() driver.execute_script('''window.open('',"_blank");''') driver.switch_to.window(driver.window_handles[-1]) driver.get('https://www.meta-calculator.com/scientific-calculator.php?panel-203-simple-calculator') driver.switch_to.window(driver.window_handles[0]) while True: time.sleep(3) variable = driver.find_element_by_class_name('old-space-indent').text print(variable) driver.switch_to.window(driver.window_handles[-1]) box = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/input') time.sleep(3) driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/input').send_keys(variable) answer = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/input') box.send_keys(Keys.BACKSPACE) print('the bot will now pause for 150 seconds to generate some time.') time.sleep(150) box.send_keys(Keys.ENTER) answer = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[1]/span[2]').text a = (answer) c = "=" for char in c: a = a.replace(char, "") print(a) driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[1]/div[2]/div[2]/div[4]/div[1]/div[2]/button[2]').click() driver.switch_to.window(driver.window_handles[0]) driver.find_element_by_class_name('fillIn').click() driver.find_element_by_class_name('fillIn').send_keys(a) time.sleep(3) driver.find_element_by_class_name('fillIn').send_keys(Keys.ENTER) driver.refresh() def op2(): os.system('cls' if 
os.name == 'nt' else 'clear') grade = input('Are you in 8th Grade? (Y/N): ') if grade == "Y": main1(sys.argv) print('You are In 8th Grade') print('You will be asked to Enter Your Lesson Link') else: grade2 = input('Are you in 7th Grade? (Y/N): ') if grade2 == "Y": main2(sys.argv) print('You are In 7th Grade') print('You will be asked to Enter Your Lesson Link') else: grade3 = input('Are you in 6th Grade? (Y/N): ') if grade3 == "Y": print('You are In 6th Grade') print('You will be asked to Enter Your Lesson Link') main3(sys.argv) else: print('algerba') main4(sys.argv) def op4(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') def op3(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') def op5(): os.system('cls' if os.name == 'nt' else 'clear') print('If any of these are incorrect, bot will fail.') username = input('Enter Username or Email: ') password = input('Enter Password: ') os.system('cls' if os.name == 'nt' else 'clear') def main(): print('Menu: ') print('1. Add more time to your account') print('2. Scrape Answers (get answers)') print('3. Get Teacher Accounts') print('4. Auto Answer (some lessons work)') print('5. Credits') var = input('Enter an Option: ') # goes to the specified option if var==('1'): op1() elif var==('2'): op2() elif var==('3'): op3() elif var==('4'): op4() elif var==('5'): op5() else: print('Please enter numbers only. If you did and still got an error, please enter a number which is listed above.') time.sleep(4) os.system('cls' if os.name == 'nt' else 'clear') main() # end code print('Welcome to The First Functional IXL BOT.') print('DO NOT CLOSE THE CHROME WINDOW THAT IS ABOUT TO POP UP.') print('Please wait..') time.sleep(3) os.system('cls' if os.name == 'nt' else 'clear') main() ` Thank you to anyone that helps, i will try to respond to any answers within 2-3 days :D - Aashu,
[ "All the find_element_by_* and find_elements_by_* methods are deprecated in current Selenium versions. You need to use driver.find_element(By.CLASS_NAME, \" \"), driver.find_element(By.XPATH, \" \") etc. methods.\n" ]
[ 0 ]
[]
[]
[ "attributeerror", "automation", "bots", "python", "selenium" ]
stackoverflow_0074669026_attributeerror_automation_bots_python_selenium.txt
Q: Why does my while loop omit the last input and add a 0 to the list? I want to build a program that takes the amount of rainfall each day for 7 days and then outputs the total and average rainfall for those days. Initially, I've created a while loop to take the input:

rainfall = 0
rain = []
counter = 1

while counter < 8:
    rain.append(rainfall)
    rainfall = float(input("Enter the rainfall of day {0}: ".format(counter)))
    counter += 1
print(rain)

But the output that is generated is not what I expected:

[0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

It enters a 0 as the first value and then omits the last input (here the input is 1 to 7 as an example).
A: In the first line of your while:

rain.append(rainfall)

at this point, since you didn't reassign it, rainfall is still the value that you set it to earlier:

rainfall = 0

and your while runs for the numbers 1, 2, 3, 4, 5, 6, 7, since those are the integers < 8.
A: This is the correct version for your aim.

rain = []
counter = 1

while counter <= 7:
    rainfall = float(input("Enter the rainfall of day {0}: ".format(counter)))
    rain.append(rainfall)
    counter += 1
print(rain)

You were appending the default parameter you created for rainfall; there is no need to set a default rainfall at all.
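Since the question's stated goal is the total and average rainfall, here is a short follow-on sketch using the corrected rain list from the answer above (plain built-ins only):

total = sum(rain)
average = total / len(rain)   # len(rain) is 7 here
print("Total rainfall: {0}".format(total))
print("Average rainfall: {0:.2f}".format(average))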
Why does my while loop omit the last input and add a 0 to the list?
I want to build a program that takes the amount of rainfall each day for 7 days and then outputs the total and average rainfall for those days. Initially, I've created a while loop to take the input:

rainfall = 0
rain = []
counter = 1

while counter < 8:
    rain.append(rainfall)
    rainfall = float(input("Enter the rainfall of day {0}: ".format(counter)))
    counter += 1
print(rain)

But the output that is generated is not what I expected:

[0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

It enters a 0 as the first value and then omits the last input (here the input is 1 to 7 as an example).
[ "In the first line of your while:\nrain.append(rainfall)\n\nat this point since you didn't reassign it rainfall is still the value that you set it to earlier:\nrainfall = 0\n\nand your while runs for the numbers\n1, 2, 3, 4, 5, 6, 7\n\nsince those are the integers < 8\n", "This is the correct version for your aim.\nrain = []\ncounter = 1\n\nwhile counter <= 7:\n rainfall = float(input(\"Enter the rainfall of day {0}: \".format(counter)))\n rain.append(rainfall)\n counter += 1\nprint(rain)\n\nYou were passing default parameter you created for rainfall.\nNo need to set default rainfall.\n" ]
[ 1, 1 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074668944_list_python.txt
Q: How to find the index of the last odd number in a list, without reversing the list? Have this so far, and essentially want to get there is something wrong with the position of last_odd as the compiler says the pop index is out of range? def remove_last_odd(numbers): has_odd = False last_odd = 0 for num in range(len(numbers)): if numbers[num] % 2 == 1: has_odd = True last_odd = numbers[num] if has_odd: numbers.pop(last_odd) numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6] A: As @DeepSpace said, list.pop will Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list. So basically, the solution to your problem would be to replace last_odd = numbers[num] with last_odd = num. A: list.pop() receive index as argument and remove the value. So, last_odd should be assigned to num instead of numbers[num] Your function doesn't have return value yet. It should return numbers list. def remove_last_odd(numbers): for num in range(len(numbers)): if numbers[num] % 2 == 1: last_odd = num numbers.pop(last_odd) return numbers numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6] print(remove_last_odd(numbers)) # [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6] Or iterating numbers value and using remove() method instead of pop(): def remove_last_odd(numbers): for num in numbers: if num % 2: last_odd = num numbers.remove(last_odd) return numbers numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6] print(remove_last_odd(numbers)) # [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6] A: No need to reverse the list but you can search it in reverse. def remove_last_odd(numbers): for i in range(len(numbers)-1, -1, -1): if numbers[i] & 1: numbers.pop(i) break numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6] remove_last_odd(numbers) print(numbers) Output: [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6] Option: If you insist on a 'forward' search then: def remove_last_odd(numbers): ri = -1 for i, v in enumerate(numbers): if v & 1: ri = i if ri >= 0: numbers.pop(ri) A: To get the index of the last odd element you can use the reversed() iterator which will not reverse the original list object. A quick way to get the index is: >>> numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6] >>> -next(filter(lambda v: v[1]%2==1, enumerate(reversed(numbers))))[0] -1 Even for very large lists with a lot of even numbers at the end the result will be delivered quite quick (compared with the proposed code): >>> from timeit import timeit >>> def find_last_odd(numbers): for num in range(len(numbers)): if numbers[num] % 2 == 1: last_odd = num return last_odd >>> numbers2=numbers+([2]*10000) # create a list with 10000 even numbers at the end! >>> timeit(lambda: find_last_odd(numbers2),number=100) 0.5675344999999936 >>> timeit.timeit(lambda: -next(filter(lambda v: v[1]%2==1, enumerate(reversed(numbers2)))).__getitem__(0),number=100) 0.10892959999998197 A: Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list. So basically, the solution to your problem would be to replace last_odd = numbers[num] with last_odd = num.
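One more compact variant in the same spirit — not from the original answers, just a sketch: scan forward with enumerate, remember the last odd index, and guard the case where no odd number exists.

def remove_last_odd(numbers):
    # max() over the generator of odd indices; default=None if no odd value
    idx = max((i for i, v in enumerate(numbers) if v % 2), default=None)
    if idx is not None:
        numbers.pop(idx)

numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6]
remove_last_odd(numbers)
print(numbers)  # [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6]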
How to find the index of the last odd number in a list, without reversing the list?
Have this so far, and essentially want to remove the last odd number from the list; there is something wrong with the position of last_odd, as the interpreter says the pop index is out of range.

def remove_last_odd(numbers):
    has_odd = False
    last_odd = 0
    for num in range(len(numbers)):
        if numbers[num] % 2 == 1:
            has_odd = True
            last_odd = numbers[num]
    if has_odd:
        numbers.pop(last_odd)

numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6]
[ "As @DeepSpace said, list.pop will\n\nRemove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list.\n\nSo basically, the solution to your problem would be to replace last_odd = numbers[num] with last_odd = num.\n", "list.pop() receive index as argument and remove the value. So, last_odd should be assigned to num instead of numbers[num]\nYour function doesn't have return value yet. It should return numbers list.\ndef remove_last_odd(numbers):\n for num in range(len(numbers)):\n if numbers[num] % 2 == 1:\n last_odd = num\n numbers.pop(last_odd)\n return numbers\n\nnumbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6]\nprint(remove_last_odd(numbers))\n\n# [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6]\n\nOr iterating numbers value and using remove() method instead of pop():\ndef remove_last_odd(numbers):\n for num in numbers:\n if num % 2: last_odd = num\n numbers.remove(last_odd)\n return numbers\n\nnumbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6]\nprint(remove_last_odd(numbers))\n\n# [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6]\n\n", "No need to reverse the list but you can search it in reverse.\ndef remove_last_odd(numbers):\n for i in range(len(numbers)-1, -1, -1):\n if numbers[i] & 1:\n numbers.pop(i)\n break\n\nnumbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6]\n\nremove_last_odd(numbers)\n\nprint(numbers)\n\nOutput:\n[1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 6]\n\nOption:\nIf you insist on a 'forward' search then:\ndef remove_last_odd(numbers):\n ri = -1\n for i, v in enumerate(numbers):\n if v & 1:\n ri = i\n if ri >= 0:\n numbers.pop(ri)\n\n", "To get the index of the last odd element you can use the reversed() iterator which will not reverse the original list object.\nA quick way to get the index is:\n >>> numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 6]\n >>> -next(filter(lambda v: v[1]%2==1, enumerate(reversed(numbers))))[0] \n -1\n\nEven for very large lists with a lot of even numbers at the end the result will be delivered quite quick (compared with the proposed code):\n >>> from timeit import timeit\n >>> def find_last_odd(numbers):\n for num in range(len(numbers)):\n if numbers[num] % 2 == 1:\n last_odd = num\n return last_odd\n >>> numbers2=numbers+([2]*10000) # create a list with 10000 even numbers at the end!\n >>> timeit(lambda: find_last_odd(numbers2),number=100)\n 0.5675344999999936\n >>> timeit.timeit(lambda: -next(filter(lambda v: v[1]%2==1, enumerate(reversed(numbers2)))).__getitem__(0),number=100)\n 0.10892959999998197\n \n\n", "Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list.\nSo basically, the solution to your problem would be to replace last_odd = numbers[num] with last_odd = num.\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "iteration", "python" ]
stackoverflow_0074668354_iteration_python.txt
Q: Plotting images side by side using matplotlib I was wondering how I am able to plot images side by side using matplotlib for example something like this: The closest I got is this: This was produced by using this code: f, axarr = plt.subplots(2,2) axarr[0,0] = plt.imshow(image_datas[0]) axarr[0,1] = plt.imshow(image_datas[1]) axarr[1,0] = plt.imshow(image_datas[2]) axarr[1,1] = plt.imshow(image_datas[3]) But I can't seem to get the other images to show. I'm thinking that there must be a better way to do this as I would imagine trying to manage the indexes would be a pain. I have looked through the documentation although I have a feeling I may be look at the wrong one. Would anyone be able to provide me with an example or point me in the right direction? EDIT: See the answer from @duhaime if you want a function to automatically determine the grid size. A: The problem you face is that you try to assign the return of imshow (which is an matplotlib.image.AxesImage to an existing axes object. The correct way of plotting image data to the different axes in axarr would be f, axarr = plt.subplots(2,2) axarr[0,0].imshow(image_datas[0]) axarr[0,1].imshow(image_datas[1]) axarr[1,0].imshow(image_datas[2]) axarr[1,1].imshow(image_datas[3]) The concept is the same for all subplots, and in most cases the axes instance provide the same methods than the pyplot (plt) interface. E.g. if ax is one of your subplot axes, for plotting a normal line plot you'd use ax.plot(..) instead of plt.plot(). This can actually be found exactly in the source from the page you link to. A: One thing that I found quite helpful to use to print all images : _, axs = plt.subplots(n_row, n_col, figsize=(12, 12)) axs = axs.flatten() for img, ax in zip(imgs, axs): ax.imshow(img) plt.show() A: You are plotting all your images on one axis. What you want ist to get a handle for each axis individually and plot your images there. Like so: fig = plt.figure() ax1 = fig.add_subplot(2,2,1) ax1.imshow(...) ax2 = fig.add_subplot(2,2,2) ax2.imshow(...) ax3 = fig.add_subplot(2,2,3) ax3.imshow(...) ax4 = fig.add_subplot(2,2,4) ax4.imshow(...) For more info have a look here: http://matplotlib.org/examples/pylab_examples/subplots_demo.html For complex layouts, you should consider using gridspec: http://matplotlib.org/users/gridspec.html A: If the images are in an array and you want to iterate through each element and print it, you can write the code as follows: plt.figure(figsize=(10,10)) # specifying the overall grid size for i in range(25): plt.subplot(5,5,i+1) # the number of images in the grid is 5*5 (25) plt.imshow(the_array[i]) plt.show() Also note that I used subplot and not subplots. They're both different A: Below is a complete function show_image_list() that displays images side-by-side in a grid. You can invoke the function with different arguments. Pass in a list of images, where each image is a Numpy array. It will create a grid with 2 columns by default. It will also infer if each image is color or grayscale. list_images = [img, gradx, grady, mag_binary, dir_binary] show_image_list(list_images, figsize=(10, 10)) Pass in a list of images, a list of titles for each image, and other arguments. 
show_image_list(list_images=[img, gradx, grady, mag_binary, dir_binary], list_titles=['original', 'gradx', 'grady', 'mag_binary', 'dir_binary'], num_cols=3, figsize=(20, 10), grid=False, title_fontsize=20) Here's the code: import matplotlib.pyplot as plt import numpy as np def img_is_color(img): if len(img.shape) == 3: # Check the color channels to see if they're all the same. c1, c2, c3 = img[:, : , 0], img[:, :, 1], img[:, :, 2] if (c1 == c2).all() and (c2 == c3).all(): return True return False def show_image_list(list_images, list_titles=None, list_cmaps=None, grid=True, num_cols=2, figsize=(20, 10), title_fontsize=30): ''' Shows a grid of images, where each image is a Numpy array. The images can be either RGB or grayscale. Parameters: ---------- images: list List of the images to be displayed. list_titles: list or None Optional list of titles to be shown for each image. list_cmaps: list or None Optional list of cmap values for each image. If None, then cmap will be automatically inferred. grid: boolean If True, show a grid over each image num_cols: int Number of columns to show. figsize: tuple of width, height Value to be passed to pyplot.figure() title_fontsize: int Value to be passed to set_title(). ''' assert isinstance(list_images, list) assert len(list_images) > 0 assert isinstance(list_images[0], np.ndarray) if list_titles is not None: assert isinstance(list_titles, list) assert len(list_images) == len(list_titles), '%d imgs != %d titles' % (len(list_images), len(list_titles)) if list_cmaps is not None: assert isinstance(list_cmaps, list) assert len(list_images) == len(list_cmaps), '%d imgs != %d cmaps' % (len(list_images), len(list_cmaps)) num_images = len(list_images) num_cols = min(num_images, num_cols) num_rows = int(num_images / num_cols) + (1 if num_images % num_cols != 0 else 0) # Create a grid of subplots. fig, axes = plt.subplots(num_rows, num_cols, figsize=figsize) # Create list of axes for easy iteration. if isinstance(axes, np.ndarray): list_axes = list(axes.flat) else: list_axes = [axes] for i in range(num_images): img = list_images[i] title = list_titles[i] if list_titles is not None else 'Image %d' % (i) cmap = list_cmaps[i] if list_cmaps is not None else (None if img_is_color(img) else 'gray') list_axes[i].imshow(img, cmap=cmap) list_axes[i].set_title(title, fontsize=title_fontsize) list_axes[i].grid(grid) for i in range(num_images, len(list_axes)): list_axes[i].set_visible(False) fig.tight_layout() _ = plt.show() A: As per matplotlib's suggestion for image grids: import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import ImageGrid fig = plt.figure(figsize=(4., 4.)) grid = ImageGrid(fig, 111, # similar to subplot(111) nrows_ncols=(2, 2), # creates 2x2 grid of axes axes_pad=0.1, # pad between axes in inch. ) for ax, im in zip(grid, image_data): # Iterating over the grid returns the Axes. ax.imshow(im) plt.show() A: I end up at this url about once a week. 
For those who want a little function that just plots a grid of images without hassle, here we go: import matplotlib.pyplot as plt import numpy as np def plot_image_grid(images, ncols=None, cmap='gray'): '''Plot a grid of images''' if not ncols: factors = [i for i in range(1, len(images)+1) if len(images) % i == 0] ncols = factors[len(factors) // 2] if len(factors) else len(images) // 4 + 1 nrows = int(len(images) / ncols) + int(len(images) % ncols) imgs = [images[i] if len(images) > i else None for i in range(nrows * ncols)] f, axes = plt.subplots(nrows, ncols, figsize=(3*ncols, 2*nrows)) axes = axes.flatten()[:len(imgs)] for img, ax in zip(imgs, axes.flatten()): if np.any(img): if len(img.shape) > 2 and img.shape[2] == 1: img = img.squeeze() ax.imshow(img, cmap=cmap) # make 16 images with 60 height, 80 width, 3 color channels images = np.random.rand(16, 60, 80, 3) # plot them plot_image_grid(images) A: Sample code to visualize one random image from the dataset def get_random_image(num): path=os.path.join("/content/gdrive/MyDrive/dataset/",images[num]) image=cv2.imread(path) return image Call the function images=os.listdir("/content/gdrive/MyDrive/dataset") random_num=random.randint(0, len(images)) img=get_random_image(random_num) plt.figure(figsize=(8,8)) plt.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB)) Display cluster of random images from the given dataset #Making a figure containing 16 images lst=random.sample(range(0,len(images)), 16) plt.figure(figsize=(12,12)) for index,value in enumerate(lst): img=get_random_image(value) img_resized=cv2.resize(img,(400,400)) #print(path) plt.subplot(4,4,index+1) plt.imshow(img_resized) plt.axis('off') plt.tight_layout() plt.subplots_adjust(wspace=0, hspace=0) #plt.savefig(f"Images/{lst[0]}.png") plt.show()
Plotting images side by side using matplotlib
I was wondering how I am able to plot images side by side using matplotlib, for example something like this:

The closest I got is this:

This was produced by using this code:

f, axarr = plt.subplots(2,2)
axarr[0,0] = plt.imshow(image_datas[0])
axarr[0,1] = plt.imshow(image_datas[1])
axarr[1,0] = plt.imshow(image_datas[2])
axarr[1,1] = plt.imshow(image_datas[3])

But I can't seem to get the other images to show. I'm thinking that there must be a better way to do this as I would imagine trying to manage the indexes would be a pain. I have looked through the documentation although I have a feeling I may be looking at the wrong one. Would anyone be able to provide me with an example or point me in the right direction?
EDIT: See the answer from @duhaime if you want a function to automatically determine the grid size.
[ "The problem you face is that you try to assign the return of imshow (which is an matplotlib.image.AxesImage to an existing axes object. \nThe correct way of plotting image data to the different axes in axarr would be\nf, axarr = plt.subplots(2,2)\naxarr[0,0].imshow(image_datas[0])\naxarr[0,1].imshow(image_datas[1])\naxarr[1,0].imshow(image_datas[2])\naxarr[1,1].imshow(image_datas[3])\n\nThe concept is the same for all subplots, and in most cases the axes instance provide the same methods than the pyplot (plt) interface. \nE.g. if ax is one of your subplot axes, for plotting a normal line plot you'd use ax.plot(..) instead of plt.plot(). This can actually be found exactly in the source from the page you link to. \n", "One thing that I found quite helpful to use to print all images :\n_, axs = plt.subplots(n_row, n_col, figsize=(12, 12))\naxs = axs.flatten()\nfor img, ax in zip(imgs, axs):\n ax.imshow(img)\nplt.show()\n\n", "You are plotting all your images on one axis. What you want ist to get a handle for each axis individually and plot your images there. Like so:\nfig = plt.figure()\nax1 = fig.add_subplot(2,2,1)\nax1.imshow(...)\nax2 = fig.add_subplot(2,2,2)\nax2.imshow(...)\nax3 = fig.add_subplot(2,2,3)\nax3.imshow(...)\nax4 = fig.add_subplot(2,2,4)\nax4.imshow(...)\n\nFor more info have a look here: http://matplotlib.org/examples/pylab_examples/subplots_demo.html\nFor complex layouts, you should consider using gridspec: http://matplotlib.org/users/gridspec.html\n", "If the images are in an array and you want to iterate through each element and print it, you can write the code as follows:\nplt.figure(figsize=(10,10)) # specifying the overall grid size\n\nfor i in range(25):\n plt.subplot(5,5,i+1) # the number of images in the grid is 5*5 (25)\n plt.imshow(the_array[i])\n\nplt.show()\n\nAlso note that I used subplot and not subplots. They're both different\n", "Below is a complete function show_image_list() that displays images side-by-side in a grid. You can invoke the function with different arguments.\n\nPass in a list of images, where each image is a Numpy array. It will create a grid with 2 columns by default. It will also infer if each image is color or grayscale.\n\nlist_images = [img, gradx, grady, mag_binary, dir_binary]\n\nshow_image_list(list_images, figsize=(10, 10))\n\n\n\nPass in a list of images, a list of titles for each image, and other arguments.\n\nshow_image_list(list_images=[img, gradx, grady, mag_binary, dir_binary], \n list_titles=['original', 'gradx', 'grady', 'mag_binary', 'dir_binary'],\n num_cols=3,\n figsize=(20, 10),\n grid=False,\n title_fontsize=20)\n\n\nHere's the code:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef img_is_color(img):\n\n if len(img.shape) == 3:\n # Check the color channels to see if they're all the same.\n c1, c2, c3 = img[:, : , 0], img[:, :, 1], img[:, :, 2]\n if (c1 == c2).all() and (c2 == c3).all():\n return True\n\n return False\n\ndef show_image_list(list_images, list_titles=None, list_cmaps=None, grid=True, num_cols=2, figsize=(20, 10), title_fontsize=30):\n '''\n Shows a grid of images, where each image is a Numpy array. The images can be either\n RGB or grayscale.\n\n Parameters:\n ----------\n images: list\n List of the images to be displayed.\n list_titles: list or None\n Optional list of titles to be shown for each image.\n list_cmaps: list or None\n Optional list of cmap values for each image. 
If None, then cmap will be\n automatically inferred.\n grid: boolean\n If True, show a grid over each image\n num_cols: int\n Number of columns to show.\n figsize: tuple of width, height\n Value to be passed to pyplot.figure()\n title_fontsize: int\n Value to be passed to set_title().\n '''\n\n assert isinstance(list_images, list)\n assert len(list_images) > 0\n assert isinstance(list_images[0], np.ndarray)\n\n if list_titles is not None:\n assert isinstance(list_titles, list)\n assert len(list_images) == len(list_titles), '%d imgs != %d titles' % (len(list_images), len(list_titles))\n\n if list_cmaps is not None:\n assert isinstance(list_cmaps, list)\n assert len(list_images) == len(list_cmaps), '%d imgs != %d cmaps' % (len(list_images), len(list_cmaps))\n\n num_images = len(list_images)\n num_cols = min(num_images, num_cols)\n num_rows = int(num_images / num_cols) + (1 if num_images % num_cols != 0 else 0)\n\n # Create a grid of subplots.\n fig, axes = plt.subplots(num_rows, num_cols, figsize=figsize)\n \n # Create list of axes for easy iteration.\n if isinstance(axes, np.ndarray):\n list_axes = list(axes.flat)\n else:\n list_axes = [axes]\n\n for i in range(num_images):\n\n img = list_images[i]\n title = list_titles[i] if list_titles is not None else 'Image %d' % (i)\n cmap = list_cmaps[i] if list_cmaps is not None else (None if img_is_color(img) else 'gray')\n \n list_axes[i].imshow(img, cmap=cmap)\n list_axes[i].set_title(title, fontsize=title_fontsize) \n list_axes[i].grid(grid)\n\n for i in range(num_images, len(list_axes)):\n list_axes[i].set_visible(False)\n\n fig.tight_layout()\n _ = plt.show()\n\n\n", "As per matplotlib's suggestion for image grids:\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import ImageGrid\n\nfig = plt.figure(figsize=(4., 4.))\ngrid = ImageGrid(fig, 111, # similar to subplot(111)\n nrows_ncols=(2, 2), # creates 2x2 grid of axes\n axes_pad=0.1, # pad between axes in inch.\n )\n\nfor ax, im in zip(grid, image_data):\n # Iterating over the grid returns the Axes.\n ax.imshow(im)\n\nplt.show()\n\n", "I end up at this url about once a week. 
For those who want a little function that just plots a grid of images without hassle, here we go:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef plot_image_grid(images, ncols=None, cmap='gray'):\n '''Plot a grid of images'''\n if not ncols:\n factors = [i for i in range(1, len(images)+1) if len(images) % i == 0]\n ncols = factors[len(factors) // 2] if len(factors) else len(images) // 4 + 1\n nrows = int(len(images) / ncols) + int(len(images) % ncols)\n imgs = [images[i] if len(images) > i else None for i in range(nrows * ncols)]\n f, axes = plt.subplots(nrows, ncols, figsize=(3*ncols, 2*nrows))\n axes = axes.flatten()[:len(imgs)]\n for img, ax in zip(imgs, axes.flatten()): \n if np.any(img):\n if len(img.shape) > 2 and img.shape[2] == 1:\n img = img.squeeze()\n ax.imshow(img, cmap=cmap)\n\n# make 16 images with 60 height, 80 width, 3 color channels\nimages = np.random.rand(16, 60, 80, 3)\n\n# plot them\nplot_image_grid(images)\n\n", "Sample code to visualize one random image from the dataset\ndef get_random_image(num):\n path=os.path.join(\"/content/gdrive/MyDrive/dataset/\",images[num])\n image=cv2.imread(path)\n return image\n\nCall the function\nimages=os.listdir(\"/content/gdrive/MyDrive/dataset\")\nrandom_num=random.randint(0, len(images))\nimg=get_random_image(random_num)\nplt.figure(figsize=(8,8))\nplt.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB))\n\nDisplay cluster of random images from the given dataset\n#Making a figure containing 16 images \nlst=random.sample(range(0,len(images)), 16)\nplt.figure(figsize=(12,12))\nfor index,value in enumerate(lst):\n img=get_random_image(value)\n img_resized=cv2.resize(img,(400,400))\n #print(path)\n plt.subplot(4,4,index+1)\n plt.imshow(img_resized)\n plt.axis('off')\n\nplt.tight_layout()\nplt.subplots_adjust(wspace=0, hspace=0)\n#plt.savefig(f\"Images/{lst[0]}.png\")\nplt.show() \n\n\n" ]
[ 148, 56, 32, 26, 13, 8, 4, 0 ]
[ "Plotting images present in a dataset\nHere rand gives a random index value which is used to select a random image present in the dataset and labels has the integer representation for every image type and labels_dict is a dictionary holding key val information\nfig,ax = plt.subplots(5,5,figsize = (15,15))\nax = ax.ravel()\nfor i in range(25):\n rand = np.random.randint(0,len(image_dataset))\n image = image_dataset[rand]\n ax[i].imshow(image,cmap = 'gray')\n ax[i].set_title(labels_dict[labels[rand]])\n \nplt.show()\n\n" ]
[ -2 ]
[ "matplotlib", "python" ]
stackoverflow_0041793931_matplotlib_python.txt
Q: Cannot use numba in the class
I recently wanted to use @njit(parallel=True) from the numba package to speed up my N-body simulation code, but when I move the original function out of the class, the code no longer works. How can I fix this problem?
The following block is the original method that calculates the acceleration.

def _calculate_acceleration(self, mass, pos, rsoft):
    """
    Calculate the acceleration.
    """
    # TODO:
    N = self.particles
    rsoft = self.rsoft
    posx = pos[:,0]
    posy = pos[:,1]
    posz = pos[:,2]
    G = self.G
    npts = self.nparticles
    acc = np.zeros((npts, 3))

    for i in prange(npts):
        for j in prange(npts):
            if (j>i):
                x = (posx[i]-posx[j])
                y = (posy[i]-posy[j])
                z = (posz[i]-posz[j])
                rsq = x**2 + y**2 + z**2
                req = np.sqrt(x**2 + y**2)

                f = -G*mass[i,0]*mass[j,0]/rsq

                theta = np.arctan2(y, x)
                phi = np.arctan2(z, req)
                fx = f*np.cos(theta)*np.cos(phi)
                fy = f*np.sin(theta)*np.cos(phi)
                fz = f*np.sin(phi)

                acc[i,0] += fx/mass[i]
                acc[i,1] += fy/mass[i]
                acc[i,2] += fz/mass[i]
                acc[j,0] -= fx/mass[j]
                acc[j,1] -= fy/mass[j]
                acc[j,2] -= fz/mass[j]
    return acc

def initialRandomParticles(N = 100, total_mass = 10):
    """
    Initial particles
    """
    particles = Particles(N)
    masses = particles.masses
    mass = total_mass/particles.nparticles
    particles.masses = (masses*mass)
    positions = np.random.randn(N,3)
    velocities = np.random.randn(N,3)
    accelerations = np.random.randn(N,3)
    particles.positions = positions
    particles.velocities = velocities
    particles.accelerations = accelerations
    return particles

particles = initialRandomParticles(N = 10**5, total_mass = 20)
sim = NbodySimulation(particles)
sim.setup(G=G, method="RK4", io_freq=200, io_title=problem_name, io_screen=True, visualized=False, rsoft=0.01)
sim.evolve(dt=0.01, tmax=10)

# Particles and NbodySimulation are classes defined elsewhere.

I moved the original function out of the class and defined another method to call it from inside the class, but it still does not work. The following is the new code, which lives outside the class.

@njit(nopython=True, parallel=True)
def _calculate_acceleration(n, npts, G, mass, pos, rsoft):
    """
    Calculate the acceleration. This function is out of the class.
    """
    # TODO:
    posx = pos[:,0]
    posy = pos[:,1]
    posz = pos[:,2]
    acc = np.zeros((n, 3))
    sqrt = np.sqrt

    for i in prange(npts):
        for j in prange(npts):
            if (j>i):
                x = (posx[i]-posx[j])
                y = (posy[i]-posy[j])
                z = (posz[i]-posz[j])
                rsq = x**2 + y**2 + z**2
                req = sqrt(x**2 + y**2 + z**2)
                f = -G*mass[i,0]*mass[j,0]/(req + rsoft)**2
                fx = f*x**2/rsq
                fy = f*y**2/rsq
                fz = f*z**2/rsq
                acc[i,0] = fx/mass[i] + acc[i,0]
                acc[i,1] = fy/mass[i] + acc[i,1]
                acc[i,2] = fz/mass[i] + acc[i,2]
                acc[j,0] = fx/mass[j] - acc[j,0]
                acc[j,1] = fy/mass[j] - acc[j,1]
                acc[j,2] = fz/mass[j] - acc[j,2]
    return acc

Here is the error:

TypingError                               Traceback (most recent call last)
:428, in NbodySimulation._update_particles_rk4(self, dt, particles)
    426 position = particles.positions   # y0[0]
    427 velocity = particles.velocities  # y0[1], k1[0]
--> 428 acceleration = self._calculate_acceleration_inclass()  # k1[1]
    430 position2 = position + 0.5*velocity * dt   # y1[0]
    431 velocity2 = velocity + 0.5*acceleration * dt  # y1[1], k2[0]

:381, in NbodySimulation._calculate_acceleration_inclass(self)
    377 def _calculate_acceleration_inclass(self):
    378     """
    379     Calculate the acceleration.
...
<source elided>
    acc[i,0] = fx/mass[i] + acc[i,0]
    ^

A: It seems you want to use the numba.njit decorator to make the loop run in parallel. Two things are needed for that: decorate the module-level function with parallel=True (njit already implies nopython mode, so that keyword is redundant), and use prange instead of the built-in range for the loop you want parallelized — numba only parallelizes the outermost prange, so the inner loop can stay an ordinary range. The TypingError itself comes from the division by mass[i]: mass has shape (N, 1), so mass[i] is a length-1 array, and numba cannot assign the resulting array into the scalar slot acc[i,0]. Index the scalar explicitly with mass[i,0]:

from numba import njit, prange
import numpy as np

@njit(parallel=True)
def _calculate_acceleration(n, npts, G, mass, pos, rsoft):
    """
    Calculate the acceleration. This function lives outside the class.
    """
    posx = pos[:,0]
    posy = pos[:,1]
    posz = pos[:,2]
    acc = np.zeros((n, 3))

    for i in prange(npts):
        for j in range(npts):
            if (j>i):
                x = (posx[i]-posx[j])
                y = (posy[i]-posy[j])
                z = (posz[i]-posz[j])
                rsq = x**2 + y**2 + z**2
                req = np.sqrt(x**2 + y**2)

                f = -G*mass[i,0]*mass[j,0]/rsq

                theta = np.arctan2(y, x)
                phi = np.arctan2(z, req)
                fx = f*np.cos(theta)*np.cos(phi)
                fy = f*np.sin(theta)*np.cos(phi)
                fz = f*np.sin(phi)

                # mass is (N, 1): mass[i] is a 1-element array, which is what
                # triggered the TypingError; mass[i,0] gives the scalar
                acc[i,0] += fx/mass[i,0]
                acc[i,1] += fy/mass[i,0]
                acc[i,2] += fz/mass[i,0]
                acc[j,0] -= fx/mass[j,0]
                acc[j,1] -= fy/mass[j,0]
                acc[j,2] -= fz/mass[j,0]
    return acc
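One caveat the answer above does not mention: with parallel=True, the symmetric update of row j (the acc[j,0] -= ... lines) is a data race, because several parallel i iterations can write the same row j at once. Below is a minimal race-free sketch — it recomputes each pair twice, and the class wiring (attribute names, the wrapper method) is my assumption about the code, not taken from the question:

from numba import njit, prange
import numpy as np

@njit(parallel=True)
def calculate_acceleration_kernel(G, mass, pos, rsoft):
    """Race-free O(N^2) kernel: each parallel iteration writes only its own row i."""
    npts = pos.shape[0]
    acc = np.zeros((npts, 3))
    for i in prange(npts):
        for j in range(npts):
            if j != i:
                x = pos[i,0] - pos[j,0]
                y = pos[i,1] - pos[j,1]
                z = pos[i,2] - pos[j,2]
                r = np.sqrt(x*x + y*y + z*z) + rsoft   # softening avoids r == 0
                a = -G * mass[j,0] / (r*r*r)           # |a| = G*m_j/r^2, along r_vec/r
                acc[i,0] += a * x
                acc[i,1] += a * y
                acc[i,2] += a * z
    return acc

# Hypothetical wrapper inside NbodySimulation: pass plain NumPy arrays,
# since jitted code cannot type `self`
def _calculate_acceleration_inclass(self):
    return calculate_acceleration_kernel(self.G,
                                         self.particles.masses,
                                         self.particles.positions,
                                         self.rsoft)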
Cannot use numba in the class
I recently wanted to use @njit(parallel=True) from the numba package to speed up my N-body simulation code, but when I move the original function out of the class, the code no longer works. How can I fix this problem?
The following block is the original method that calculates the acceleration.

def _calculate_acceleration(self, mass, pos, rsoft):
    """
    Calculate the acceleration.
    """
    # TODO:
    N = self.particles
    rsoft = self.rsoft
    posx = pos[:,0]
    posy = pos[:,1]
    posz = pos[:,2]
    G = self.G
    npts = self.nparticles
    acc = np.zeros((npts, 3))

    for i in prange(npts):
        for j in prange(npts):
            if (j>i):
                x = (posx[i]-posx[j])
                y = (posy[i]-posy[j])
                z = (posz[i]-posz[j])
                rsq = x**2 + y**2 + z**2
                req = np.sqrt(x**2 + y**2)

                f = -G*mass[i,0]*mass[j,0]/rsq

                theta = np.arctan2(y, x)
                phi = np.arctan2(z, req)
                fx = f*np.cos(theta)*np.cos(phi)
                fy = f*np.sin(theta)*np.cos(phi)
                fz = f*np.sin(phi)

                acc[i,0] += fx/mass[i]
                acc[i,1] += fy/mass[i]
                acc[i,2] += fz/mass[i]
                acc[j,0] -= fx/mass[j]
                acc[j,1] -= fy/mass[j]
                acc[j,2] -= fz/mass[j]
    return acc

def initialRandomParticles(N = 100, total_mass = 10):
    """
    Initial particles
    """
    particles = Particles(N)
    masses = particles.masses
    mass = total_mass/particles.nparticles
    particles.masses = (masses*mass)
    positions = np.random.randn(N,3)
    velocities = np.random.randn(N,3)
    accelerations = np.random.randn(N,3)
    particles.positions = positions
    particles.velocities = velocities
    particles.accelerations = accelerations
    return particles

particles = initialRandomParticles(N = 10**5, total_mass = 20)
sim = NbodySimulation(particles)
sim.setup(G=G, method="RK4", io_freq=200, io_title=problem_name, io_screen=True, visualized=False, rsoft=0.01)
sim.evolve(dt=0.01, tmax=10)

# Particles and NbodySimulation are classes defined elsewhere.

I moved the original function out of the class and defined another method to call it from inside the class, but it still does not work. The following is the new code, which lives outside the class.

@njit(nopython=True, parallel=True)
def _calculate_acceleration(n, npts, G, mass, pos, rsoft):
    """
    Calculate the acceleration. This function is out of the class.
    """
    # TODO:
    posx = pos[:,0]
    posy = pos[:,1]
    posz = pos[:,2]
    acc = np.zeros((n, 3))
    sqrt = np.sqrt

    for i in prange(npts):
        for j in prange(npts):
            if (j>i):
                x = (posx[i]-posx[j])
                y = (posy[i]-posy[j])
                z = (posz[i]-posz[j])
                rsq = x**2 + y**2 + z**2
                req = sqrt(x**2 + y**2 + z**2)
                f = -G*mass[i,0]*mass[j,0]/(req + rsoft)**2
                fx = f*x**2/rsq
                fy = f*y**2/rsq
                fz = f*z**2/rsq
                acc[i,0] = fx/mass[i] + acc[i,0]
                acc[i,1] = fy/mass[i] + acc[i,1]
                acc[i,2] = fz/mass[i] + acc[i,2]
                acc[j,0] = fx/mass[j] - acc[j,0]
                acc[j,1] = fy/mass[j] - acc[j,1]
                acc[j,2] = fz/mass[j] - acc[j,2]
    return acc

Here is the error:

TypingError                               Traceback (most recent call last)
:428, in NbodySimulation._update_particles_rk4(self, dt, particles)
    426 position = particles.positions   # y0[0]
    427 velocity = particles.velocities  # y0[1], k1[0]
--> 428 acceleration = self._calculate_acceleration_inclass()  # k1[1]
    430 position2 = position + 0.5*velocity * dt   # y1[0]
    431 velocity2 = velocity + 0.5*acceleration * dt  # y1[1], k2[0]

:381, in NbodySimulation._calculate_acceleration_inclass(self)
    377 def _calculate_acceleration_inclass(self):
    378     """
    379     Calculate the acceleration.
...
<source elided>
    acc[i,0] = fx/mass[i] + acc[i,0]
    ^
[ "It seems like you want to use the numba.njit decorator to speed up your code by making it run in parallel. To use numba.njit with parallel execution, you will need to specify the parallel=True keyword argument when you decorate your function. Additionally, you will need to use the prange function instead of the built-in range function in order to specify which loops should be executed in parallel.\nfrom numba import njit, prange\n\n@njit(nopython=True, parallel=True)\ndef _calculate_acceleration(n, npts, G, mass, pos, rsoft):\n \"\"\"\n Calculate the acceleration. This function is out of the class.\n \"\"\"\n # TODO:\n posx = pos[:,0]\n posy = pos[:,1]\n posz = pos[:,2]\n acc = np.zeros((n, 3))\n sqrt = np.sqrt\n\n for i in prange(npts):\n for j in prange(npts):\n if (j>i): \n x = (posx[i]-posx[j])\n y = (posy[i]-posy[j])\n z = (posz[i]-posz[j])\n rsq = x**2 + y**2 + z**2\n req = sqrt(x**2 + y**2)\n\n f = -G*mass[i,0]*mass[j,0]/rsq\n \n theta = np.arctan2(y, x)\n phi = np.arctan2(z, req)\n fx = f*np.cos(theta)*np.cos(phi)\n fy = f*np.sin(theta)*np.cos(phi)\n fz = f*np.sin(phi)\n\n acc[i,0] += fx/mass[i]\n acc[i,1] += fy/mass[i]\n acc[i,2] += fz/mass[i]\n acc[j,0] -= fx/mass[j]\n acc[j,1] -= fy/mass[j]\n acc[j,2] -= fz/mass[j]\n return acc\n\n" ]
[ 0 ]
[]
[]
[ "jit", "jupyter_notebook", "numba", "python" ]
stackoverflow_0074668846_jit_jupyter_notebook_numba_python.txt
Q: How to put a limit for the player in the game
I have this game; at the moment any number of players can join, but I want to require a minimum of 2 and a maximum of 5.

from dataclasses import dataclass

@dataclass
class Player:
    firstname: str
    lastname: str
    coins: int
    slot: int

    def full_info(self) -> str:
        return f"{self.firstname} {self.lastname} {self.coins} {self.slot}"

    @classmethod
    def from_user_input(cls) -> 'Player':
        return cls(
            firstname=input("Please enter your first name:"),
            lastname=input("Please enter your second name: "),
            coins=100,
            slot=0)

n = int(input("Number of players:"))
playersingame = []
for i in range(n):
    playersingame.append(Player.from_user_input())

print([player.full_info() for player in playersingame])

I tried replacing lines 19 to 23 with

max_players = 0
while (max_players < 2) or (max_players > 5):
    max_players = int(input(" Please choose a number of players between 2 and 5. "))
while len(players_dict) < max_players:

The expected behaviour is to let the user choose the number of players (minimum 2, maximum 5). If 1 or any number above 5 is entered, it should say "please choose a number between 2 and 5".

A: You can write something like this:

number_of_players = int(input("Number of players:"))
while not (2 <= number_of_players <= 5):
    print("please choose a number between 2 and 5")
    number_of_players = int(input("Number of players: "))

Good luck :)
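A small refinement on top of the pattern above — the helper name is mine, not from the answer: int(input(...)) raises ValueError on non-numeric input, so catching it keeps the prompt alive instead of crashing:

def ask_player_count(low: int = 2, high: int = 5) -> int:
    """Prompt until the user enters an integer between low and high (inclusive)."""
    while True:
        raw = input(f"Number of players ({low}-{high}): ")
        try:
            count = int(raw)
        except ValueError:
            print(f"Please choose a number between {low} and {high}.")
            continue
        if low <= count <= high:
            return count
        print(f"Please choose a number between {low} and {high}.")

playersingame = [Player.from_user_input() for _ in range(ask_player_count())]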
How to put a limit for the player in the game
I have this game; at the moment any number of players can join, but I want to require a minimum of 2 and a maximum of 5.

from dataclasses import dataclass

@dataclass
class Player:
    firstname: str
    lastname: str
    coins: int
    slot: int

    def full_info(self) -> str:
        return f"{self.firstname} {self.lastname} {self.coins} {self.slot}"

    @classmethod
    def from_user_input(cls) -> 'Player':
        return cls(
            firstname=input("Please enter your first name:"),
            lastname=input("Please enter your second name: "),
            coins=100,
            slot=0)

n = int(input("Number of players:"))
playersingame = []
for i in range(n):
    playersingame.append(Player.from_user_input())

print([player.full_info() for player in playersingame])

I tried replacing lines 19 to 23 with

max_players = 0
while (max_players < 2) or (max_players > 5):
    max_players = int(input(" Please choose a number of players between 2 and 5. "))
while len(players_dict) < max_players:

The expected behaviour is to let the user choose the number of players (minimum 2, maximum 5). If 1 or any number above 5 is entered, it should say "please choose a number between 2 and 5".
[ "You can write something like this:\nnumber_of_players = int(input(\"Number of players:\"))\nwhile not (2 <= number_of_players <= 5):\n print(\"please chose a number between 2 and 5\")\n number_of_players = int(input(\"Number of players: \"))\n\nGood luck :)\n" ]
[ 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074669041_list_python.txt
Q: What is this connection error for roberta model?
I want to run a RoBERTa model, but I get a connection error. Here is the error:

Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from scipy.special import softmax

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

A: In case you have not resolved this yet: the problem is that your Kaggle notebook is not connected to the internet. Open the settings panel on the right of the screen and make sure the Internet toggle is on; Kaggle may ask you to verify your phone number first. Good luck.
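If the notebook must stay offline, one workaround — my addition, with placeholder paths — is to download the model once on a machine with internet access, save it to disk, and load it from that folder (e.g. uploaded as a Kaggle dataset):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# On a machine WITH internet access: download once and save locally
MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
AutoTokenizer.from_pretrained(MODEL).save_pretrained("twitter-roberta-base-sentiment")
AutoModelForSequenceClassification.from_pretrained(MODEL).save_pretrained("twitter-roberta-base-sentiment")

# In the offline notebook: load from the local folder only
tokenizer = AutoTokenizer.from_pretrained("twitter-roberta-base-sentiment", local_files_only=True)
model = AutoModelForSequenceClassification.from_pretrained("twitter-roberta-base-sentiment", local_files_only=True)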
What is this connection error for roberta model?
I want to run a RoBERTa model, but I get a connection error. Here is the error:

Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from scipy.special import softmax

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
[ "I wonder whether or not you have resolved this. But the problem is that your Kaggle notebook is not connected to the internet. You can try opening the setting on the right of your screen, and make sure your internet is toggled on. This may require you to verify your phone number. Good luck\n" ]
[ 0 ]
[]
[]
[ "data_science", "python" ]
stackoverflow_0074532943_data_science_python.txt
Q: Error with tatsu: does not recognize the right grammar pattern
I am getting started with TatSu and I am trying to implement a grammar for the miniML language. Once the grammar compiled successfully, I tried to parse some small expressions to check that it was working; however, I discovered TatSu was unable to recognize some of the expected patterns. Here is the code:

grammar = """
@@grammar::CALC

start = expression $ ;

expression =
    | integer
    | addition
    | soustraction
    | multiplication
    | division
    | Fst
    | Snd
    | pair
    | varname
    | assign
    | function
    | application
    | parentheses
    ;

integer = /\d+/ ;
addition = left:'+' right:pair ;
soustraction = '-' pair ;
multiplication = '*' pair ;
division = '/' pair ;
Fst = 'Fst' pair ;
Snd = 'Snd' pair ;
pair = '(' expression ',' expression ')' ;
varname = /[a-z]+/ ;
assign = varname '=' expression ';' expression ;
function = 'Lambda' varname ':' expression ;
application = ' '<{expression}+ ;
parentheses = '(' expression ')' ;
"""

which I then compiled with:

parser = tatsu.compile(grammar)

All of those expressions are successfully recognized, except the "assign" and the "application" ones. If I try something like this:

parser.parse("x=3;x+1")

I get this error message:

FailedExpectingEndOfText: (1:2) Expecting end of text :
x=3;x+1
 ^
start

and the same goes for an expression of the form "expression expression". What syntax error did I make here? I have no clue and I can't find anything about it in the documentation. Thanks in advance!

A: It seems the failure of assign comes from a conflict with the varname rule; to solve it, simply place |assign BEFORE |varname in your expression rule.

A now obsolete workaround, which I'll leave anyway:

# I added a negative lookahead for '=' so it will not conflict with the assign rule
varname = /[a-z]+/ !'=' ;

assign = /[a-z]+/ '=' expression ';' expression ;

Example:

parser.parse("x=1;+(x,1)")
# ['x', '=', '1', ';', AST({'left': '+', 'right': ['(', 'x', ',', '1', ')']})]

About application: replacing ' ' with / / at the start of the rule, and placing |application at the start of the expression rule, solves the problem:

parser.parse("1 2 (x=1;3) *(4,5)")
Out[207]:
(' ',
 '1',
 (' ',
  '2',
  (' ',
   ['(', ['x', '=', '1', ';', '3'], ')'],
   ['*', ['(', '4', ',', '5', ')']])))
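The underlying reason — my gloss, not spelled out in the answer — is that TatSu grammars are PEGs: alternatives in a choice are tried strictly in order, and the first one that matches wins for good. In "x=3;x+1", varname matches the leading x before assign is ever tried, so the parser then expects end of text at the '=' — which is exactly the FailedExpectingEndOfText above. A minimal, self-contained reproduction of the ordering effect:

import tatsu

# assign listed BEFORE varname, so the more specific alternative wins
grammar = r"""
@@grammar::DEMO
start = expression $ ;
expression = | assign | varname ;
varname = /[a-z]+/ ;
assign = varname '=' /\d+/ ;
"""

parser = tatsu.compile(grammar)
print(parser.parse("x=3"))  # parses as assign; swap the two alternatives and this
                            # fails with "Expecting end of text", as in the question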
Error with tatsu: does not recognize the right grammar pattern
I am getting started with TatSu and I am trying to implement a grammar for the miniML language. Once the grammar compiled successfully, I tried to parse some small expressions to check that it was working; however, I discovered TatSu was unable to recognize some of the expected patterns. Here is the code:

grammar = """
@@grammar::CALC

start = expression $ ;

expression =
    | integer
    | addition
    | soustraction
    | multiplication
    | division
    | Fst
    | Snd
    | pair
    | varname
    | assign
    | function
    | application
    | parentheses
    ;

integer = /\d+/ ;
addition = left:'+' right:pair ;
soustraction = '-' pair ;
multiplication = '*' pair ;
division = '/' pair ;
Fst = 'Fst' pair ;
Snd = 'Snd' pair ;
pair = '(' expression ',' expression ')' ;
varname = /[a-z]+/ ;
assign = varname '=' expression ';' expression ;
function = 'Lambda' varname ':' expression ;
application = ' '<{expression}+ ;
parentheses = '(' expression ')' ;
"""

which I then compiled with:

parser = tatsu.compile(grammar)

All of those expressions are successfully recognized, except the "assign" and the "application" ones. If I try something like this:

parser.parse("x=3;x+1")

I get this error message:

FailedExpectingEndOfText: (1:2) Expecting end of text :
x=3;x+1
 ^
start

and the same goes for an expression of the form "expression expression". What syntax error did I make here? I have no clue and I can't find anything about it in the documentation. Thanks in advance!
[ "\nIt seems the failure of assign comes from a conflict with the varname rule; to solve it, simply place |assign BEFORE |variable in your expression rule.\n\nA now obsolete workaround, that I'll leave anyway:\n# I added a negative lookahead for '=' so it will not conflict with the assign rule\nvarname = /[a-z]+/!'=' ;\n \nassign = /[a-z]+/ '=' expression ';' expression ;\n\nExample:\nparser.parse(\"x=1;+(x,1)\")\n# ['x', '=', '1', ';', AST({'left': '+', 'right': ['(', 'x', ',', '1', ')']})]\n\n\nAbout 'application' : replacing ' ' with / / at the start of the rule, and placing |application at the start of the expression rule solves the problem:\n\nparser.parse(\"1 2 (x=1;3) *(4,5)\")\nOut[207]: \n(' ',\n '1',\n (' ',\n '2',\n (' ',\n ['(', ['x', '=', '1', ';', '3'], ')'],\n ['*', ['(', '4', ',', '5', ')']])))\n\n" ]
[ 0 ]
[]
[]
[ "grammar", "parsing", "python", "tatsu" ]
stackoverflow_0074668215_grammar_parsing_python_tatsu.txt