"{\"inputs\":\"# Just read and run this cell.\\n\\naditya_height_m = 1.21\\nbotan_height_m = 1.85\\naverage_adult_human_height_m = 1.688\\n\\n# The biggest distance from the average human height, among the two heights:\\nbiggest_distance_m = max(abs(aditya_height_m - average_adult_human_height_m), abs(botan_height_m - average_adult_human_height_m))\\n\\n# Print out our results in a nice readable format:\\nprint(\\\"The biggest distance from the average height among these two people is\\\", biggest_distance_m, \\\"meters.\\\")\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n5.1. More nesting\\nNow say that we want to compute the most unusual height among Aditya's and Botan's heights. We'll use the function max, which (again) takes two numbers as arguments and returns the larger of the two arguments. Combining that with the abs function, we can compute the biggest distance from the average among the two heights:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"DV360 Automation: codelab\\nAuthor: Matt Lynam\\n\\nObjective\\nEnable Display & Video 360 (DV360) advertisers to increase workflow efficiency by utilising the right automation solution according to their needs, resources and technical capability.\\nGoals\\n* Provide an overview of the current automation suite available in DV360\\n* Demonstrate the capabilities and limitations of DV360's UI and APIs\\n* Explore common advertiser use cases and pitfalls\\n* Acquire hands-on experience by applying key concepts using a fictional case study\\n0) Setup and authentication\\nGoogle Colab primer\\nGoogle Colaboratory, or \\\"Colab\\\" for short, allows you to write and execute Python in your browser, with:\\n- Zero configuration required\\n- Free access to GPUs\\n- Easy sharing & colaboration \\nA notebook is a list of cells, containing either explanatory text or executable code and its output. This is a text cell. \\nUseful Colab tips\\n* Double-click within the cell to edit\\n* Code cells can be executed by clicking the Play icon in the left gutter of the cell; or with Cmd\\/Ctrl + Enter to run the cell in place;\\n* Use Cmd\\/Ctrl + \\/ to comment out a line of code\\n0.1 Install Python client libraries\\nRun the following block to install the latest Google Python Client Library and import additional libraries used for this workshop.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n!pip install google-api-python-client\\n!pip install google-cloud-vision\\n\\nimport csv\\nimport datetime\\nimport io\\nimport json\\nimport pprint\\n\\nfrom google.api_core import retry\\nfrom google.cloud import vision\\nfrom google.colab import files\\nfrom google_auth_oauthlib.flow import InstalledAppFlow\\nfrom googleapiclient import discovery\\nfrom googleapiclient import http\\nimport pandas as pd\\nimport requests\\n\\nprint('Successfully imported Python libraries!')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Adding spin echoes\\nDynamical decoupling applies a series of spin echoes to otherwise idle qubits to reduce decoherent effects. 
As mentioned above, spin echoes were used as an effective error mitigation technique in Information Scrambling in Computationally Complex Quantum Circuits, and the performance of any circuit with idle qubits can potentially be improved by adding spin echoes.\\nThe following codeblock shows how to insert spin echoes on the ancilla qubit.\\n\",\"targets\":\"# Gates for spin echoes. Note that these gates are self-inverse.\\npi_pulses = [\\n cirq.PhasedXPowGate(phase_exponent=p, exponent=1.0) for p in (-0.5, 0.0, 0.5, 1.0)\\n]\\n\\n# Generate spin echoes on ancilla.\\nnum_echoes = 3\\nrandom_state = np.random.RandomState(1)\\n\\nspin_echo = []\\nfor _ in range(num_echoes):\\n op = random_state.choice(pi_pulses).on(qubits[0])\\n spin_echo += [op, cirq.inverse(op)]\\n\\n# Insert spin echo operations to circuit.\\noptimized_circuit_with_spin_echoes = circuit.copy()\\noptimized_circuit_with_spin_echoes.insert(5, spin_echo)\\n\\n# Align single-qubit spin echo gates into other moments of single-qubit gates.\\noptimized_circuit_with_spin_echoes = cirq.stratified_circuit(\\n optimized_circuit_with_spin_echoes, \\n categories=[lambda op : len(op.qubits) == 1, lambda op : len(op.qubits) == 2]\\n)\\noptimized_circuit_with_spin_echoes\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"... now we're on the status page, get the page source and parse it with BeautifulSoup\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\npageSrc = browser.page_source\\npageSoup = Soup(pageSrc, 'lxml')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"15.4. Density\\nIs Required: TRUE Type: ENUM Cardinality: 1.1\\nDescription of the treatment of snow density\\n\",\"targets\":\"# PROPERTY ID - DO NOT EDIT ! \\nDOC.set_id('cmip6.land.snow.density') \\n\\n# PROPERTY VALUE: \\n# Set as follows: DOC.set_value(\\\"value\\\") \\n# Valid Choices: \\n# \\\"prognostic\\\" \\n# \\\"constant\\\" \\n# \\\"Other: [Please specify]\\\" \\nDOC.set_value(\\\"constant\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Análogamente podemos establecer los filtros usados en print_stats:\\n\",\"targets\":\"p_rcf_stats.strip_dirs().sort_stats(\\\"cumulative\\\").print_callers(10)\\n\\np_rcf_stats.strip_dirs().sort_stats(\\\"cumulative\\\").print_callees(\\\"Rcf|lambda\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"
\\n אם ננסה לעשות unpacking לאיבר שאינו iterable, תתקבל השגיאה הבאה:\\n<\\/p>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\na, b = 5\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Although there's no obvious relationship in this case, such analyses may be\\nuseful for metadata variables that more directly index the time course of\\nstimulus processing (such as reaction time).\\nAdding metadata to an Epochs object\\nYou can add a metadata :class:~pandas.DataFrame to any\\n~mne.Epochs object (or replace existing metadata) simply by\\nassigning to the :attr:~mne.Epochs.metadata attribute:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nnew_metadata = pd.DataFrame(data=['foo'] * len(epochs), columns=['bar'],\\n index=range(len(epochs)))\\nepochs.metadata = new_metadata\\nepochs.metadata.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Calculate the derivative with respect to v\\n\\nprint(\\\"The partial derivative with respect to u: \\\", v.grad)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nthe expression is given by:\\n$\\\\frac{\\\\mathrm{\\\\partial f(u,v)}}{\\\\partial {u}}=v+2u$\\n$\\\\frac{\\\\mathrm{\\\\partial f(u=1,v=2)}}{\\\\partial {u}}=2+2(1)=4$\\n\\n\\nNow, take the derivative with respect to Remez and least squares designs have advantages when there are\\n \\\"do not care\\\" regions in our frequency response. However, we want\\n well controlled responses in all frequency regions.\\n Frequency-domain construction is good when an arbitrary response\\n is desired, but generally less clean (due to sampling issues) than\\n a windowed approach for more straightforward filter applications.\\n Since our filters (low-pass, high-pass, band-pass, band-stop)\\n are fairly simple and we require precise control of all frequency\\n regions, we will primarily use and explore windowed FIR design.<\\/p><\\/div>\\n\\nIf we relax our frequency-domain filter requirements a little bit, we can\\nuse these functions to construct a lowpass filter that instead has a\\ntransition band, or a region between the pass frequency $f_p$\\nand stop frequency $f_s$, e.g.:\\nCan you write Python code for it?\\n\",\"targets\":\"\\ntrans_bandwidth = 10 # 10 Hz transition band\\nf_s = f_p + trans_bandwidth # = 50 Hz\\n\\nfreq = [0., f_p, f_s, nyq]\\ngain = [1., 1., 0., 0.]\\nax = plt.subplots(1, figsize=third_height)[1]\\ntitle = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)\\nplot_ideal_filter(freq, gain, ax, title=title, flim=flim)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"1.1 Model for one patient recieving a single dose\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Simulate data\\nstates = ode_map(k1 = theta[0], k2 = theta[1])\\nsigma = 0.1\\nlog_y = sigma * random.normal(random.PRNGKey(37272710), (states.shape[0],)) \\\\\\n + jnp.log(states)\\n\\ny = jnp.exp(log_y)\\n# print(y)\\n\\nfigure(figsize = [6, 6])\\nplot(t[1:], states)\\nplot(t[1:], y, 'o')\\nshow()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# dipo_min = dipo[phi_min, gam_min, the_min]\\n# dipo_ci = dipo[phi_ci, gam_ci, the_ci]\\n# difference_dipo = dipo_ci - dipo_min\\n\\n# for i in range(8):\\n# permanent = difference_dipo[:,i,i]\\n# print('S_{} -> {}'.format(i,permanent))\\n\\n# dipo_min[:,1,2],dipo_min[:,0,1],dipo_min[:,0,6],dipo_min[:,0,3],dipo_min[:,0,2],dipo_min[:,0,7]\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nHere I check the direction of the permanent dipoles.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"!python test.py --reconstruct --data data\\/yelp\\/test.txt --output test.rec --checkpoint checkpoints\\/yelp\\/daae\\/\\n\\n\\n\\n!ls checkpoints\\/yelp\\/daae\\n\\n!head checkpoints\\/yelp\\/daae\\/test.rec.rec\\n\\n!head checkpoints\\/yelp\\/daae\\/test.rec.z\\n\\n!head data\\/yelp\\/test.txt\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nReconstruction\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"DIVISÃO EM ZONA RURAL E URBANA, A SEGUNDA VARIÁVEL DE ANÁLISE\\n\",\"targets\":\"base.loc[(base.V4105<4),\\\"ZONA\\\"]=\\\"Urbana\\\"\\nbase.loc[(base.V4105>3),\\\"ZONA\\\"]=\\\"Rural\\\"\\nbase.ZONA=base.ZONA.astype(\\\"category\\\")\\n\\nbase9.loc[(base9.V4105<4),\\\"ZONA\\\"]=\\\"Urbana\\\"\\nbase9.loc[(base9.V4105>3),\\\"ZONA\\\"]=\\\"Rural\\\"\\nbase9.ZONA=base9.ZONA.astype(\\\"category\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Although C gets more than half of the PageRank for itself, the effect has been limited.\\nNote that for a random surfer, there are three path to move:\\n\\n\\nfollow a link.\\n\\n\\nteleport to a random page. $\\\\gets$ taxation\\n\\n\\ngoes nowhere. $\\\\gets$ dead ends\\n\\n\\nSince there will always be some fraction of a surfer operating on the Web, so even if there are dead ends, the sum of the ocmponents of $v$ may be less than 1, but it will never reacher 0.\\n5.1.6 Using PageRank in a Search Engine\\n\\n\\nfind the qualified pages, which have at least one of the search terms in the query.\\n\\n\\ncalculate a score for those pages, including PageRank.\\n\\n\\n5.1.7 Exercises\\n5.1.1\\n略\\n5.1.2\\n略\\n5.1.3\\n$n$ nodes: 1\\/n \\nthe additional node: n * 1\\/n * 1\\/n = 1\\/n\\n5.1.4\\ntodo\\n5.1.5\\n略\\n5.1.6\\nthe first node: 1 \\nthe left nodes: 1\\/2\\n5.1.7\\nroot: 1 \\nheight = 1: 1\\/3 \\nheight = 2: 1\\/3 * 1\\/2 \\nheight = k: $\\\\frac{1}{3} \\\\times (\\\\frac{1}{2})^{k-1}, k > 1$\\n5.2 Efficient Computation of PageRank\\nPageRank: matrix-vector multiplicaton $\\\\to$ MapReduce. \\nHowever, we must deal with two issues:\\n\\n\\n$M$ is very sparse. \\n list its nonzero elements.\\n\\n\\nwe may wish to use a combiner to reduce the amount of data (Map $\\\\to$ Reduce). \\n striping approach.\\n\\n\\n5.2.1 Representing Transition Matrices\\nThe proper way to represent any sparse matrix is to list the locations and values of the nonzero entries. \\nThe space needed is linear in the number of nonzero entries.\\nRepresent a column by: \\n 1. one integer for the out-degree, \\n 2. one integer for rowname per nonzero entry.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nmatrix_5_1\\n\\nimport string\\n\\ndf_M = pd.DataFrame(matrix_5_1, index=list(string.uppercase[0:4]), columns=list(string.uppercase[0:4]))\\ndf_M\\n\\ndef compact_representation_of_sparse_matrix(df):\\n \\\"\\\"\\\"It is introduced in Example 5.7\\\"\\\"\\\"\\n \\n degree = df.apply(np.count_nonzero, axis=0) \\n \\n dest = df.apply(np.nonzero, axis=0) \\n dest = dest.apply(lambda x: x[0])\\n \\n return pd.concat([degree, dest], axis=1, keys=['Degree', 'Destinations']) \\n \\n \\ncompact_representation_of_sparse_matrix(df_M)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Varianz<\\/b> Equivalent methods are available for raw and evoked data objects.<\\/p><\\/div>\\n\\nMore information and additional introductory materials can be found at the\\npandas doc sites: http:\\/\\/pandas.pydata.org\\/pandas-docs\\/stable\\/\\nShort Pandas Primer\\nPandas Data Frames\\n~~~~~~~~~~~~~~~~~~\\nA data frame can be thought of as a combination of matrix, list and dict:\\nIt knows about linear algebra and element-wise operations but is size mutable\\nand allows for labeled access to its data. In addition, the pandas data frame\\nclass provides many useful methods for restructuring, reshaping and visualizing\\ndata. As most methods return data frame instances, operations can be chained\\nwith ease; this allows to write efficient one-liners. Technically a DataFrame\\ncan be seen as a high-level container for numpy arrays and hence switching\\nback and forth between numpy arrays and DataFrames is very easy.\\nTaken together, these features qualify data frames for inter operation with\\ndatabases and for interactive data exploration \\/ analysis.\\nAdditionally, pandas interfaces with the R statistical computing language that\\ncovers a huge amount of statistical functionality.\\nExport Options\\n~~~~~~~~~~~~~~\\nThe pandas exporter comes with a few options worth being commented.\\nPandas DataFrame objects use a so called hierarchical index. This can be\\nthought of as an array of unique tuples, in our case, representing the higher\\ndimensional MEG data in a 2D data table. The column names are the channel names\\nfrom the epoch object. The channels can be accessed like entries of a\\ndictionary::\\n>>> df['MEG 2333']\\n\\nEpochs and time slices can be accessed with the .loc method::\\n>>> epochs_df.loc[(1, 2), 'MEG 2333']\\n\\nHowever, it is also...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"1. Single-subject dataset example\\nGetting started\\nWe will use a dataset where one subject was presented with 92 different visual stimuli while brain responses were measured in 100 voxels.\\nThe different visual stimuli (each row) are the conditions, and the voxels (each column) are the measurement channels.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# import the measurements for the dataset\\nmeasurements = io.matlab.loadmat('92imageData\\/simTruePatterns.mat')\\nmeasurements = measurements['simTruePatterns']\\nnCond = measurements.shape[0]\\nnVox = measurements.shape[1]\\n\\n# plot the imported data\\nplt.imshow(measurements,cmap='gray') \\nplt.xlabel('Voxels')\\nplt.ylabel('Conditions')\\nplt.title('Measurements')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Caching transformers within a Pipeline\\n\\nIt is sometimes worthwhile storing the state of a specific transformer\\n since it could be used again. Using a pipeline in GridSearchCV triggers\\n such situations. Therefore, we use the argument memory to enable caching.\\n.. warning::\\n Note that this example is, however, only an illustration since for this\\n specific case fitting PCA is not necessarily slower than loading the\\n cache. Hence, use the memory constructor parameter when the fitting\\n of a transformer is costly.\\n\",\"targets\":\"from tempfile import mkdtemp\\nfrom shutil import rmtree\\nfrom sklearn.externals.joblib import Memory\\n\\n# Create a temporary folder to store the transformers of the pipeline\\ncachedir = mkdtemp()\\nmemory = Memory(cachedir=cachedir, verbose=10)\\ncached_pipe = Pipeline([('reduce_dim', PCA()),\\n ('classify', LinearSVC())],\\n memory=memory)\\n\\n# This time, a cached pipeline will be used within the grid search\\ngrid = GridSearchCV(cached_pipe, cv=3, n_jobs=1, param_grid=param_grid)\\ndigits = load_digits()\\ngrid.fit(digits.data, digits.target)\\n\\n# Delete the temporary cache before exiting\\nrmtree(cachedir)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Creating lattice\\nOcelot has following elements: Drift, Quadrupole, Sextupole, Octupole, Bend, SBend, RBend, Edge, Multipole, Hcor, Vcor, Solenoid, Cavity, Monitor, Marker, Undulator.\\n\",\"targets\":\"# defining of the drifts\\nD1 = Drift(l=2.)\\nD2 = Drift(l=0.6)\\nD3 = Drift(l=0.3)\\nD4 = Drift(l=0.7)\\nD5 = Drift(l=0.9)\\nD6 = Drift(l=0.2)\\n\\n# defining of the quads\\nQ1 = Quadrupole(l=0.4, k1=-1.3)\\nQ2 = Quadrupole(l=0.8, k1=1.4)\\nQ3 = Quadrupole(l=0.4, k1=-1.7)\\nQ4 = Quadrupole(l=0.5, k1=1.3)\\n\\n# defining of the bending magnet\\nB = Bend(l=2.7, k1=-.06, angle=2*pi\\/16., e1=pi\\/16., e2=pi\\/16.)\\n\\n# defining of the sextupoles\\nSF = Sextupole(l=0.01, k2=1.5) #random value\\nSD = Sextupole(l=0.01, k2=-1.5) #random value\\n\\n# cell creating\\ncell = (D1, Q1, D2, Q2, D3, Q3, D4, B, D5, SD, D5, SF, D6, Q4, D6,\\n SF, D5, SD, D5, B, D4, Q3, D3, Q2, D2, Q1, D1)\\n\\ncell\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"3. 搜索\\nNumPy有多个函数可以在数据中进行搜索。\\n\\nargmax函数返回数组中最大值对应的下标\\n\",\"targets\":\"a = np.array([2,4,8])\\nnp.argmax(a)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# prepare digits to send to online prediction endpoint\\ndigits_float32 = np.concatenate((font_digits, validation_digits[:100-N])) # pixel values in [0.0, 1.0] float range\\ndigits_uint8 = np.round(digits_float32*255).astype(np.uint8) # pixel values in [0, 255] int range\\nlabels = np.concatenate((font_labels, validation_labels[:100-N]))\\nwith open(\\\"digits.json\\\", \\\"w\\\") as f:\\n for digit in digits_uint8:\\n # the format for AI Platform online predictions is: one JSON object per line\\n data = json.dumps({\\\"images\\\": digit.tolist()}) # \\\"images\\\" because that was the name you gave this parametr in the serving funtion my_serve\\n f.write(data+'\\\\n')\\n\\n# Request online predictions from deployed model (REST API) using the \\\"gcloud ml-engine\\\" command line.\\npredictions = !gcloud ai-platform predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}\\nprint(predictions)\\n\\npredictions = np.stack([json.loads(p) for p in predictions[2:]]) # first elemet is the name of the output layer: drop it, parse the rest\\ndisplay_top_unrecognized(digits_float32, predictions, labels, N, 100\\/\\/N)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nTest the deployed model\\nYour model is now available as a REST API. Let us try to call it. The cells below use the \\\"gcloud ml-engine\\\"\\ncommand line tool but any tool that can send a JSON payload to a REST endpoint will work.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%matplotlib inline\\nfrom matplotlib import pyplot as plt\\nfrom pandas import DataFrame\\n\\ndef plot_by(df, column='Dept Name', count=10, ascending=False):\\n \\n # Group the data by the column specified and sum the totals.\\n data = df.groupby(column)['Total'].sum().dropna()\\n \\n # Sort the data.\\n data = DataFrame(data, columns=['Total']).sort('Total', ascending=ascending)\\n \\n # Plot the subset of the sorted data that the user is interested in.\\n data = data[:count].plot(kind='bar')\\n \\n # Plot settings.\\n plt.title('%s Costs' % column)\\n plt.ylabel('Cost ($)')\\n\\nfrom IPython.html.widgets import interact, fixed\\ninteract(plot_by, df=fixed(df), column=df.columns.tolist(), count=(5,15));\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nNow the data can be explored using matplotlib and interact. The following function plots the costs of the selected parameter type.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"MNIST_NP_Softmax_Basic.ipynb\\\".\\nThe first task is:\\nThe Accuracy function takes two vectors of class labels and computes the accuracy or in other words how many of them match.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef Accuracy(preds, labels):\\n accuracy = sum(preds == labels)\\/(float(len(labels)))\\n return accuracy\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Comment the cases for which the p-value is very low (i.e. p < 0.01).\\nPart c\\n$$\\\\int {-1}^1\\\\int {-1}^1e^{-6 x^2+10 x y-6 y^2}dxdy=0.86787$$\\n2d midpoint integration\\n\\n\\n\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Definition of the T values for which to calculate alpha and beta\\nN = 100\\nrvec = np.linspace(1e-6,np.sqrt(2),N)\\n# initializate arrays for each method\\nalpha_int_r = np.empty(N)\\nbeta_int_r = np.empty(N)\\nalpha_hist_r = np.empty(N)\\nbeta_hist_r = np.empty(N)\\n\\n# midpoint parameters\\nTint_r = np.sqrt(Xint**2+Yint**2)\\n\\n# histogram quadrature parameters\\nNb_r = 100\\nT_bins_r = np.linspace(0, np.sqrt(2),Nb_r+1)\\ntstShist_r, tstShist_edges = np.histogram(tstSR,bins=T_bins_r,normed=1)\\ntstBhist_r, tstBhist_edges = np.histogram(tstBR,bins=T_bins_r,normed=1)\\nbin_width_r = tstShist_edges[1]-tstShist_edges[0]\\n\\nfor i,Tcut in enumerate(rvec):\\n alpha_int_r[i], beta_int_r[i] = test_int(Tcut,tstSR,tstBR)\\n alpha_hist_r[i], beta_hist_r[i] = hist_integral(Tcut,tstShist_r,tstBhist_r,T_bins_r)\\n\\nstn_int_r = (1-alpha_int_r)\\/np.sqrt(beta_int_r)\\nstn_hist_r = (1-alpha_hist_r)\\/np.sqrt(beta_hist_r)\\n\\nfigs = [plt.figure(j+1) for j in range(3)]\\nax1, ax2, ax3 = [fig.add_subplot(111) for fig in figs]\\nax1.plot(rvec,1-alpha_int_r,'--',lw=3,label='int')\\nax1.plot(rvec,1-alpha_hist_r,':',lw=3,label='histogram')\\nax1.set_ylabel(r'$1-\\\\alpha$')\\nax2.plot(rvec,beta_int_r,'--',lw=3,label='int')\\nax2.plot(rvec,beta_hist_r,':',lw=3,label='histogram')\\nax2.set_ylabel(r'$\\\\beta$')\\nax3.plot(rvec,stn_int_r,'--',lw=3,label='int')\\nax3.plot(rvec[1:],stn_hist_r[1:],':',lw=3,label='histogram')\\nax3.set_ylabel(r'$\\\\frac{1-\\\\alpha}{\\\\sqrt{\\\\beta}}$')\\nfor ax in [ax1,ax2,ax3]:\\n ax.set_xlabel('r')\\n ax.grid(True)\\n ax.legend()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Feature Selector Usage.ipynb\\\".\\nThe first task is:\\nWe can view a heatmap of the correlations above the threhold. The features which will be dropped are on the x-axis.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfs.plot_collinear()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"4. Enter DV360 Segmentology Parameters\\nDV360 funnel analysis using Census data.\\n 1. Wait for BigQuery->->->Census_Join<\\/b> to be created.\\n 1. Join the StarThinker Assets Group<\\/a> to access the following assets\\n 1. Copy DV360 Segmentology Sample<\\/a>. Leave the Data Source as is, you will change it in the next step.\\n 1. Click Edit Connection, and change to BigQuery->->->Census_Join<\\/b>.\\n 1. Or give these intructions to the client.\\nModify the values below for your use case, can be done multiple times, then click play.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nFIELDS = {\\n 'auth_read': 'user', # Credentials used for reading data.\\n 'recipe_timezone': 'America\\/Los_Angeles', # Timezone for report dates.\\n 'recipe_project': '', # Project ID hosting dataset.\\n 'auth_write': 'service', # Authorization used for writing data.\\n 'recipe_name': '', # Name of report, not needed if ID used.\\n 'recipe_slug': '', # Name of Google BigQuery dataset to create.\\n 'partners': [], # DV360 partner id.\\n 'advertisers': [], # Comma delimited list of DV360 advertiser ids.\\n}\\n\\nprint(\\\"Parameters Set To: %s\\\" % FIELDS)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Check the columns\\nFR_re_df.isnull().all()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nCheck missing values\\nNow, we will drop all the columns and all the rows which contain only null values.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"3. ANOVA tables and post-hoc comparisons\\n ANOVAs and post-hoc tests are only available for :code:`Lmer` models estimated using the :code:`factors` argument of :code:`model.fit()` and rely on implementations in R<\\/p><\\/div>\\n\\nIn the previous tutorial where we looked at categorical predictors, behind the scenes :code:pymer4 was using the :code:factor functionality in R. This means the output of :code:model.fit() looks a lot like :code:summary() in R applied to a model with categorical predictors. But what if we want to compute an F-test across all levels of our categorical predictor?\\n:code:pymer4 makes this easy to do, and makes it easy to ensure Type III sums of squares infereces are valid. It also makes it easy to follow up omnibus tests with post-hoc pairwise comparisons.\\nANOVA tables and orthogonal contrasts\\nBecause ANOVA is just regression, :code:pymer4 can estimate ANOVA tables with F-results using the :code:.anova() method on a fitted model. This will compute a Type-III SS table given the coding scheme provided when the model was initially fit. Based on the distribution of data across factor levels and the specific coding-scheme used, this may produce invalid Type-III SS computations. For this reason the :code:.anova() method has a :code:force-orthogonal=True argument that will reparameterize and refit the model using orthogonal polynomial contrasts prior to computing an ANOVA table.\\nHere we first estimate a mode with dummy-coded categories and suppress the summary output of :code:.fit(). Then we use :code:.anova() to examine the F-test results.\\n\",\"targets\":\"# import basic libraries and sample data\\nimport os\\nimport pandas as pd\\nfrom pymer4.utils import get_resource_path\\nfrom pymer4.models import Lmer\\n\\n# IV3 is a categorical predictors with 3 levels in the sample data\\ndf = pd.read_csv(os.path.join(get_resource_path(), \\\"sample_data.csv\\\"))\\n\\n# # We're going to fit a multi-level regression using the\\n# categorical predictor (IV3) which has 3 levels\\nmodel = Lmer(\\\"DV ~ IV3 + (1|Group)\\\", data=df)\\n\\n# Using dummy-coding; suppress summary output\\nmodel.fit(factors={\\\"IV3\\\": [\\\"1.0\\\", \\\"0.5\\\", \\\"1.5\\\"]}, summarize=False)\\n\\n# Get ANOVA table\\nprint(model.anova())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Part1a_Fundamentals_of_Programming.ipynb\\\".\\nThe first task is:\\nSets\\nInstances of the set type are equivalent to mathematical sets. Like their math counterparts, literal sets in Python are defined by comma seperated values between curly braces ({}). Sets are unordered containers of unique values. Duplicated elements are ignored. Beacuse they unordered, sets are not sequences and cannot be duplicated.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# a literal set formed with elements of various types\\n{1.0, 10, \\\"one hundred\\\", (1, 0, 0, 0)}\\n\\n# a literal set OF special values\\n{True, False, None, \\\"\\\", 0.0, 0}\\n\\n# conversion from a list to a set\\nset([2.0, 4, \\\"eight\\\", (16,), 4, 4, 2.0])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%matplotlib inline\\nimport matplotlib.pyplot as plt\\nfrom mpl_toolkits.mplot3d import Axes3D\\nplt.xkcd()\\nfig = plt.figure(figsize=(10, 5))\\nax = fig.add_subplot(121, projection='3d')\\nax.plot_surface(x, y, f)\\nax.set_title('Original function')\\nax = fig.add_subplot(122, projection='3d')\\nax.plot_surface(x, y, fappr - f)\\nax.set_title('Approximation error with rank=%d, err=%3.1e' % (r, er))\\nfig.subplots_adjust()\\nfig.tight_layout()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAnd do some 3D plotting\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Create Convolutional Model\\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\\n\\nApply 1, 2, or 3 Convolution and Max Pool layers\\nApply a Flatten Layer\\nApply 1, 2, or 3 Fully Connected Layers\\nApply an Output Layer\\nReturn the output\\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.\\n\",\"targets\":\"def conv_net(x, keep_prob):\\n \\\"\\\"\\\"\\n Create a convolutional neural network model\\n : x: Placeholder tensor that holds image data.\\n : keep_prob: Placeholder tensor that hold dropout keep probability.\\n : return: Tensor that represents logits\\n \\\"\\\"\\\"\\n # Should I attempt the Siraj's VGG16 conv model? xxx given we have merged conv2d and max into \\n # a single function, I guess below is not possible the way this project is setup. Another day.\\n # Conv block 1 with 064 output filters - Conv2d > Conv2d > MaxPooling2D\\n # Conv block 2 with 128 output filters - Conv2d > Conv2d > MaxPooling2D\\n # Conv block 3 with 256 output filters - Conv2d > Conv2d > Conv2d > MaxPooling2d\\n # Conv block 4 with 512 output filters - Conv2d > Conv2d > Conv2d > MaxPooling2d\\n # Fully-connected classifier - Flatten > Dense > Dense > Dense\\n '''\\n model_vgg = Sequential()\\n model_vgg.add(ZeroPadding2D((1, 1), input_shape=(img_width, img_height,3)))\\n model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))\\n model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))\\n\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))\\n model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))\\n\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))\\n model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))\\n\\n model_vgg.add(ZeroPadding2D((1, 1)))\\n model_vgg.add(Convolution2D(512, 3, 3, activation='relu',...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"2.3 Question 3\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nliste1 = [\\\"a\\\", \\\"b\\\", \\\"c\\\", \\\"d\\\", \\\"e\\\"]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Configure and build the neural network.\\nEach layer has the 4 inputs $\\\\mathbf{H}^{T}\\\\mathbf{r}$, $\\\\mathbf{H}^{T}\\\\mathbf{H}$, $\\\\mathbf{t}_k$ and $\\\\mathbf{v}_k$. The index $k$ denotes the layer. The layers can also be interpreted as iterations of an optimization algorithm [1].\\nThe nonlinear operation\\n$\\n\\\\begin{align}\\n&\\\\quad z_{k} = \\\\rho\\\\left(\\\\text{W}{1k}\\\\begin{bmatrix}\\n\\\\mathbf{H}^{T}\\\\mathbf{r}\\\\\\n\\\\hat{\\\\mathbf{t}}{k}\\\\\\n\\\\mathbf{H}^{T}\\\\mathbf{H}\\\\hat{\\\\text{t}}{k}\\\\\\n\\\\mathbf{v}{k}\\n\\\\end{bmatrix}+\\\\mathbf{b}{1k}\\\\right)\\\\\\n&\\\\hat{\\\\mathbf{t}}{k+1} = \\\\psi_{t_{k}}(\\\\mathbf{W}{2k}\\\\mathbf{z}{k}+\\\\mathbf{b}{2k})\\\\\\n&\\\\hat{\\\\mathbf{v}}{k+1} = \\\\mathbf{W}{3k}\\\\mathbf{z}{k}+\\\\mathbf{b}{3k}\\\\\\n&\\\\qquad\\\\hat{\\\\mathbf{t}}{1} = \\\\mathbf{0}\\\\tag{10}\\n\\\\end{align}\\n$\\nis applied to the input. $\\\\mathbf{t}_0$ is the received data vector.\\nSummarized, each layer does roughly the following steps:\\n* Concatenate the inputs.\\n* Linear transformation.\\n* Apply ReLU function.\\n* Calculate $\\\\mathbf{v}{k+1}$ as a linear trafo of the ReLU output and use ResNet feature.\\n* Calculate $\\\\hat{\\\\mathbf{t}}{k+1}$ as a linear trafo of the ReLU output which is then fed to the linear soft sign function.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# DetNet config\\nlayers = 3*K\\nv_len = 2*K\\nz_len = 8*K\\n\\n# Training params\\ntraining_steps = 10000\\nbatch_size_train = 5000\\nsnr_var_train = 3.0 # Maximum absolute deviation of the SNR from its mean in logarithmic scale.\\n\\n# Test params\\ntest_steps= 1000\\nbatch_size_test = 5000\\nsnr_range = np.arange(8, 14, 1)\\n\\n# Definition of the Loss function\\ndef own_loss(t, t_train, t_ZF):\\n loss_l = torch.zeros(len(t), 1, device=device) # Denotes the loss in Layer L\\n for layer in range(1,len(t)+1):\\n loss_l[layer-1] = torch.log(torch.Tensor([layer+1]).to(device))*torch.mean(torch.mean(torch.square(t_train - t[layer-1]),1)\\/torch.mean(torch.square(t_train - t_ZF),1))\\n return loss_l\\n \\n\\n# Definition of the DetNet\\nclass DetNet(nn.Module):\\n # Build DetNet\\n def __init__(self, layers, K, v_len, z_len):\\n # Here we define the trainable parameter (Net)\\n super(DetNet, self).__init__()\\n # We have to use here nn.ModuleList instead of a PythonList. 
(Otherwise you’ll get an error saying\\n # that your model has no parameters, because PyTorch does not see the parameters of the layers stored\\n # in a Python list)\\n # Furtheremore, we initialize the linear trafo with normailzed weights\\n # Linear Traffos W_1l, W_2l, W_3l\\n self.linear_trafo_1_l = nn.ModuleList()\\n self.linear_trafo_1_l.extend([nn.Linear(3*K + v_len, z_len) for i in range(1, layers+1)])\\n for i in range(0, layers):\\n nn.init.normal_(self.linear_trafo_1_l[i].weight, std = 0.01)\\n nn.init.normal_(self.linear_trafo_1_l[i].bias, std = 0.01)\\n self.linear_trafo_2_l = nn.ModuleList()\\n self.linear_trafo_2_l.extend([nn.Linear(z_len, K) for i in range(1, layers+1)])\\n for i in range(0, layers):\\n nn.init.normal_(self.linear_trafo_2_l[i].weight, std = 0.01)\\n nn.init.normal_(self.linear_trafo_2_l[i].bias, std = 0.01)\\n self.linear_trafo_3_l = nn.ModuleList()\\n self.linear_trafo_3_l.extend([nn.Linear(z_len , v_len) for i in range(1,...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"T\\\".istitle()\\n\\n\\\"t\\\".istitle()\\n\\n# we need a list to store the tagged tokens\\ntagged_tokens = []\\n\\n# tokenisation is done by using the string method `split(\\\" \\\")` \\n# that splits a string upon white spaces\\nfor n, token in enumerate(de_bello_gallico_book1.split(\\\" \\\")):\\n if(token.istitle()):\\n tagged_tokens.append((token, \\\"Entity\\\"))\\n else:\\n tagged_tokens.append((token, \\\"O\\\")) \\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nVery simple baseline\\nNow let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods. \\nBaseline method: \\n- cycle through each token of the text\\n- if the token starts with a capital letter it's a named entity (only one type, i.e. Entity)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Tehdään luokittelija\\nclf_dt = DecisionTreeClassifier()\\nclf_dt.fit(X_train.reshape(-1,28*28), y_train)\\npred_dt = clf_dt.predict(X_test.reshape(-1,28*28))\\n\\n# Piirretään tulokseksi alkupäästä luokituksia\\npltsize=1\\nplt.figure(figsize=(10*pltsize, pltsize))\\nfor i in range(10):\\n plt.subplot(1,10,i+1)\\n plt.axis('off')\\n plt.imshow(X_test[i,:,:])\\n plt.title(str(pred_dt[i]) + ' (' + str(y_test[i]) + ')')\\n\\n# Raportoidaan tulokset\\nprint('Luokiteltu', len(pred_dt), 'kuvaa, luokituksista oikein menneiden osuus on:', accuracy_score(y_test, pred_dt)*100, '%')\\nprint('Alla kuvat ja analysoidut luokat, oikeat luokat ovat suluissa')\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nVoidaanko sanoa, että tämä satunnainen sotku on selvästi eri tulos kuin aiemmin tekemämme pääkomponenttianalyysin tuottama?\\nMerkkien tunnistaminen\\nSeuraavaksi pääsemme itse asiaan. Olemme nyt pöyhineet dataa ja päässeet varmuuteen, että se on järkevää ja analysoitavissa.\\nRakennamme koneoppimismenetelmiä käyttäen nk. luokittelijan, joka kykenee oppimaan kuvien piirteet ja tunnistamaan niitä sen jälkeen. Nyt on syytä jälleen olla tarkkana: jos syötämme menetelmälle dataa, niin se varmasti osaa oppia ulkoa kyseisen datan kaikki yksityiskohdat. Mutta haluamme, että menetelmä \\\"näkee metsän puilta\\\", eli oppii tunnistamaan merkkien yleisiä hahmoja. Teemme siis koneoppimisen perustempun, eli jaamme datan kahteen osaan. Harjoitusdatalla opetetaan menetelmä, kun taas sen toimintaa testataan testidatalla. Koneoppijan tulee siis selvitä sellaisistakin kuvista, joita se ei ole aikaisemmin nähnyt. Tällä tavalla varmistetaan, että ei pelkästään opita ulkoa harjoitusdataa.\\nAlla oleva koodi tekee luokittelun ja tulostaa esimerkiksi 10 ensimmäistä luokiteltua merkkiä. Luokitteluun käytetään klassista koneoppimisen menetelmää, nk. päätöspuuta. Kuinka hyvin menetelmä pärjää?\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Com maior refinamento de dados:\\nMultinomialNB:\\nTodos: 0.808652246256\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfrom sklearn.ensemble import AdaBoostClassifier\\nclassificador = AdaBoostClassifier(n_estimators=100)\\n\\nresultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"DL0110EN\\/1.1_1Dtensors_v2.ipynb\\\".\\nThe first task is:\\nThe result is simply CHANGE INPUTS BELOW\\n\",\"targets\":\"# Select an analysis region (Lat-Lon) within the extents listed above. \\n# Select a time period (Min-Max) within the extents listed above (Year-Month-Day)\\n# This region and time period will be used for the cloud assessment\\n\\n# Nairobi, Kenya\\nlatitude = (-1.3407, -1.2809)\\nlongitude = (36.7640, 36.9206)\\n\\n# Mombasa, Kenya\\n# latitude = (-4.12, -3.975)\\n# longitude = (39.55, 39.7) \\n\\n# Mau Forest - Western Kenya\\n# latitude = (-0.13406, 0.21307)\\n# longitude = (35.28322, 35.56681)\\n\\n# Dar es Salaam, Tanzania\\n# latitude = (-7.0, -6.7)\\n# longitude = (39.1, 39.4)\\n\\n# Lake Sulunga, Tanzania\\n# latitude = (-6.2622, -5.8822) \\n# longitude = (34.9802, 35.3602) \\n\\n# Freetown, Sierra Leone\\n# latitude = (8.3267, 8.5123)\\n# longitude = (-13.3109, -13.1197 )\\n\\n# Vietnam\\n# latitude = (10.9358, 11.0358)\\n# longitude = (107.1899, 107.2899)\\n\\n# Ghanas\\n# latitude = (5.5, 5.7) # Accra\\n# longitude = (-0.4, 0.0) # Accra\\n\\n# Time Period\\ntime_extents = ('2016-01-01', '2016-01-31')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Comparisons\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nmethods = [odeint, rungekutta1, rungekutta2, rungekutta4]\\nmarkers = ['+', 'o', 's', '>']\\n\\ndef test_1(n=101):\\n t = np.linspace(0, 10, n)\\n for method, m in zip(methods, markers):\\n sol = method(pend, y0, t, args=(b, c))\\n plt.plot(t, sol[:, 0], label=method.__name__, marker=m)\\n plt.legend(loc='best')\\n plt.title(\\\"Comparison of different ODE integration methods for $n={}$ points\\\".format(n))\\n plt.xlabel(\\\"$t = [0, 10]$\\\")\\n plt.grid()\\n plt.show()\\n\\ntest_1(10)\\n\\ntest_1(20)\\n\\ntest_1(100)\\n\\ntest_1(200)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Create Print View List Helper Function\\nThis function will gather the current list of custom views from the HPE IMC NMS and print them out to the screen.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef print_views():\\n views_list = get_custom_views(url=auth.url, auth=auth.creds)\\n print (\\\"There are a total of \\\" + str(len(views_list)) + \\\" views currently\\\")\\n for view in views_list:\\n print (view['name'])\\n print (json.dumps(views_list[0], indent = 4))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"1. Facet Grid 2 . Pair Plot\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ng = sns.FacetGrid(pokemon, col =\\\"Generation\\\", row=\\\"Legendary\\\")\\ng.map(sns.kdeplot, \\\"Attack\\\")\\nplt.show()\\n\\nsns.pairplot(pokemon[['HP', 'Attack', 'Defense']])\\nplt.show()\\n\\n\\ng = sns.PairGrid(pokemon,\\n x_vars=[\\\"Generation\\\",\\\"Legendary\\\"],\\n y_vars=[\\\"Attack\\\",\\\"Defense\\\",\\\"Sp. Atk\\\", \\\"Sp. Def\\\"],\\n aspect=.85, size=6)\\ng.map(sns.violinplot,palette=\\\"pastel\\\")\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Also remember, we should normalize our input values!\\n \\nSolve the questions in green blocks. Save the file as ME249-Lecture-3-YOURNAME.ipynb and change YOURNAME in the bottom cell. Send me and the grader the html<\\/b> file not the ipynb file. \\n<\\/p>\\n\\n \\n$$\\nf(x) = \\\\sum_{n=-\\\\infty}^{\\\\infty}a_n\\\\exp\\\\left(\\\\hat{\\\\jmath}\\\\frac{2\\\\pi nx}{Lx}\\\\right)\\n$$\\n<\\/p>\\nand \\n \\n$$\\na_n = \\\\frac{1}{L_x}\\\\int_Lf(x)\\\\exp\\\\left(-\\\\hat{\\\\jmath}\\\\frac{2\\\\pi nx}{Lx}\\\\right)dx\\n$$\\n<\\/p>\\nHere $\\\\hat{\\\\jmath}^2=-1$.Often the reduction to wavenumber is used, where\\n \\n$$\\nk_n = \\\\frac{2\\\\pi n}{L_x}\\n$$\\n<\\/p>\\nNote that if $x$ is time instead of distance, $L_x$ is a time $T$ and the smallest frequency contained in the domain is $f_0=1\\/T_0$ and the wavenumber $n$ is $k_n=2\\\\pi f_0n=2\\\\pi f_n$ with $f_n$ for $\\\\vert n\\\\vert >1$ are the higher frequencies. \\n \\n$$\\nk_n=\\\\frac{2\\\\pi n}{N_x}\\n$$\\n<\\/p>\\nConsider a function $f$ periodic over a domain $0\\\\leq x\\\\leq 2\\\\pi$, discretized by $N_x$ points. The nodal value is $f_i$ located at $x_i=(i+1)\\\\Delta x$ with $\\\\Delta x=L_x\\/Nx$. The DFT is defined as\\n \\n# Daniel Strohmeier Photon current density is then calculated at $z=d$ and plotted below as a function of time.<\\/font><\\/p>\\n\",\"targets\":\"particlecurrentarray = []\\ntarray = []\\nfor t in linspace(10**-15,50*10**-12,1000):\\n tarray.append(t*10**12)\\n particlecurrentarray.append(particlecurrent(t))\\n\\n#Update the matplotlib configuration parameters\\nmpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})\\n\\n#Adjust figure size\\nplt.subplots(figsize=(12,6))\\n\\nplt.plot(tarray,particlecurrentarray,linewidth=2)\\nplt.xlim(np.min(tarray),np.max(tarray))\\nplt.ylim(0)\\nplt.xlabel('time (ps)')\\nplt.ylabel('Photon Current at $z=d$ $(s^{-1} \\\\cdot m^{-2})$')\\n#plt.semilogy()\\nplt.legend(loc=4)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"02_introduction\\/intro_to_python.ipynb\\\".\\nThe first task is:\\nUnicode string\\nLike strings, but with more characters!\\nCan you write Python code for it?\\n\",\"targets\":\"\\nmy_unicode = u'Hellö World!'\\nmy_unicode\\n\\nprint(my_unicode)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"0.14\\/_downloads\\/plot_object_raw.ipynb\\\".\\nThe first task is:\\nThe :class:Raw <mne.io.Raw> data structure: continuous data\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom __future__ import print_function\\n\\nimport mne\\nimport os.path as op\\nfrom matplotlib import pyplot as plt\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Combine generation into one large dataframe\\n\",\"targets\":\"if __name__ == '__main__':\\n exception_list = []\\n facility_gen = pd.concat(Parallel(n_jobs=-1)(delayed(facility_line_to_df)(json.loads(row)) for row in gen_rows))\\n facility_gen.reset_index(drop=True, inplace=True)\\n facility_gen.rename({'value':'generation (MWh)'}, axis=1, inplace=True)\\n\\nfacility_gen.loc[:,'lat'] = facility_gen.loc[:,'lat'].astype(float)\\nfacility_gen.loc[:,'lon'] = facility_gen.loc[:,'lon'].astype(float)\\nfacility_gen.loc[:, 'plant id'] = facility_gen.loc[:, 'plant id'].astype(int)\\n\\n#drop\\nfacility_gen.tail()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Exercises-1.ipynb\\\".\\nThe first task is:\\n96\\nHow many people have played a role called \\\"The Dude\\\"?\\nCan you write Python code for it?\\n\",\"targets\":\"\\nc = cast\\nc = c[c.character == \\\"The Dude\\\"]\\nlen(c)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"All input data is stored in pandas dataframes under, self.Data.Interances and self.Data.Foliations:\\n\",\"targets\":\"sandstone.Data.Foliations.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Chapman\\/Ch1-Problem_1-19.ipynb\\\".\\nThe first task is:\\nAnswer the following questions about this power system.\\n(a)\\n\\nAssume that the switch shown in the figure is initially open, and calculate the current I , the power factor, and the real, reactive, and apparent power being supplied by the source.\\n\\n(b)\\n\\nHow much real, reactive, and apparent power is being consumed by each load with the switch open?\\n\\n(c)\\n\\nAssume that the switch shown in the figure is now closed, and calculate the current I , the power factor, and the real, reactive, and apparent power being supplied by the source.\\n\\n(d)\\n\\nHow much real, reactive, and apparent power is being consumed by each load with the switch closed?\\n\\n(e)\\n\\nWhat happened to the current flowing from the source when the switch closed? Why?\\n\\nSOLUTION\\n(a)\\nWith the switch open, only loads 1 and 2 are connected to the source. The current $\\\\vec{I}_1$ in Load 1 and the current $\\\\vec{I}_2$ in Load 2 are:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nI1 = V\\/Z1\\nI2 = V\\/Z2\\nI1_angle = arctan(I1.imag\\/I1.real)\\nI2_angle = arctan(I2.imag\\/I2.real)\\nprint('''I1 = {:.1f} A ∠{:.1f}°\\nI2 = {:.1f} A ∠{:.1f}°'''.format(\\n abs(I1), I1_angle\\/pi*180,\\n abs(I2), I2_angle\\/pi*180))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"LSTM.ipynb\\\".\\nThe first task is:\\nTraining is good, but having visual insight is even better:\\nOkay, let's plot this simply in the notebook for now.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# (Inline plots: )\\n%matplotlib inline\\n\\nfont = {\\n 'family' : 'Bitstream Vera Sans',\\n 'weight' : 'bold',\\n 'size' : 18\\n}\\nmatplotlib.rc('font', **font)\\n\\nwidth = 12\\nheight = 12\\nplt.figure(figsize=(width, height))\\n\\nindep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))\\nplt.plot(indep_train_axis, np.array(train_losses), \\\"b--\\\", label=\\\"Train losses\\\")\\nplt.plot(indep_train_axis, np.array(train_accuracies), \\\"g--\\\", label=\\\"Train accuracies\\\")\\n\\nindep_test_axis = np.append(\\n np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),\\n [training_iters]\\n)\\nplt.plot(indep_test_axis, np.array(test_losses), \\\"b-\\\", label=\\\"Test losses\\\")\\nplt.plot(indep_test_axis, np.array(test_accuracies), \\\"g-\\\", label=\\\"Test accuracies\\\")\\n\\nplt.title(\\\"Training session's progress over iterations\\\")\\nplt.legend(loc='upper right', shadow=True)\\nplt.ylabel('Training Progress (Loss or Accuracy values)')\\nplt.xlabel('Training iteration')\\n\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\" Creating an :class:`mne.Epochs` object with metadata is done by passing\\n a :class:`pandas.DataFrame` to the ``metadata`` kwarg as follows:<\\/p><\\/div>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndata = epochs.get_data()\\nmetadata = epochs.metadata.copy()\\nepochs_new = mne.EpochsArray(data, epochs.info, metadata=metadata)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Baseline model\\n\",\"targets\":\"rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)\\nrf.fit(X_train, y_train)\\n\\ny_pred = rf.predict_proba(X_test)\\n\\nauc = roc_auc_score(y_true=y_test, y_score=y_pred[:, 1])\\n\\nauc\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\" First calling and training the algorithm.\\nA specificity here is the presence of the 'shape_1X' keyword to specify the shape of a single sample.\\nI have added it as pictures fed to the machinery might not be square. **New in version 0.1.3** : possibility to directly use an int as shape_1X for sequence data.<\\/p>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ngcf = gcForest(shape_1X=4, window=2, tolerance=0.0)\\ngcf.fit(X_tr, y_tr)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# format payload\\nhttp_body = httpbody_pb2.HttpBody(\\n data=open(payload_file).read().encode(\\\"utf-8\\\"),\\n content_type=\\\"application\\/json\\\",\\n)\\n\\n# Initialize request argument(s)\\nrequest = gapic.RawPredictRequest(endpoint=endpoint_name, http_body=http_body)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nFormat the http request\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"AlignInfo\\n\",\"targets\":\"from Bio import AlignIO\\nfrom Bio.Align.AlignInfo import SummaryInfo\\nfrom Bio.Alphabet import ProteinAlphabet\\n\\nalign = AlignIO.read('samples\\/cas9align.fasta', 'fasta', alphabet=ProteinAlphabet())\\nsummary = SummaryInfo(align)\\nprint(summary.information_content())\\n\\nsummary.dumb_consensus(consensus_alpha=ProteinAlphabet())\\n\\nsummary.gap_consensus(consensus_alpha=ProteinAlphabet())\\n\\nprint(summary.alignment)\\n\\nprint(summary.pos_specific_score_matrix())\\n\\nfrom Bio.Align.Applications import ClustalwCommandline\\nclustalw_exe = 'clustalw2'\\nccli = ClustalwCommandline(clustalw_exe, infile=\\\"samples\\/input4align.fasta\\\", outfile='..\\/..\\/aoutput.aln')\\nprint(ccli)\\n\\n\\n clustalw_exe = 'clustalw2'\\n\\nclustalw_exe='c:\\\\\\\\windows\\\\\\\\program file\\\\\\\\clustal\\\\\\\\clustalw.exe'\\n\\nfrom Bio.Align.Applications import ClustalwCommandline\\nclustalw_exe = 'clustalw2'\\nccli = ClustalwCommandline(clustalw_exe,\\ninfile=\\\"samples\\/input4align.fasta\\\", outfile='..\\/..\\/aoutput.aln')\\nccli()\\n\\nfrom Bio import AlignIO\\nseqs = AlignIO.read('samples\\/aoutput.aln', 'clustal')\\nseqs[0]\\n\\nseqs[1]\\n\\nseqs[2]\\n\\nfrom Bio.Align.Applications import ClustalwCommandline\\nclustalw_exe = 'clustalw2'\\nccli = ClustalwCommandline(clustalw_exe,\\ninfile=\\\"input4align.fasta\\\", outfile='..\\/..\\/aoutput.aln',\\npwgapopen=5)\\nprint(ccli)\\n\\nfrom Bio.Align.Applications import ClustalwCommandline\\nccli = ClustalwCommandline()\\nhelp(ccli)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Oracle_Jupyter\\/Oracle_histograms.ipynb\\\".\\nThe first task is:\\nFetch the histogram data into a pandas dataframe\\nCan you write Python code for it?\\n\",\"targets\":\"\\nimport pandas as pd\\n\\n# query Oracle using ora_conn and put the result into a pandas Dataframe\\nwith oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string) as ora_conn:\\n hist_pandasDF = pd.read_sql(query, con=ora_conn) \\n\\n# Decription\\n#\\n# BUCKET: the bucket number, range from 1 to bins (included)\\n# VALUE: midpoint value of the given bucket\\n# COUNT: number of values in the bucket \\n \\nhist_pandasDF\\n\\n# Optionally normalize the event count into a frequency\\n# dividing by the total number of events\\n \\nhist_pandasDF[\\\"FREQUENCY\\\"] = hist_pandasDF[\\\"COUNT\\\"] \\/ sum(hist_pandasDF[\\\"COUNT\\\"]) \\n \\nhist_pandasDF\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\" for full documentation on the `Info` object, see\\n `tut_info_objects`. See also\\n `sphx_glr_auto_examples_io_plot_objects_from_arrays.py`.<\\/p><\\/div>\\n\\nNormally, :class:mne.Info objects are created by the various\\ndata import functions <ch_convert>.\\nHowever, if you wish to create one from scratch, you can use the\\n:func:mne.create_info function to initialize the minimally required\\nfields. Further fields can be assigned later as one would with a regular\\ndictionary.\\nThe following creates the absolute minimum info structure:\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Create some dummy metadata\\nn_channels = 32\\nsampling_rate = 200\\ninfo = mne.create_info(32, sampling_rate)\\nprint(info)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"BGS, ELG, LRG, QSO, LYA, MWS_NEARBY, WD, SKY\\nThese mocks are are one single mock catalog, so MOCKID will provide the row of each target in the original, unfiltered parent catalog. Here, we demonstrate how to navigate the input and output files using the BGS mock.\\nFor simplicity we show how to compare the input and output celestial coordinates and the redshifts, although obviuosly other properties are available in the parent mock.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef read_bgs_mock():\\n import h5py\\n mockfile = class2mockfile['BGS'].format(**os.environ)\\n print('Reading {}'.format(mockfile))\\n with h5py.File(mockfile, mode='r') as f:\\n ra = f['Data\\/ra'][:].astype('f8') % 360.0 # enforce 0 < ra < 360\\n dec = f['Data\\/dec'][:].astype('f8')\\n zobs = f['Data\\/z_obs'][:].astype('f4') \\n return ra, dec, zobs\\n\\n%time ra, dec, zobs = read_bgs_mock()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Group events by \\\"PID\\\" and compute \\nmost_switching = df.groupby('next_pid').describe(include=['object'])\\nmost_switching.head()\\n\\nmost_switching = most_switching.unstack()\\nmost_switching.head()\\n\\nmost_switching = most_switching['next_comm']\\\\\\n .sort_values(by=['count'], ascending=False)\\nmost_switching.head()\\n\\nmost_switching_pid = most_switching.index[1]\\nmost_switching_task = most_switching.values[1][2]\\ntask_name = \\\"{}:{}\\\".format(most_switching_pid, most_switching_task)\\nlogging.info(\\\"The second most swithing task is: [%s]\\\", task_name)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nEvents grouping\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"A continuación, lo siguiente que tenemos que hacer es crear un enjambre de hormigas. Con esta función, lo podremos hacer fácilmente.\\n\",\"targets\":\"map1.swarm_create(100) # Creamos un enjambre de 100 hormigas\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"1 Counter\\nA Counter is a dict subclass for counting hashable objects. It is an unordered collection where elements are stored as dictionary keys and their counts are stored as dictionary values.\\n1.1 construction\\n\",\"targets\":\"c1 = Counter()\\nc2 = Counter('gaufung')\\nc3 = Counter({'red':4,'blue':10})\\nc4 = Counter(cats=4,dogs=5)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"#I import the environmental characteristics data file\\n\\n#Same thing with the filepath here.\\npitch=pd.read_table('..\\/data\\/pitches.csv', sep=',')\\n\\n#I display my dataframe\\npitch\\n\\n# If you look at the end of your table (lines 54 to 56), the columns are shifted. \\n# You need to fix this first, especially because you want to work with the 2010-04-17 date. \\n# To fix this, first isolate the last 3 rows that you need to fix. \\n\\npitch2 = pitch.ix[54:]\\npitch2\\n\\n# Now you need to drop the first column with the NaN values\\n\\npitch3 = pitch2.drop('time', axis=1)\\npitch3\\n\\n# Now you need to rename your columns\\npitch3.columns = [['time', 'div', 'note', 'freq1', 'freq2', 'freq3', 'freq4', 'freq5', 'freq6', 'freq7', 'freq8']]\\npitch3\\n\\n# Now you can merge this fixed data back with the original data frame and delete the old rows containing this data\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nNow I know that my datetime column is read as an actually date and time value (a function of Python), and not as an object or string, as it was before performing the \\\"datetime\\\" operation.\\nNow, I will upload the pitch data so I can compare change in pitch of certain notes and change in environmental characteristics.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%bash\\nOUTDIR=gs:\\/\\/${BUCKET}\\/fashion\\/trained_${MODEL_TYPE}\\nJOBNAME=fashion_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)\\necho $OUTDIR $REGION $JOBNAME\\ngsutil -m rm -rf $OUTDIR\\ngcloud ml-engine jobs submit training $JOBNAME \\\\\\n --region=$REGION \\\\\\n --module-name=trainer.task \\\\\\n --package-path=${PWD}\\/fashionmodel\\/trainer \\\\\\n --job-dir=$OUTDIR \\\\\\n --staging-bucket=gs:\\/\\/$BUCKET \\\\\\n --scale-tier=BASIC_GPU \\\\\\n --runtime-version=$TFVERSION \\\\\\n -- \\\\\\n --output_dir=$OUTDIR \\\\\\n --train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \\\\\\n --model=$MODEL_TYPE\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nMake sure that local training completed successfully before training using Cloud ML Engine\\nNote that GPU speed up depends on the model type. You'll notice that more complex models train substantially faster on GPUs. When you are working with simple models that take just seconds to minutes to train on a single node, keep in mind that Cloud ML Engine introduces a few minutes of overhead for training job setup & teardown.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Notebooks\\/1-Your first steps .ipynb\\\".\\nThe first task is:\\nLet's break down what happened there. To do that we must go to the innermost part of the statement, the part where we are using the plus sign. We are adding the string at the first position in dir_of_root to the string in root_root. If you don't understand how that works, try doing it:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nroot_root+dir_of_root[0]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Attribute Information:\\n\\nNo: row number \\nyear: year of data in this row \\nmonth: month of data in this row \\nday: day of data in this row \\nhour: hour of data in this row \\npm2.5: PM2.5 concentration (ug\\/m^3) \\nDEWP: Dew Point (ƒ) \\nTEMP: Temperature (ƒ) \\nPRES: Pressure (hPa) \\ncbwd: Combined wind direction \\nIws: Cumulated wind speed (m\\/s) \\nIs: Cumulated hours of snow \\nIr: Cumulated hours of rain\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\npm2 = pd.read_csv('http:\\/\\/archive.ics.uci.edu\\/ml\\/machine-learning-databases\\/00381\\/PRSA_data_2010.1.1-2014.12.31.csv',\\n na_values='NA')\\npm2.columns = ['id', 'year', 'month', 'day', 'hour', 'pm2', 'dew_point', 'temperature',\\n 'pressure', 'wind_dir', 'wind_speed', 'hours_snow', 'hours_rain']\\n\\npm2.head()\\n\\npm2.info()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%matplotlib inline\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n오늘의 주요 예제 해결\\n$y = x^2$ 함수의 그래프를 그리고자 한다. \\n그래프를 그리기 위해 matplotlib.pyplot 이란 모듈을 이용한다. \\n아래 코드처럼 퍼센트 기호(%)로 시작하는 코드는 쥬피터 노트북에만 사용하는 코드이며,\\n아래 코드는 쥬피터 노트북에 그래프를 직접 나타내기 위해 사용한다.\\nspyder 등 파이썬 에디터를 사용하는 경우 필요하지 않는 코드이다.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Cap05\\/Notebooks\\/DSA-Python-Cap05-02-Objetos.ipynb\\\".\\nThe first task is:\\nObjetos\\nEm Python, tudo é objeto!\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Criando uma lista\\nlst_num = [\\\"Data\\\", \\\"Science\\\", \\\"Academy\\\", \\\"Nota\\\", 10, 10]\\n\\n# A lista lst_num é um objeto, uma instância da classe lista em Python\\ntype(lst_num)\\n\\nlst_num.count(10)\\n\\n# Usamos a função type, para verificar o tipo de um objeto\\nprint(type(10))\\nprint(type([]))\\nprint(type(()))\\nprint(type({}))\\nprint(type('a'))\\n\\n# Criando um novo tipo de objeto chamado Carro\\nclass Carro(object):\\n pass\\n\\n# Instância do Carro\\npalio = Carro()\\n\\nprint(type(palio))\\n\\n# Criando uma classe\\nclass Estudantes:\\n def __init__(self, nome, idade, nota):\\n self.nome = nome\\n self.idade = idade\\n self.nota = nota\\n\\n# Criando um objeto chamado Estudante1 a partir da classe Estudantes\\nEstudante1 = Estudantes(\\\"Pele\\\", 12, 9.5)\\n\\n# Atributo da classe Estudante, utilizado por cada objeto criado a partir desta classe\\nEstudante1.nome\\n\\n# Atributo da classe Estudante, utilizado por cada objeto criado a partir desta classe\\nEstudante1.idade\\n\\n# Atributo da classe Estudante, utilizado por cada objeto criado a partir desta classe\\nEstudante1.nota\\n\\n# Criando uma classe\\nclass Funcionarios:\\n def __init__(self, nome, salario):\\n self.nome = nome\\n self.salario = salario\\n\\n def listFunc(self):\\n print(\\\"O nome do funcionário é \\\" + self.nome + \\\" e o salário é R$\\\" + str(self.salario))\\n\\n# Criando um objeto chamado Func1 a partir da classe Funcionarios\\nFunc1 = Funcionarios(\\\"Obama\\\", 20000)\\n\\n# Usando o método da classe\\nFunc1.listFunc()\\n\\nprint(\\\"**** Usando atributos *****\\\")\\n\\nhasattr(Func1, \\\"nome\\\")\\n\\nhasattr(Func1, \\\"salario\\\")\\n\\nsetattr(Func1, \\\"salario\\\", 4500)\\n\\nhasattr(Func1, \\\"salario\\\")\\n\\ngetattr(Func1, \\\"salario\\\")\\n\\ndelattr(Func1, \\\"salario\\\")\\n\\nhasattr(Func1, \\\"salario\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"ANTLR4-Python\\/Interpreter\\/Interpreter-Matching.ipynb\\\".\\nThe first task is:\\nThe function evaluate takes two arguments:\\n- expr is a logical expression or an arithmetic expression,\\n- Values is a dictionary assigning integer values to variable names.\\nThe function evaluates the given expression and returns this value.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef evaluate(expr, Values):\\n match expr:\\n case int(expr):\\n return expr\\n case str(expr):\\n return Values[expr] \\n case ('read',):\\n return int(input('Please enter a natural number: '))\\n case ('==', lhs, rhs):\\n return evaluate(lhs, Values) == evaluate(rhs, Values)\\n case ('<', lhs, rhs):\\n return evaluate(lhs, Values) < evaluate(rhs, Values)\\n case ('+', lhs, rhs):\\n return evaluate(lhs, Values) + evaluate(rhs, Values)\\n case ('-', lhs, rhs):\\n return evaluate(lhs, Values) - evaluate(rhs, Values)\\n case ('*', lhs, rhs):\\n return evaluate(lhs, Values) * evaluate(rhs, Values)\\n case ('\\/', lhs, rhs):\\n return evaluate(lhs, Values) \\/ evaluate(rhs, Values)\\n case _:\\n assert False, f'{expr} unexpected'\\n\\n!type sum.sl\\n\\n!cat sum.sl\\n\\nmain('sum.sl')\\n\\n!type factorial.sl\\n\\n!cat factorial.sl\\n\\nmain('factorial.sl')\\n\\n!del *.py *.tokens *.interp\\n!del *.pdf\\n!del ast\\n\\n!rmdir \\/Q \\/S __pycache__\\n\\n!dir \\/B\\n\\n!rm *.py *.tokens *.interp\\n!rm ast\\n!rm -r __pycache__\\/\\n!rm *.pdf\\n\\n!ls\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Da das gewählte Element wieder eine Liste ist, können wir auch auf einzelne Element zugreifen. Den ersten Messwert des zweiten Tages erhalten wir so:\\n\",\"targets\":\"temperatures[1][0]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"!ls -lh *\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nLet's take a look at what we've got\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"05其他\\/pandas文档-zh-master\\/数据合并、连接和拼接-Merge, join, and concat.ipynb\\\".\\nThe first task is:\\n注意到结果中的索引是层次化的。\\nCan you write Python code for it?\\n\",\"targets\":\"\\nresult.ix['y'] #查看df2\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Backbone\\nDetects and fixes several problems with the backbone\\nuse any of \\n--fix_atoms All|None|Residue List \\n--fix_chain All|None|Break list\\n--add_caps All|None|Terms|Breaks|Residue list\\n--no_recheck\\n--no_check_clashes\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nst_c.backbone()\\n\\nst_c.backbone('--fix_atoms All --fix_chain none --add_caps none')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"As can be seen in the above the class object is organized here, and hence for better results, I start with randomly shuffling the data.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ntelescope_shuffle=telescope.iloc[np.random.permutation(len(telescope))]\\ntelescope_shuffle.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%load_ext rpy2.ipython \\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nLine magics\\nIPython has an rmagic extension that contains a some magic functions for working with R via rpy2. This extension can be loaded using the %load_ext magic as follows:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Create an empty init.py file required to be in the container\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport os\\n\\nwith open(os.path.join(\\\"trainer\\\", \\\"__init__.py\\\"), \\\"w\\\") as fp:\\n pass\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Day_01\\/01_Advanced_Python\\/03_LambdaFunction-Solutions.ipynb\\\".\\nThe first task is:\\nProblem 2<\\/u>\\nUse the filter function to remove all the vowels from the sentence\\nCan you write Python code for it?\\n\",\"targets\":\"\\nsentence = \\\"It's a myth that there are no words in English without vowels.\\\"\\nvowels = 'aeiou' \\n\\nresult = filter(lambda x: x not in vowels, sentence)\\nprint result\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Age Swamplot\\nSeaborn<\\/a> has some cool canned functions for examining data distributions by category. I am particuraly fond sns.swarmplot that makes plots like the one below. The plot is a sampling of the distribution of first 1000 rows by region and poster age. Strangely, there are a few posters with ages around 100 years old.\\n\",\"targets\":\"import seaborn as sns\\n%matplotlib inline\\nfrom pylab import rcParams\\nrcParams['figure.figsize'] = (7.0, 7.0)\\n\\nsns.swarmplot(y='region',x='posterage',data=df.head(1000),size=2)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"By default, TFX ExampleGen divides examples into two splits, train and\\neval, but you can\\nadjust your split configuration.\\nExamine output from StatisticsGen\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nvisualize_artifacts(stats_artifacts)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Bursts Counts\\nDexAem Counts\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nvar = 'na'\\nsize_th = 15\\nfig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)\\nplt.subplots_adjust(hspace=0.05)\\n#kws = dict(marker='o', ls='')\\nkws = dict(lw=lw)\\nvar_labels = dict(na='DexAem', nd='DexDem')\\n\\nbins = np.arange(0, 350, 5)\\nx = bins[:-1] + 0.5*(bins[1] - bins[0])\\nfor ich in range(8):\\n for i, s in enumerate(samples[:]):\\n bursts = BurstsM[s]\\n bursts = bursts.loc[bursts.ich == ich]\\n color = colors[i]\\n sizes = bursts.na + bursts.nd * gammaM\\n mask = (sizes > size_th)\\n data = bursts.loc[mask, var]\\n counts, bins = np.histogram(data, bins, normed=True)\\n if ich == 0:\\n ax[1].plot([], label=s, **kws) # empty lines for the legend\\n counts[counts == 0] = np.nan # break lines at zeros in log-scale\\n ax[1].plot(x, counts, color=color, alpha=0.5, **kws)\\n \\n if ich == 0 and 'DO' not in s:\\n bursts = BurstsA[s]\\n sizes = bursts.na + bursts.nd * gammaA\\n mask = (sizes > size_th)\\n data = bursts.loc[mask, var]\\n counts, bins = np.histogram(data, bins, normed=True)\\n counts[counts == 0] = np.nan # break lines at zeros\\n ax[0].plot(x, counts, color=color, label=label, **kws)\\n \\nplt.yscale('log')\\nplt.ylim(1e-4)\\nif var == 'na':\\n plt.xlim(0, 140)\\nax[1].legend(title='Sample')\\nfor a in ax:\\n sns.despine(ax=a)\\n #a.set_title('DexAem Burst Size Distribution')\\n a.set_xlabel('Photon Counts (%s)' % var_labels[var])\\ntitle_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)\\nax[0].set_title('μs-ALEX', **title_kw)\\nax[1].set_title('Multispot', **title_kw);\\nsavefig('%s distribution usALEX vs multispot, size_th=%d' % (var, size_th))\\nsavefig('%s distribution usALEX vs multispot, size_th=%d.svg' % (var, size_th))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Cpp\\/Cpp14\\/FileIO\\/FileIO.ipynb\\\".\\nThe first task is:\\nWhitespace separated, or \\nno header\\ncf. Manufacturing Learning Curves\\nCan you write Python code for it?\\n\",\"targets\":\"\\nManu_learn = pd.read_csv(datafilefolder+\\\"manuf_learn.dat\\\",header=None,delim_whitespace=True)\\n\\nManu_learn\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"3 Ejercicio\\nLa función farenToCentig convierte grados Farenheit en grados centígrados. \\n* Utiliza la función farenToCentig para generar la siguiente tabla de conversión:\\n0:º F = -18.0º C\\n10:º F = -13.0º C\\n20:º F = -7.0º C\\n ...\\n100:º F = 37.0º C\\n110:º F = 43.0º C\\n120:º F = 48.0º C\\nNota:\\n* Genera la lista de valores $[0, ... , 120]$ con la función range.\\n* Utiliza un bucle for para calcular los grados centígrados de cada elemento de la lista de valores.\\n\",\"targets\":\"# Sol :\\nF = list(range(0,130,10))\\ndef farenToCentig(F):\\n return (F-32)*(5\\/9) \\n\\nfor i in F:\\n cent = farenToCentig(i)\\n conversion = round(cent)\\n print( '%d ºF = %.1f ºC' % (i,conversion))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation\\nsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n### Train your model here.\\n### Calculate and report the accuracy on the training and validation set.\\n### Once a final model architecture is selected, \\n### the accuracy on the test set should be calculated and reported as well.\\n### Feel free to use as many code cells as needed.\\n\\n#Features and Labels\\nx = tf.placeholder(tf.float32, (None, 32, 32, 3))\\ny = tf.placeholder(tf.int32, (None))\\none_hot_y = tf.one_hot(y, 43)\\n\\nprint(\\\"start\\\")\\n\\n#Training Pipeline\\nrate = 0.0025 # SMCM decreased rate to .0008 from 0.001\\n\\nlogits = LeNet(x)\\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)\\nloss_operation = tf.reduce_mean(cross_entropy)\\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\\ntraining_operation = optimizer.minimize(loss_operation)\\n\\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\\nsaver = tf.train.Saver()\\n\\n#Model Evaluation\\ndef evaluate(X_data, y_data):\\n num_examples = len(X_data)\\n total_accuracy = 0\\n sess = tf.get_default_session()\\n for offset in range(0, num_examples, BATCH_SIZE):\\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\\n total_accuracy += (accuracy * len(batch_x))\\n return total_accuracy \\/ num_examples\\n\\n#Train the Model\\n\\nwith tf.Session() as sess:\\n sess.run(tf.global_variables_initializer())\\n num_examples = len(X_train)\\n \\n print(\\\"Training...\\\")\\n print()\\n for i in range(EPOCHS):\\n X_train, y_train = shuffle(X_train, y_train)\\n for offset in range(0, num_examples, BATCH_SIZE):\\n end = offset + BATCH_SIZE\\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\\n \\n \\n validation_accuracy = evaluate(norm_X_valid, y_valid)\\n print(\\\"EPOCH {} ...\\\".format(i+1))\\n ...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Get the counts of each of the unique hashs of our splitting column\\nfirst_bucketing_query = \\\"\\\"\\\"\\nSELECT\\n hash_values,\\n COUNT(*) AS num_records\\nFROM\\n ({CTE_data})\\nGROUP BY\\n hash_values\\n\\\"\\\"\\\".format(CTE_data=data_query)\\n\\ndisplay_dataframe_head_from_query(first_bucketing_query)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThe next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Create Data\\n\",\"targets\":\"# Create a list of 20 observations drawn from a random distribution \\n# with mean 1 and a standard deviation of 1.5\\nx = np.random.normal(1, 1.5, 20)\\n\\n# Create a list of 20 observations drawn from a random distribution \\n# with mean 0 and a standard deviation of 1.5\\ny = np.random.normal(0, 1.5, 20)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Before you begin\\nSet up your Google Cloud project\\nThe following steps are required, regardless of your notebook environment.\\n\\n\\nEnable the Vertex AI API and Compute Engine API. \\n\\n\\nIf you are running this notebook locally, you will need to install the Cloud SDK.\\n\\n\\nEnter your project ID in the cell below. Then run the cell to make sure the\\nCloud SDK uses the right project for all the commands in this notebook.\\n\\n\\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\\nSet your project ID\\nIf you don't know your project ID, you may be able to get your project ID using gcloud.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport os\\n\\nPROJECT_ID = \\\"qwiklabs-gcp-01-17ee7907a406\\\" # Replace your project id here \\n\\n# Get your Google Cloud project ID from gcloud\\nif not os.getenv(\\\"IS_TESTING\\\"):\\n shell_output = !gcloud config list --format 'value(core.project)' 2>\\/dev\\/null\\n PROJECT_ID = shell_output[0]\\n print(\\\"Project ID: \\\", PROJECT_ID)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# displays the first few rows of the table\\ndata.head(4)\\n\\n# Set variables for scatter plot\\nx = data.Population\\ny = data.WaterUsed\\n\\nfig = plt.figure(figsize=(15, 6))\\nplt.scatter(x,y)\\nplt.xlim(0,3000000)\\nplt.ylim(0,350)\\nplt.title('The Relationship Between Population and How Much Water a County Consumes Each Year')\\nplt.xlabel('Population (individuals)')\\nplt.ylabel('Water Used (million gallons)')\\n\\n# This actually shows the plot\\nplt.show()\\n\\n# Creates a new dataset for County\\n\\nplace = data.groupby(\\\"County\\\", as_index = False).sum()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nPre-Questions\\n\\nUsing what you've learned in this unit, answer questions 1 & 2 in your coding booklet. The Pre-Questions are listed below for you convienence.\\n\\nAccess to freshwater is a limiting factor in many ecosystems, what are some\\nother limiting factors that can effect native populations?\\nDraw a diagram that shows a lake water source. Add and label arrows for\\nways that water can be added to the lake. Add and label arrows for the ways\\nthat water can be removed from the lake. (Be sure to include human and\\nnatural \\/ non-human sources)\\n\\n\\nPART 1: Water Used by Florida Counties in 2010\\n\\nThis table displays the County name, its population, the puplic water supply for that county, and the total water used by that county.\\nUse and modify the sections of code below for Part 1 to answer questions 3-5 in your coding booklet.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Conversion factor from 758nm to 740nm is roughly 1.55 (we can provide shapes if needed)\\nplt.errorbar(dates, iowa_timeseries_oco2.mean*fac,yerr=iowa_timeseries_oco2.standard_error*fac, label='OCO-2 Mean')\\nplt.errorbar(dates, iowa_timeseries_tropomi.mean,yerr=iowa_timeseries_tropomi.standard_error, label='TROPOMI Mean')\\nplt.ylabel('SIF @740nm (W\\/m$^2$\\/sr\\/$\\\\mu$m)')\\nplt.legend(loc=0)\\nplt.title('Iowa Timeseries, +\\/-3 day running mean')\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nNow we can use the factor to better match the 2 time-series:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"2015-10_Lecture\\/Lecture2\\/code\\/2_MNIST_solution.ipynb\\\".\\nThe first task is:\\nTime to train the model\\nNow we can train our model by calling train_model(mini_batch_index). To predict labels, we can use the function predict_labels(data).\\nCan you write Python code for it?\\n\",\"targets\":\"\\nnumber_of_minibatches = len(train_x) \\/ batch_size\\nprint \\\"%d mini batches\\\" % (number_of_minibatches)\\n\\nnumber_of_epochs = 10\\nprint \\\"%d epochs\\\" % number_of_epochs\\n\\n#\\ndef compute_accurarcy(dataset_x, dataset_y): \\n predictions = predict_labels(dataset_x)\\n errors = sum(predictions != dataset_y) #Number of errors\\n accurarcy = 1 - errors\\/float(len(dataset_y))\\n return accurarcy\\n\\nfor epoch in xrange(number_of_epochs):\\n #Train the model on all mini batches\\n for idx in xrange(0, number_of_minibatches):\\n train_model(idx)\\n \\n\\n accurarcy_dev = compute_accurarcy(dev_x, dev_y)\\n accurarcy_test = compute_accurarcy(test_x, test_y)\\n\\n print \\\"%d epoch: Accurarcy on dev: %f, accurarcy on test: %f\\\" % (epoch, accurarcy_dev, accurarcy_test)\\n \\nprint \\\"DONE\\\"\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"3.4 Setting up Flow Parameters\\nRLlib and rllab experiments both generate a params.json file for each experiment run. For RLlib experiments, the parameters defining the Flow scenario and environment must be stored as well. As such, in this section we define the dictionary flow_params, which contains the variables required by the utility function make_create_env. make_create_env is a higher-order function which returns a function create_env that initializes a Gym environment corresponding to the Flow scenario specified.\\n\",\"targets\":\"# Creating flow_params. Make sure the dictionary keys are as specified. \\nflow_params = dict(\\n # name of the experiment\\n exp_tag=name,\\n # name of the flow environment the experiment is running on\\n env_name=env_name,\\n # name of the scenario class the experiment uses\\n scenario=scenario_name,\\n # simulator that is used by the experiment\\n simulator='traci',\\n # sumo-related parameters (see flow.core.params.SumoParams)\\n sim=sumo_params,\\n # environment related parameters (see flow.core.params.EnvParams)\\n env=env_params,\\n # network-related parameters (see flow.core.params.NetParams and\\n # the scenario's documentation or ADDITIONAL_NET_PARAMS component)\\n net=net_params,\\n # vehicles to be placed in the network at the start of a rollout \\n # (see flow.core.vehicles.Vehicles)\\n veh=vehicles,\\n # (optional) parameters affecting the positioning of vehicles upon \\n # initialization\\/reset (see flow.core.params.InitialConfig)\\n initial=initial_config\\n)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"IMD0104 - PROGRAMAÇÃO ORIENTADA A OBJETOS E MAPEAMENTO OBJETO-RELACIONAL\\/assignments\\/3\\/.ipynb_checkpoints\\/all-that-you-need-to-know-about-the-android-market-checkpoint.ipynb\\\".\\nThe first task is:\\nGenerally, most apps do well with an average rating of 4.17.\\nLet's break this down and inspect if we have categories which perform exceptionally good or bad.\\nApp ratings across categories - One Way Anova Test\\nCan you write Python code for it?\\n\",\"targets\":\"\\nimport scipy.stats as stats\\nf = stats.f_oneway(df.loc[df.Category == 'BUSINESS']['Rating'].dropna(), \\n df.loc[df.Category == 'FAMILY']['Rating'].dropna(),\\n df.loc[df.Category == 'GAME']['Rating'].dropna(),\\n df.loc[df.Category == 'PERSONALIZATION']['Rating'].dropna(),\\n df.loc[df.Category == 'LIFESTYLE']['Rating'].dropna(),\\n df.loc[df.Category == 'FINANCE']['Rating'].dropna(),\\n df.loc[df.Category == 'EDUCATION']['Rating'].dropna(),\\n df.loc[df.Category == 'MEDICAL']['Rating'].dropna(),\\n df.loc[df.Category == 'TOOLS']['Rating'].dropna(),\\n df.loc[df.Category == 'PRODUCTIVITY']['Rating'].dropna()\\n )\\n\\nprint(f)\\nprint('\\\\nThe p-value is extremely small, hence we reject the null hypothesis in favor of the alternate hypothesis.\\\\n')\\n#temp = df.loc[df.Category.isin(['BUSINESS', 'DATING'])]\\n\\ngroups = df.groupby('Category').filter(lambda x: len(x) > 286).reset_index()\\narray = groups['Rating'].hist(by=groups['Category'], sharex=True, figsize=(20,20))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/TextModels-checkpoint.ipynb\\\".\\nThe first task is:\\nCanned input data\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom keras.datasets import imdb\\n\\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words) # limits vocab to num_words\\n\\n?imdb.load_data\\n\\nfrom keras.preprocessing import sequence\\n\\nx_train = sequence.pad_sequences(x_train, maxlen=sequence_length, padding=\\\"post\\\", truncating=\\\"post\\\")\\nx_test = sequence.pad_sequences(x_test, maxlen=sequence_length, padding=\\\"post\\\", truncating=\\\"post\\\")\\n\\nx_train[0]\\n\\nvocabulary = imdb.get_word_index() # word to integer map\\n\\nvocabulary['good']\\n\\nlen(vocabulary)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# remove possible diuplicate files with other extension names\\n!rm -rf .\\/photograph_template_texts\\/*\\n\\ntotal_images = 0\\nOK_images = 0\\nuncategorized_images = 0\\nfaulty_images = 0\\n\\nfilenames_file = open(\\\".\\/filenames_mapping.csv\\\",\\\"w\\\")\\nfilenames_file.write(\\\"Folder|Original|Commons\\\\n\\\")\\n\\nfor row_no, row in merged.iterrows():\\n # Filename: v<\\/code>:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"BA网络\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport networkx as nx\\nimport matplotlib.pyplot as plt\\nBA= nx.random_graphs.barabasi_albert_graph(200,2) #生成n=20、m=1的BA无标度网络\\npos = nx.spring_layout(BA) #定义一个布局,此处采用了spring布局方式\\nnx.draw(BA,pos,with_labels=False,node_size = 30) #绘制图形\\nplt.show()\\n\\nplotDegreeDistribution(BA)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"ClinVar documentation\\nSubmission guidelines can be found here, in particular note the requirement:\\n\\na valid description of the variant, one of:\\n * an HGVS expression\\n * chromosome coordinates and change\\n * cytogenetic description\\n\\nAlso found the xsd for ClinVar XML submissions, which includes all possible measure types:\\nxml\\n<xs:simpleType name=\\\"Measuretype\\\">\\n <xs:restriction base=\\\"xs:string\\\">\\n <xs:enumeration value=\\\"Gene\\\"\\/>\\n <xs:enumeration value=\\\"Variation\\\"\\/>\\n <xs:enumeration value=\\\"Insertion\\\"\\/>\\n <xs:enumeration value=\\\"Mobile element insertion\\\"\\/>\\n <xs:enumeration value=\\\"Novel sequence insertion\\\"\\/>\\n <xs:enumeration value=\\\"Microsatellite\\\"\\/>\\n <xs:enumeration value=\\\"Deletion\\\"\\/>\\n <xs:enumeration value=\\\"single nucleotide variant\\\"\\/>\\n <xs:enumeration value=\\\"Multiple nucleotide variation\\\"\\/>\\n <xs:enumeration value=\\\"Indel\\\"\\/>\\n <xs:enumeration value=\\\"Duplication\\\"\\/>\\n <xs:enumeration value=\\\"Tandem duplication\\\"\\/>\\n <xs:enumeration value=\\\"copy number loss\\\"\\/>\\n <xs:enumeration value=\\\"copy number gain\\\"\\/>\\n <xs:enumeration value=\\\"protein only\\\"\\/>\\n <xs:enumeration value=\\\"Inversion\\\"\\/>\\n <xs:enumeration value=\\\"Translocation\\\"\\/>\\n <xs:enumeration value=\\\"Interchromosomal breakpoint\\\"\\/>\\n <xs:enumeration value=\\\"Intrachromosomal breakpoint\\\"\\/>\\n <xs:enumeration value=\\\"Complex\\\"\\/>\\n <\\/xs:restriction>\\n<\\/xs:simpleType>\\nCompare measure types we've actually found in the data, here.\\nFilter\\nFilter full dataset to get just a manageable (hopefully representative) sample of records with measures representing complex events\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ncomplex_xml = os.path.join(PROJECT_ROOT, 'complex-events.xml.gz')\\n\\n# get just \\\"complex events\\\"\\n# Q: what's complex? -- complex == no full coordinates\\ndef complex_measures(x):\\n if x.measure:\\n return (\\n # smattering of all non SNV variants\\n (x.measure.variant_type.lower() not in {'single nucleotide variant'} and np.random.random() < 0.01)\\n # be sure to get the rare ones\\n or (x.measure.variant_type.lower() in {'tandem duplication', 'fusion', 'complex', 'translocation', 'inversion'})\\n )\\n return False\\n\\nfilter_xml(\\n input_xml=clinvar_path,\\n output_xml=complex_xml,\\n filter_fct=complex_measures,\\n)\\n\\ndataset = ClinVarDataset(complex_xml)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Physique.ipynb\\\".\\nThe first task is:\\nThen the attributes can accessed by the column names.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nprint(lbf2N.Toconvertfrom)\\nprint(lbf2N.to)\\nprint(lbf2N.Multiplyby)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"0.21\\/_downloads\\/80342e62fc31882c2b53e38ec1ed14a6\\/plot_background_filtering.ipynb\\\".\\nThe first task is:\\nNow we have very sharp frequency suppression, but our filter rings for the\\nentire 10 seconds. So this naïve method is probably not a good way to build\\nour low-pass filter.\\nFortunately, there are multiple established methods to design FIR filters\\nbased on desired response characteristics. These include:\\n1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)\\n2. Windowed FIR design (:func:`scipy.signal.firwin2`,\\n :func:`scipy.signal.firwin`, and `MATLAB fir2`_)\\n3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)\\n4. Frequency-domain design (construct filter in Fourier\\n domain and use an :func:`IFFT <numpy.fft.ifft>` to invert it)\\n\\n
Note<\\/h4>
\\nVarianz ist ein Streumaß, das uns eine Einschätzung erlaubt, wie sehr die Daten vom Mittelwert abweichen. Offensichtlich haben die Datenreihen [6,6,6,6] und [1,11,2,10] den gleichen Mittelwert, aber eine ganz unterschiedliche Varianz. Berechnet wird die Varianz einer Population folgendermaßen:\\n$$\\nv = \\\\sum_{i=0}^n \\\\frac {(\\\\mu - x_i)^2} {n}\\n$$\\nDie Quadrierung der Werte ist notwendig, um zu vermeiden, dass sich positive und negative Werte gegenseitig aufwiegen. Allerdings erschwert die Quadrierung die Interpretation einer Varianz (Wenn wir die Varianz der Körpergröße gemessen in cm anschauen, dann haben wir einen Wert in Quadratzentimeter vor uns...), außerdem sind die Werte recht groß.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nnp.var(data)\\n\\n\\na = [6,6,6,6] \\nb = [1,11,2,10]\\nprint(\\\"Mittelwert a: \\\", np.mean(a))\\nprint(\\\"Mittelwert b: \\\", np.mean(b))\\nprint(\\\"Varianz a: \\\", np.var(a))\\nprint(\\\"Varianz b: \\\", np.var(b))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# HMC - Unscaled\\nnsample = 10000\\nm = 20\\neps = .0001\\n#theta = np.zeros(p)\\ntheta = beta_true_unscale.copy()\\nphi = 0.01\\n\\nnp.random.seed(2)\\nsamples = np.zeros((nsample, p))\\nu = np.zeros(nsample)\\nfor i in range(nsample):\\n theta = hmc(Y, X, gradU, M, eps, m, theta, C, V)\\n samples[i] = theta\\n u[i] = U(theta, Y, X)\\n \\nnp.mean(samples, axis=0) - beta_true_unscale\\n\\nplt.plot((samples - beta_true_unscale)[:,4])\\nplt.show()\\n\\nplt.plot(u)\\nplt.show()\\n\\nbeta_true_unscale\\n\\n# HMC - Scaled\\nnsample = 10000\\nm = 20\\neps = .001\\ntheta = np.zeros(p)\\n#theta = beta_true_scale.copy()\\nphi = 0.1\\n\\nnp.random.seed(2)\\nsamples = np.zeros((nsample, p))\\nu = np.zeros(nsample)\\nfor i in range(nsample):\\n theta = hmc(Y, Xs, gradU, M, eps, m, theta, C, V)\\n samples[i] = theta\\n u[i] = U(theta, Y, Xs)\\n \\nnp.mean(samples, axis=0) - beta_true_scale\\n\\nplt.plot((samples - beta_true_scale)[:,1])\\nplt.show()\\n\\nplt.plot(u)\\nplt.show()\\n\\n# HMC - Scaled (no intercept)\\nnsample = 10000\\nm = 20\\neps = .001\\ntheta = np.zeros(p-1)\\n#theta = beta_true_scale.copy()[1:]\\nphi = 1\\n\\nnp.random.seed(2)\\nsamples = np.zeros((nsample, p-1))\\nu = np.zeros(nsample)\\nfor i in range(nsample):\\n theta = hmc(Y, Xs[:,1:], gradU, np.identity(p-1), eps, m, theta, C, V)\\n samples[i] = theta\\n u[i] = U(theta, Y, Xs[:,1:])\\n \\nnp.mean(samples, axis=0) - beta_true_scale[1:]\\n\\nplt.plot((samples - beta_true_scale[1:])[:,5])\\nplt.show()\\n\\nplt.plot(u)\\nplt.show()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nOur code - HMC\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Create a separate list for each review for the businesses that show up in the business_id list. Remove all reviews that relate to the current user.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nuser_id = user_reviews_json.keys()[29]\\nrest_reviews = []\\nrest_ratings = []\\nbiz_ids = []\\nfor i in tqdm.tqdm(range(0, len(restreview.keys()))):\\n for restaurant in restreview[restreview.keys()[i]]:\\n if restaurant['user_id'] != user_id:\\n rest_reviews.append(restaurant['text'])\\n rest_ratings.append(restaurant['stars'])\\n biz_ids.append(restreview.keys()[i])\\n else:\\n pass\\nrestaurant_df = pd.DataFrame({'review_text': rest_reviews, 'rating': rest_ratings, 'biz_id': biz_ids})\\n\\n#Feature objects and functions\\nstop_words = set(stopwords.words('english'))\\n\\ndef sent_percent(review):\\n regex_words = re.compile('[a-z]+')\\n words = [x.lower() for x in review.split(' ')]\\n words = [x for x in words if regex_words.match(x)]\\n pos_count, neg_count = 0, 0\\n for word in words:\\n if word in lh_pos:\\n pos_count += 1\\n elif word in lh_neg:\\n neg_count += 1\\n return [float(pos_count)\\/float(len(words)), float(neg_count)\\/float(len(words))]\\n\\npos_vectorizer = CountVectorizer(vocabulary = lh_pos)\\nneg_vectorizer = CountVectorizer(vocabulary = lh_neg)\\nclass SentimentPercentage(BaseEstimator, TransformerMixin):\\n \\\"\\\"\\\"Takes in two lists of strings, extracts the lev distance between each string, returns list\\\"\\\"\\\"\\n\\n def __init__(self):\\n pass\\n\\n def transform(self, reviews):\\n ##Take in a list of textual reviews and return a list with two elements:\\n ##[Positive Percentage, Negative Percentage]\\n pos_vect = pos_vectorizer.transform(reviews)\\n neg_vect = neg_vectorizer.transform(reviews)\\n features = []\\n \\n for i in range(0, len(reviews)):\\n sent_percentage = []\\n sent_percentage.append(float(pos_vect[i].sum())\\/float(len(reviews[i])))\\n sent_percentage.append(float(neg_vect[i].sum())\\/float(len(reviews[i])))\\n features.append(sent_percentage)\\n \\n return np.array(features)\\n\\n def fit(self, reviews, y=None, n_grams = None):\\n ...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Create the model with the saltwater well (Simulation 2)\\n\",\"targets\":\"modelname2 = 'swiex4_s2'\\nml2 = mf.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)\\n\\ndiscret = mf.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,\\n delr=delr, delc=delc, top=botm[0], botm=botm[1:],\\n nper=nper, perlen=perlen, nstp=nstp)\\nbas = mf.ModflowBas(ml2, ibound=ibound, strt=ihead)\\nlpf = mf.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)\\nwel = mf.ModflowWel(ml2, stress_period_data=swwells_well_data)\\nghb = mf.ModflowGhb(ml2, stress_period_data=ghb_data)\\nrch = mf.ModflowRch(ml2, rech=rch_data)\\nswi = mf.ModflowSwi2(ml2, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,\\n zeta=z, ssz=ssz, isource=iso, nsolver=1,\\n adaptive=adaptive, nadptmx=nadptmx, nadptmn=nadptmn,\\n nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc)\\noc = mf.ModflowOc(ml2, stress_period_data=spd)\\npcg = mf.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb\\\".\\nThe first task is:\\nReligions in Places of Worship\\nGrouping and sorting by the occurences of the religion attribute for all amenities classified as place_of_worship or community_center gives us an indication, how prevalent religions are in our city: obviously, christian is the most prevalent here.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom Project.notebook_stub import project_coll\\nimport pprint\\n\\n# Query used - see function: Project.audit_stats_map.stats_religions(...):\\npipeline = [\\n {\\\"$match\\\": {\\\"amenity\\\":{\\\"$in\\\": [\\\"place_of_worship\\\",\\\"community_center\\\"]}}},\\n {\\\"$group\\\": {\\\"_id\\\": \\\"$religion\\\", \\\"count\\\": {\\\"$sum\\\": 1}}},\\n {\\\"$sort\\\": {\\\"count\\\": -1}}\\n ]\\nl = list(project_coll.aggregate(pipeline))\\npprint.pprint(l)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"### Luke's Rabbits\\n# Luke loves his rabbits.\\n# However, they are breeding like crazy.\\n# Currently, he has 7 rabbits. Every month, they double. So next month he will have 14.\\n# Feeding each rabbit costs $10 dollars a month.\\n#\\n# The input should be how many months Luke wants to breed his rabbits.\\n# The output should be how much month it would cost to feed all of those rabbits that month.\\n\\nprint(\\\"\\\"\\\"Every month, the number of rabbits doubles.\\n If you tell me how many months you want to breed the rabbits,\\n then I will calculate how much money it will cost in food\\\"\\\"\\\")\\nnum_months = int(input(\\\"Number of months: \\\"))\\n# since it doubles every month, then 7 * 2 is the first month, 7 * 2 * 2 is the second\\n# so, it is 7 * 2 ** num_months\\nnum_rabbits = 7 * 2 ** num_months\\nfood_cost = 10*num_rabbits\\nprint(\\\"It will cost you {} dollars to feed your rabbits that month\\\".format(food_cost))\\n\\n### Bill's Money\\n# Bill wants to know how much money he can earn from saving.\\n# He has an investment account that gives him 10% every month\\n# He wants to know how much month he will have after 1, 3, and 6 months\\n#\\n# The input should be how much money he wants to invest.\\n# The output should be the numbers for each of the 3 lengths of time.\\n\\n### Sara's Army\\n# Sara wants to hire an army.\\n# It will cost her $500 per soldier.\\n#\\n# The input should be the amount of money that Sara has\\n# The output should be the number of soldiers she can get.\\n#\\n# Note that the amount of money she has may not be exactly the amount of a soldier\\n# For example, if she has $700 dollars, she can only get 1 soldier\\n# So, you will have to do floor division.\\n# See the slides if you forgot how\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nExercise 2: Harder Formulas\\nI will provide a description of a problem.\\nIt is your job to convert that description into a formula.\\nThen, you must code that formula into an equation and code.\\nEach of the problems will require an input that I specify.\\nWrite an intro print statement explaining the situation for the problem.\\nThen write an input statement to get the relevant information.\\nFinally, use your equation to calculate the answer.\\nThen, print out the answer in a nice way.\\nI have completed the first one as an example.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"And you can find the max and min values of the array:\\n\",\"targets\":\"print('The minimum of `x` is `{0}`'.format(x.min()))\\n\\nprint('The maximum of `x` is `{0}`'.format(x.max()))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"GettingStartedCNN\\/CaffeOnDockerStable.ipynb\\\".\\nThe first task is:\\nNow we can visualize one by one as follows (Please note that the grayscale is inverse plotted<\\/span>):\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef plot_mnist_digit(image, title=None):\\n fig = plt.figure()\\n ax = fig.add_subplot(1,1,1)\\n \\n imgplot = ax.imshow(image[:,:,0], cmap=mpl.cm.Greys)\\n imgplot.set_interpolation('nearest')\\n ax.xaxis.set_ticks_position('bottom')\\n ax.yaxis.set_ticks_position('left')\\n \\n major_ticks = np.arange(0, 29, 7) \\n minor_ticks = np.arange(0, 28, 1) \\n\\n ax.set_xticks(major_ticks) \\n ax.set_xticks(minor_ticks, minor=True) \\n ax.set_yticks(major_ticks) \\n ax.set_yticks(minor_ticks, minor=True) \\n\\n# ax.grid(which='both',color='gray', linestyle='-',linewidth=0.5)\\n \\n if not title == None:\\n plt.title(title, fontsize=15) \\n plt.show()\\n \\ndigit = next(test_set)\\nlabel = digit[0]; image = digit[1]\\nplot_mnist_digit(image, \\\"LABEL: \\\" + str(label))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%%cython\\n# cython: boundscheck = False\\n# cython: wraparound = False\\nfrom cpython.datetime cimport (\\n import_datetime, datetime_new, datetime, timedelta)\\nfrom pandas import Timestamp\\n\\nimport_datetime()\\n\\ncpdef convert_arrays_ts(\\n long[:] year, long[:] month, long[:] day, \\n long long[:] out):\\n \\\"\\\"\\\" Result goes into `out` \\\"\\\"\\\"\\n cdef int i, n = year.shape[0]\\n cdef datetime dt\\n for i in range(n):\\n dt = Note<\\/h4>
Note<\\/h4>
tensor([3, 4])<\\/code>. This result is achieved by multiplying every element in
u<\\/code> with the corresponding element in the same position
v<\\/code>, which is similar to [1 * 3, 2 * 2]<\\/i>.\\n\\n\\n
Dot Product<\\/h3>\\n\\nThe dot product is a special operation for a vector that you can use in Torch.\\nHere is the dot product of the two tensors
u<\\/code> and
v<\\/code>:\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Calculate dot product of u, v\\n\\nu = torch.tensor([1, 2])\\nv = torch.tensor([3, 2])\\n\\nprint(\\\"Dot Product of u, v:\\\", torch.dot(u,v))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%%Table members\\nMEMBERID,NODEJ,NODEK\\nAB,A,B\\nBC,B,C\\nDC,D,C\\n\\n@sl.extend(Frame2D)\\nclass Frame2D:\\n \\n COLUMNS_members = ('MEMBERID','NODEJ','NODEK')\\n \\n def install_members(self):\\n table = self.get_table('members')\\n for ix,m in table.data.iterrows():\\n if m.MEMBERID in self.members:\\n raise Exception('Multiply defined member: {}'.format(m.MEMBERID))\\n memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))\\n self.members[memb.id] = memb\\n self.rawdata.members = table\\n \\n def get_member(self,id):\\n try:\\n return self.members[id]\\n except KeyError:\\n raise Exception('Member not defined: {}'.format(id))\\n\\n##test:\\nf.install_members()\\nf.members\\n\\n##test:\\nm = f.get_member('BC')\\nm.id, m.L, m.dcx, m.dcy\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nMembers\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"1. Загрузите выборку Wine\\n\",\"targets\":\"df = pd.read_csv('..\\/data\\/wine.data')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Data Loading and Management\\n\\nDefine a function to load data \\nConstruct the Bunch object for the data set by defining the paths and file names \\nLoad the features and labels from the meta data \\nLoad the read me description\\nUse Pandas to load data from the txt file\\nExtract the target from the data by indexing with column names\\nCreate a 'Bunch' object, which is a dictionary that exposes dictionary keys as properties so that you can access them with dot notation.\\n\",\"targets\":\"def load_data(root=DATA_DIR):\\n # Construct the `Bunch` for the fertility dataset\\n filenames = {\\n 'meta': os.path.join(root, 'meta.json'),\\n 'rdme': os.path.join(root, 'README.md'),\\n 'data': os.path.join(root, 'fertility_diagnosis.txt'),\\n }\\n\\n # Load the meta data from the meta json\\n with open(filenames['meta'], 'r') as f:\\n meta = json.load(f)\\n target_names = meta['target_names']\\n feature_names = meta['feature_names']\\n\\n # Load the description from the README. \\n with open(filenames['rdme'], 'r') as f:\\n DESCR = f.read()\\n\\n # Load the dataset from the text file.\\n dataset = pd.read_csv('fertility_Diagnosis.txt', delimiter=',', names=FEATURES)\\n \\n # 'diagnosis' is stored as a text value. We convert (or 'map') it into numeric binaries \\n # so it will be ready for scikit-learn.\\n dataset.diagnosis = dataset.diagnosis.map({'N': 0,'O': 1})\\n \\n # Extract the target from the data\\n data = dataset[['season_of_analysis', 'age', 'childhood_disease', 'accident_or_trauma', 'surgical_intervention',\\n 'high_fevers', 'alcohol', 'smoking', 'hours_sitting']]\\n target = dataset['diagnosis']\\n\\n # Create the bunch object\\n return Bunch(\\n data=data,\\n target=target,\\n filenames=filenames,\\n target_names=target_names,\\n feature_names=feature_names,\\n DESCR=DESCR\\n )\\n\\n# Save the dataset as a variable we can use.\\ndataset = load_data()\\n\\nprint(dataset.data.shape)\\nprint(dataset.target.shape)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/6. Community Detection-checkpoint.ipynb\\\".\\nThe first task is:\\nFinding Communities\\nTo perform the community detection algorithm, the directed graph needs to be made into an undirected one.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nPR = P.to_undirected()\\nPR = nx.Graph(PR)\\n\\nmypalette = [\\\"blue\\\",\\\"red\\\",\\\"green\\\", \\\"yellow\\\", \\\"orange\\\", \\\"violet\\\", \\\"grey\\\", \\\"grey\\\",\\\"grey\\\"]\\n\\npos = nx.spring_layout(PR)\\n#colors = [mypalette[PR.node[i]['value']] for i in range(1,len(PR.nodes()))]\\ncolors = [mypalette[PR.node[i]['value']] for i in PR.nodes()]\\nnx.draw(PR, pos, node_color=colors, node_size=10)\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Python Quick Reference\\/Strings.ipynb\\\".\\nThe first task is:\\nCase insensitive search with reg exps\\nCan you write Python code for it?\\n\",\"targets\":\"\\ntext = 'UPPER PYTHON, lower python, Mixed Python'\\nre.findall('python', text, flags=re.IGNORECASE)\\n\\n# note that case is not carriedd through in a case insensitive replace.\\nre.sub('python', 'snake', text, flags=re.IGNORECASE)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"434\\\" in embeddings\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nDigit Expansion\\nWe reduce the size of the vocabulary while training the embeddings by grouping special classes of words.\\nOnce common case of such grouping is digits.\\nEvery digit in the training corpus get replaced by the symbol #.\\nFor example, a number like 123.54 becomes ###.##.\\nTherefore, querying the embedding for a new number like 434 will result in a failure\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Associer un caractère avec son point de code de façon biunivoque.\\n\",\"targets\":\"def code_cesar(mot='Bonjour tout le monde', decalage=3):\\n \\\"\\\"\\\"César avec un décallage de 1\\\"\\\"\\\"\\n return \\\"\\\".join([chr(ord(c)+decalage) for c in mot])\\ncode_cesar()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Mining massive datasets\\/association.ipynb\\\".\\nThe first task is:\\nPrune non frequent candidate triples\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfor candidate in allCandidateTriples:\\n whatAboutIt = True\\n for pair in itertools.combinations(candidate,2):\\n if pair not in frequentPairs:\\n whatAboutIt = False\\n break\\n if whatAboutIt:\\n candidateTriples[candidate] = 0\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Advanced\\/optional solution:\\n\",\"targets\":\"# %load _solutions\\/case3_bacterial_resistance_lab_experiment6.py\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Level_03\\/Level_3.ipynb\\\".\\nThe first task is:\\nbreak, continue und else\\nDie Schlüsselwörter break, continue und else können wir innerhalb einer for-Schleife genauso benutzen, wie in einer while-Schleife.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nzauberwort = \\\"abracadabra#test\\\"\\nneuer_zauber = \\\"\\\"\\nfor zeichen in zauberwort:\\n if zeichen == \\\"#\\\":\\n break\\n elif zeichen == \\\"a\\\":\\n continue\\n else:\\n neuer_zauber += zeichen\\nprint(neuer_zauber)\\n\\nstring = \\\"Das ist ein Teststring.\\\"\\nfor zeichen in string:\\n if zeichen == \\\"Y\\\":\\n break\\nelse:\\n print(\\\"Kein 'Y' gefunden.\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Set the number of Trotter steps.\\ntrotter_steps = 10\\n\\n# Evolution under the hopping Hamiltonian for a single Trotter step.\\numat = expm(-1j * hopping_matrix * (e_time \\/ trotter_steps))\\n\\n# Simulate each Trotter step.\\ncurrent_wfn = copy.deepcopy(init_wfn)\\nfor _ in range(trotter_steps):\\n # Evolve the Hopping Hamiltonian.\\n current_wfn = evolve_fqe_givens(current_wfn, u=umat)\\n\\n # Evolve the charge-charge interaction.\\n current_wfn = evolve_fqe_charge_charge_alpha_beta(\\n current_wfn, \\n charge_charge_matrix, \\n e_time \\/ trotter_steps,\\n )\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAt this point we can simulate a specified number of Trotter steps as follows.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"1 - Baseline model: Emojifier-V1\\n1.1 - Dataset EMOJISET\\nLet's start by building a simple baseline classifier. \\nYou have a tiny dataset (X, Y) where:\\n- X contains 127 sentences (strings)\\n- Y contains a integer label between 0 and 4 corresponding to an emoji for each sentence\\n\\n
TODO! COMPLETE THIS SECTION!<\\/font><\\/h3>\\n\",\"targets\":\"# Normalize the input (xs) using its mean and standard deviation\\nxs = ...\\n\\n# Just to make sure you have normalized it correctly:\\nprint(np.min(xs), np.max(xs))\\nassert(np.min(xs) > -3.0 and np.max(xs) < 3.0)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"3. Writing custom functions (8pts)\\nComplete the following. For some of these problems, you can use your code from previous labs as a starting point. \\n(If you didn't finish those problems, feel free to use the code from the answer sheet, just make sure you understand how they work! Optionally, for extra practice you can try re-writing them using some of the new things we've learned since then.)\\n(A) (1pt) Create a function called \\\"gc\\\" that takes a single sequence as a parameter and returns the GC content of the sequence (as a 2 decimal place float).\\n(B) (1pt) Create a function called \\\"reverse_compl\\\" that takes a single sequence as a parameter and returns the reverse complement.\\n(C) (1pt) Create a function called \\\"read_fasta\\\" that takes a file name as a parameter (which is assumed to be in fasta format), puts each fasta entry into a dictionary (using the header line as a key and the sequence as a value), and then returns the dictionary.\\n(D) (2pts) Create a function called \\\"rand_seq\\\" that takes an integer length as a parameter, and then returns a random DNA sequence of that length. \\nHint: make a list of the possible nucleotides\\n(E) (2pts) Create a function called \\\"shuffle_nt\\\" that takes a single sequence as a parameter and returns a string that is a shuffled version of the sequence (i.e. the same nucleotides, but in a random order). \\nHint: Look for Python functions that will make this easier. For example, the random module has some functions for shuffling. There may also be some built-in string functions that are useful. However, you can also do this just using things we've learned.\\n(F) (1pt) Run the code below to show that all of your functions work. Try to fix any that have problems.\\n\",\"targets\":\"##### testing gc\\ngcCont = gc(\\\"ATGGGCCCAATGG\\\")\\n\\nif type(gcCont) != float:\\n print \\\">> Problem with gc: answer is not a float, it is a %s.\\\" % type(gcCont)\\nelif gcCont != 0.62:\\n print \\\">> Problem with gc: incorrect answer (should be 0.62; your code gave\\\", gcCont, \\\")\\\" \\nelse:\\n print \\\"gc: Passed.\\\"\\n\\n\\n##### testing reverse_compl\\nrevCompl = reverse_compl(\\\"GGGGTCGATGCAAATTCAAA\\\")\\n\\nif type(revCompl) != str:\\n print \\\">> Problem with reverse_compl: answer is not a string, it is a %s.\\\" % type(revCompl) \\nelif revCompl != \\\"TTTGAATTTGCATCGACCCC\\\":\\n print \\\">> Problem with reverse_compl: answer (%s) does not match expected (%s)\\\" % (revCompl, \\\"TTTGAATTTGCATCGACCCC\\\") \\nelse:\\n print \\\"reverse_compl: Passed.\\\"\\n \\n\\n##### testing read_fasta\\ntry:\\n ins = open(\\\"horrible.fasta\\\", 'r')\\nexcept IOError:\\n print \\\">> Can not test read_fasta because horrible.fasta is missing. 
Please add it to the directory with this notebook.\\\"\\nelse:\\n seqDict = read_fasta(\\\"horrible.fasta\\\")\\n \\n if type(seqDict) != dict:\\n print \\\">> Problem with read_fasta: answer is not a dictionary, it is a %s.\\\" % type(seqDict)\\n elif len(seqDict) != 22:\\n print \\\">> Problem with read_fasta: # of keys in dictionary (%s) does not match expected (%s)\\\" % (len(seqDict), 22)\\n else:\\n print \\\"read_fasta: Passed.\\\"\\n\\n\\n##### testing rand_seq\\nrandSeq1 = rand_seq(23)\\nrandSeq2 = rand_seq(23)\\n\\nif type(randSeq1) != str:\\n print \\\">> Problem with rand_seq: answer is not a string, it is a %s.\\\" % type(randSeq1)\\nelif len(randSeq1) != 23:\\n print \\\">> Problem with rand_seq: answer length (%s) does not match expected (%s).\\\" % (len(randSeq1), 23)\\nelif randSeq1 == randSeq2:\\n print \\\">> Problem with rand_seq: generated the same sequence twice (%s) -- are you sure this is random?\\\" % randSeq1\\nelse:\\n print \\\"rand_seq: Passed.\\\"\\n\\n\\n##### testing shuffle_nt\\nshuffSeq = shuffle_nt(\\\"AAAAAAGTTTCCC\\\")\\n\\nif type(shuffSeq) != str:\\n print \\\">> Problem with shuffle_nt: answer is not a string, it is a %s.\\\" % type(shuffSeq)\\nelif len(shuffSeq) != 13:\\n...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Load results and create a bundles with extracted quantities for each \\n# interaction strength.\\ntrapping_3u3d_files = [\\n glob.glob(f'{data_dir}\\/trapping_3u3d\\/{u}\\/*.json')\\n for u in [0.0, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]]\\ntrapping_3u3d_bundles = [InstanceBundle(\\n experiments=[load_experiment(file) for file in files],\\n numerics_transform=parasitic_cphase_compensation(0.138),\\n steps=range(11),\\n rescale_steps=range(11)) for files in trapping_3u3d_files]\\n\\n# Simulate the exact numerical results that are used as a reference.\\ntotal_steps = sum(len(bundle.steps) for bundle in trapping_3u3d_bundles)\\nwith tqdm(range(total_steps)) as progress:\\n def post_run(_1, _2):\\n progress.update()\\n for bundle in trapping_3u3d_bundles:\\n bundle.cache_exact_numerics(post_run_func=post_run)\\n\\n# Use shared rescaling values among compatible problem instances.\\napply_rescalings_to_bundles(find_bundles_rescalings(trapping_3u3d_bundles))\\n\\nplot_quantity(trapping_3u3d_bundles, 'post_selection', show_std_dev=True);\\n\\nplot_quantity(trapping_3u3d_bundles, 'scaling', show_std_error=True);\\n\\nplot_quantity(trapping_3u3d_bundles, 'charge_spin_density', show_std_error=True);\\n\\nplot_quantity(trapping_3u3d_bundles, 'charge_spin_spreading', show_std_error=True);\\n\\nplot_quantity(trapping_3u3d_bundles, 'charge_spin_spreading_dt', show_std_error=True);\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nTrapping Potential N=6\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Foundations\\/Python CS\\/Activity 09.ipynb\\\".\\nThe first task is:\\nExercise 09.3 (raising exceptions)\\nModify your program from the bisection exercise in Activity 04 to raise an error if the maximum number of iterations is exceeded. Reduce the maximum allowed iterations to test that an exception is raised.\\nAdd any other checks on the input data that you think are appropriate.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef f(x):\\n #return x**3 - 6*x**2 + 4*x + 12\\n return x**2 + x - 20 # Roots = -5, 4\\n\\ndef compute_root(f, x0, x1, tol, max_it):\\n \\\"\\\"\\\"Computes the root of f between x0 and x1 using bisection,\\n stops if the value of f at the root is under tol or if max_it is reached\\n and returns the root, the value of f at the root and the number of iterations\\\"\\\"\\\"\\n # If tolerance is less than 0 return an error\\n if tol < 0:\\n raise ValueError('Tolerance must be greater than or equal to 0')\\n \\n # If x0 or x1 is a root return it\\n if f(x0) == 0:\\n return x0, f(x0), 0\\n if f(x1) == 0:\\n return x1, f(x1), 0\\n \\n # If f(x0)*f(x1) the function has no solution in the interval, so return an error\\n if f(x0)*f(x1) > 0:\\n raise RuntimeError('There is no solution between x0 and x1')\\n \\n # Initialize iteration counter\\n i = 0\\n \\n while True:\\n # Increment counter\\n i += 1\\n # If max_it is passed return an error\\n if i > max_it:\\n raise RuntimeError('Maximum number of iterations exceeded')\\n \\n # Compute x_mid\\n x_mid = (x0 + x1) \\/ 2\\n\\n # Compute f for the three values\\n f_0, f_1, f_mid = f(x0), f(x1), f(x_mid)\\n\\n # Check the value of f_0*f_mid to determine how to update the endpoints\\n if f_0*f_mid < 0:\\n x1 = x_mid\\n else:\\n x0 = x_mid\\n \\n # Check if f is under tol\\n if abs(f_mid) < tol:\\n return x_mid, f_mid, i\\n\\n # We don't need another return statement because if we pass max_it we return an error\\n\\n# Test for the function f\\nx, f_x, num_it = compute_root(f, x0=3, x1=6, tol=1.0e-6, max_it=1000) # Ok\\n\\nprint('Approximate root:', x)\\nprint('Value of f:', f_x)\\nprint('Number of iterations:', num_it)\\n\\nprint('-----------------------------------------------------------')\\n\\nx, f_x, num_it = compute_root(f, x0=3, x1=6, tol=1.0e-6, max_it=10) # Maximum iterations exceeded\\n\\nx, f_x, num_it = compute_root(f, x0=3, x1=6, tol=-5, max_it=1000) #...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Advanced Features\\nGet optional layout parameters\\n\",\"targets\":\"res = requests.get(BASE + 'apply\\/layouts\\/force-directed')\\njp(res.json())\\n\\nparams= [\\n {\\n 'name': 'defaultNodeMass',\\n 'value': 10\\n },\\n {\\n 'name': 'defaultSpringLength',\\n 'value': 100\\n },\\n {\\n 'name': 'isDeterministic',\\n 'value': True\\n }\\n]\\nres = requests.put(BASE + 'apply\\/layouts\\/force-directed', data=json.dumps(params), headers=HEADERS)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Load the FAA N-Number inquiry records\\nfaa_tail_number_inquiry = spark.read.json('..\\/data\\/faa_tail_number_inquiry.jsonl')\\nfaa_tail_number_inquiry.show()\\n\\n# Count the records\\nfaa_tail_number_inquiry.count()\\n\\n# Load our unique tail numbers\\nunique_tail_numbers = spark.read.json('..\\/data\\/tail_numbers.jsonl')\\nunique_tail_numbers.show()\\n\\n# Join tail numbers to our inquries\\ntail_num_plus_inquiry = unique_tail_numbers.join(\\n faa_tail_number_inquiry,\\n unique_tail_numbers.TailNum == faa_tail_number_inquiry.TailNum,\\n)\\ntail_num_plus_inquiry = tail_num_plus_inquiry.drop(unique_tail_numbers.TailNum)\\ntail_num_plus_inquiry.show()\\n\\n# Dump extra field and store tail_numbers plus inquiry\\ntail_num_plus_inquiry.createOrReplaceTempView(\\\"tail_num_plus_inquiry\\\")\\nairplanes = spark.sql(\\\"\\\"\\\"SELECT\\n TailNum AS TailNum,\\n engine_manufacturer AS EngineManufacturer,\\n engine_model AS EngineModel,\\n manufacturer AS Manufacturer,\\n mfr_year AS ManufacturerYear,\\n model AS Model,\\n owner AS Owner,\\n owner_state AS OwnerState,\\n serial_number AS SerialNumber\\nFROM\\n tail_num_plus_inquiry\\\"\\\"\\\")\\n\\nairplanes.repartition(1).write.mode('overwrite').json('..\\/data\\/airplanes.json')\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nInvestigating Airplanes (Entities)\\nIn Chapter 5, we were just getting into some interesting data that we will now look at once again. To begin, let’s store our enriched airplanes in their own intermediate dataset. Check out ch06\\/prepare_airplanes.py:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Правильно\\nif a is None:\\n pass\\n\\n# Неверно\\nif a == None:\\n pass\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nСравнение с None\\nСравнение с None должно осуществляться с помощью операторов is и is not, а не операторами сравнения.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Construction de tous les niveaux de l'arbre\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef tree_recursive(level, seg, angle=pi\\/6, re=0.6, rl=0.95):\\n all_segs = [seg]\\n iter_segs = [seg]\\n for i in range(level):\\n new_segs = []\\n for seg in iter_segs:\\n new_segs.extend(tree3(seg, angle=angle, re=re, rl=rl))\\n all_segs.extend(new_segs)\\n iter_segs = new_segs\\n return all_segs\\n\\npremier = segment()\\nsegs = tree_recursive(3, premier)\\ndraw(segs, L=3);\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"03_problems.ipynb\\\".\\nThe first task is:\\nSolution:\\nUsing list comprehension\\nCan you write Python code for it?\\n\",\"targets\":\"\\n[(i0, i1) for i0 in l0 for i1 in l1]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Bidir\\nA second things that might help is to use a bidirectional model for the encoder.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nclass Seq2SeqQRNN(nn.Module):\\n def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25, \\n p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):\\n super().__init__()\\n self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx\\n self.emb_enc = emb_enc\\n self.emb_enc_drop = nn.Dropout(p_inp)\\n self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)\\n self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)\\n self.hid_dp = nn.Dropout(p_hid)\\n self.emb_dec = emb_dec\\n self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)\\n self.out_drop = nn.Dropout(p_out)\\n self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))\\n self.out.weight.data = self.emb_dec.weight.data\\n self.pr_force = 0.\\n \\n def forward(self, inp, targ=None):\\n bs,sl = inp.size()\\n hid = self.initHidden(bs)\\n emb = self.emb_enc_drop(self.emb_enc(inp))\\n enc_out, hid = self.encoder(emb, hid)\\n \\n hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()\\n hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))\\n\\n dec_inp = inp.new_zeros(bs).long() + self.bos_idx\\n res = []\\n for i in range(self.max_len):\\n emb = self.emb_dec(dec_inp).unsqueeze(1)\\n outp, hid = self.decoder(emb, hid)\\n outp = self.out(self.out_drop(outp[:,0]))\\n res.append(outp)\\n dec_inp = outp.data.max(1)[1]\\n if (dec_inp==self.pad_idx).all(): break\\n if (targ is not None) and (random.random()
iii)<\\/h3>\\nThe points here are near the visible maxes on the 2D contour plot that have not been found yet\\n\",\"targets\":\"guess.append( [ 1.9, 0.75 ] )\\na.append(optimize.fmin(negative_posterior, guess[2], args=(x_data, y_data, sigma_meas, p_err)))\\nprint(a[2])\\ny.append(a[2][0]*numpy.ones(len(x_data)) + a[2][1]*x_data)\\n\\nguess[3] = [ 11.0, -0.6 ]\\na.append( optimize.fmin(negative_posterior, guess[3], args=(x_data, y_data, sigma_meas, p_err)) )\\nprint(a[3])\\ny.append( a[3][0]*numpy.ones(len(x_data)) + a[3][1]*x_data )\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nsns.displot(data=Clustered_Users, x=\\\"Final Score\\\", hue=\\\"Is CrSk\\\", multiple=\\\"stack\\\")\\n\\nsns.displot(data=Clustered_Users, x=\\\"Total Homeworks\\\", hue=\\\"Is CrSk\\\", multiple=\\\"stack\\\")\\n\\nsns.displot(\\n data=Clustered_Users, x=\\\"Final Score\\\", hue=\\\"Is CrSk\\\", multiple=\\\"stack\\\", kind=\\\"kde\\\"\\n)\\n\\nsns.displot(\\n data=Clustered_Users, x=\\\"Total Exams\\\", hue=\\\"Is CrSk\\\", multiple=\\\"stack\\\", kind=\\\"kde\\\"\\n)\\n\\nsns.displot(\\n data=Clustered_Users, x=\\\"Letter Grade\\\", stat=\\\"percent\\\", hue=\\\"Is CrSk\\\", multiple=\\\"stack\\\"\\n)\\n\\nsns.displot(data=Clustered_Users, x=\\\"% of CrSk Sessions\\\", hue=\\\"A or Not\\\", multiple=\\\"stack\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Creating recommendations for your personal ratings\\n\\nTry with different similarity metrics (look in \\/src\\/similarity.py)\\nTry with different values of K (K is the number of neigbhours to consider when generating the recommendations)\\nWhich combination of K and number of metrics works better?, discuss it with others.\\n\",\"targets\":\"# get recommendations for a single user\\nrecommendations = recommenders.recommend_uknn(ratings, my_customer_number, K=200, similarity_metric='cosine', N=10)\\nrecommendations\\n\\n# get recommendations for a single user\\nrecommendations = recommenders.recommend_iknn(ratings, my_customer_number, K=100, similarity_metric='cosine')\\nrecommendations\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Example3DvisualizationPyNoddyAndCSV2History.ipynb\\\".\\nThe first task is:\\nCreate a history file from fault traces\\nCan you write Python code for it?\\n\",\"targets\":\"\\nimport pandas as pd\\nimport pynoddy.history\\n#Read a csv file with the vertices of the faults\\n#see notes in the bottom of the notebook for instructions on how to generate such vertices files\\ncsvfile = 'examples\\/FaultDataCSV\\/Scenario1_Vertices.csv'\\nCsvFaultData = pd.read_csv(csvfile).sort_values(['id'])\\n\\n#how much does the fault slip relative to the fault length\\nSlipParam = 0.04\\n\\n#the xyz origin of the model you will be generating\\nxy_origin=[317883,4379646, 1200-4000]\\n\\n#Get information about each parameter in Noddy format\\n#The output from the function is a dictionary with lists of the fault parameters\\nnoddyFormattedFaultData = pynoddy.history.setUpFaultRepresentation(CsvFaultData,\\n xy_origin=xy_origin, \\n SlipParam=SlipParam)\\n\\n#Create a dictionary with the stratigraphy information\\nStratDict = {}\\nStratDict['Heights'] = [2000, 2500, 3000, 3700]\\nStratDict['Names'] = ['Intrusive', 'Felsic', 'Mafic','Sed'] \\nStratDict['Density'] = [2.65, 2.5, 2.4, 2.3] \\nStratDict['MagSus'] = [0.0015, 0.0012, 0.0018, 0.001]\\n\\n#Now make the history file\\nfilename = 'sandbox\\/faultmodel.his'\\nnoddyFormattedFaultData = pynoddy.history.createPyNoddyHistoryFile(noddyFormattedFaultData, StratDict, filename=filename)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"2017-02-15-jm-predict-employee-leaving.ipynb\\\".\\nThe first task is:\\nNow lets create our first hidden layer. Typically your hidden layers will use a Rectified Linear Unit(Relu) activation function in these cases, but in our case we will use an Exponential Linear Unit(Elu) activation function for its nice properties of reducing the bias shift effect on our network to have faster learning than Relu and for the fact that it acts like batch normalization without the computational complexity. We will also add the caveat of initalizing our weights and biases with a standard deviation of 0.01.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nw_1 = tf.Variable(tf.truncated_normal([num_features, 10], stddev=0.01))\\nb_1 = tf.Variable(tf.truncated_normal([10], stddev=0.01))\\n\\nlayer_1 = tf.nn.elu(tf.add(tf.matmul(X_init, w_1), b_1))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%%R\\n\\ndat = results %>%\\n filter(scale==0.001, coverage<=30) %>%\\n select(rho, metric, coverage)\\n \\ndat$coverage = as.factor(dat$coverage)\\nggplot(dat, aes(x=coverage, y=rho, fill=metric)) +\\n geom_boxplot(aes(fill=metric))\\n \\n\\n%%R\\n\\n# AND AGAIN WITHOUT SUBSETTING\\ndat = results %>%\\n filter(scale==0.001) %>%\\n select(rho, metric, coverage)\\n \\ndat$coverage = as.factor(dat$coverage)\\nggplot(dat, aes(x=coverage, y=rho, fill=metric)) +\\n geom_boxplot(aes(fill=metric)) +\\n theme_bw()\\n \\n\\n%%R\\n\\ndat = subset(results, scale==0.001, select=-scale)\\nggplot(dat, aes(x=coverage, y=rho, colour=seed, linetype=metric)) +\\n geom_line() +\\n scale_x_log10()\\n\\n%%R\\nsumm = results %>%\\n filter(scale==0.001, coverage <=100) %>%\\n select(-scale) %>%\\n group_by(coverage, metric) %>%\\n summarise(rho_av=mean(rho), rho_err=sd(rho))\\n \\np = ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +\\n geom_line(aes(linetype=metric)) +\\n geom_ribbon(aes(fill=metric), alpha=0.2) +\\n xlab('Genome Coverage') +\\n ylab(expression(paste(\\\"Spearman's \\\", rho, \\\" +- SD\\\"))) +\\n #scale_x_log10()+\\n #ggtitle(\\\"Performance of WIP & IP\\\") +\\n theme_bw()\\n\\npdf(\\\"coverage-vs-rho_full.pdf\\\",width=7, height=4)\\nprint(p)\\ndev.off()\\np\\n\\n%%R\\nsumm = results %>%\\n filter(scale==0.001, coverage <= 50) %>%\\n select(-scale) %>%\\n group_by(coverage, metric) %>%\\n summarise(rho_av=mean(rho), rho_err=sd(rho))\\n \\np = ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +\\n geom_line(aes(linetype=metric)) +\\n geom_ribbon(aes(fill=metric), alpha=0.2) +\\n xlab('Genome Coverage') +\\n ylab(expression(paste(\\\"Spearman's \\\", rho, \\\" +- SD\\\"))) +\\n #scale_x_log10()+\\n #ggtitle(\\\"Performance of WIP & IP\\\") +\\n theme_bw()\\n\\npdf(\\\"coverage-vs-rho_50x.pdf\\\",width=5, height=4)\\nprint(p)\\ndev.off()\\np\\n\\n%%R\\nsem <- function(x) sqrt(var(x,na.rm=TRUE)\\/length(na.omit(x)))\\nsumm = results %>%\\n filter(scale==0.001) %>%\\n select(-scale) %>%\\n ...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nEffect of Coverage\\nHere we show the spread of data across the 100 reps as boxplots per metric and covreage level.\\nI note that the weighted product seems slightly more variable, particularly at higher coverage. Though the median is nearly always higher\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Use subplots to generate all axes objects\\nfig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15,6))\\n# These can then be passed to multiplot, or any other obj.plot() method with the 'axes' keyword\\nnorm_cdf.label = 'Normal CDF'\\nfig = st.multiplot([brfss_weights_cdf, norm_cdf], \\n plt_kwds={'linewidth':1.5, 'title': 'Normal CDF Comparison', 'xlabel':'Weight kg'}, \\n axes=ax1)\\nnorm_cdf.label = 'Lognormal CDF'\\nfig = st.multiplot([brfss_weights_cdf, norm_cdf], \\n plt_kwds={'linewidth':1.5, 'xscale': 'log', \\n 'title': 'Lognormal CDF Comparison', 'xlabel':'log(Weight kg)'}, \\n axes=ax2)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nFrom comparing the standad cdfs, difference is not so obvious, though if x is now plotted with a log scale on the x axis, the normal model fits slightly better\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Solution\\n\\nlast_pops = run_many_simulations(system, update_func2, 1000)\\nnet_changes = last_pops - p_0\\nnet_changes.describe()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nUse run_many_simulations to collect the results and describe to summarize the distribution of net changes.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Change the order that the colors are chosen\\nChange orientation to horizontal\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\np = sns.violinplot(data=df,\\n y = 'Category',\\n x = 'Duration',\\n order = sorted(df.Category.unique()),\\n orient=\\\"h\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Cost of Householder factorization\\nThe dominant cost comes from the line\\nPython\\n R[i:,i:] -= 2 * numpy.outer(v, v.dot(R[i:,i:]))\\nwere R[i:,i:] is an $(m-i)\\\\times(n-i)$ matrix.\\nThis line performs $2(m-i)(n-i)$ operations in v.dot(R[i:,i:]), another $(m-i)(n-i)$ in the \\\"outer\\\" product and again in subtraction. As written, multiplication by 2 would be another $(m-i)(n-i)$ operations, but is only $m-i$ operations if we rewrite as\\nPython\\n w = 2*v\\n R[i:,i:] -= numpy.outer(w, v.dot(R[i:,i:]))\\nin which case the leading order cost is $4(m-i)(n-i)$. To compute the total cost, we need to sum over all columns $i$,\\n$$\\\\begin{split} \\\\sum_{i=1}^n 4(m-i)(n-i) &= 4 \\\\Big[ \\\\sum_{i=1}^n (m-n)(n-i) + \\\\sum_{i=1}^n (n-i)^2 \\\\Big] \\\\\\n&= 4 (m-n) \\\\sum_{i=1}^n i + 4 \\\\sum_{i=1}^n i^2 \\\\\\n&\\\\approx 2 (m-n) n^2 + 4 n^3\\/3 \\\\\\n&= 2 m n^2 - \\\\frac 2 3 n^3 .\\n\\\\end{split}$$\\nRecall that Gram-Schmidt QR cost $2 m n^2$, so Householder costs about the same when $m \\\\gg n$ and is markedly less expensive when $m \\\\approx n$.\\nLeast squares and the normal equations\\nA least squares problem takes the form: given an $m\\\\times n$ matrix $A$ ($m \\\\ge n$), find $x$ such that\\n$$ \\\\lVert Ax - b \\\\rVert $$\\nis minimized. If $A$ is square and full rank, then this minimizer will satisfy $A x - b = 0$, but that is not the case in general because $b$ is not in the range of $A$.\\nThe residual $A x - b$ must be orthogonal to the range of $A$.\\n\\nIs this the same as saying $A^T (A x - b) = 0$?\\nIf $QR = A$, is it the same as $Q^T (A x - b) = 0$?\\n\\nIn HW2, we showed that $QQ^T$ is an orthogonal projector onto the range of $Q$. If $QR = A$,\\n$$ QQ^T (A x - b) = QQ^T(Q R x - b) = Q (Q^T Q) R x - QQ^T b = QR x - QQ^T b = A x - QQ^T b . $$\\nSo if $b$ is in the range of $A$, we can solve $A x = b$. If not, we need only orthogonally project $b$ into the range of $A$.\\nSolution by QR (Householder)\\nSolve $R x = Q^T b$.\\n\\nQR factorization costs $2 m n^2 - \\\\frac 2 3 n^3$ operations and is done once per matrix $A$.\\nComputing $Q^T b$ costs $4 (m-n)n + 2 n^2 = 4 mn - 2n^2$ (using the...\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Test accuracy of solver for an ill-conditioned square matrix\\n\\nx = numpy.linspace(-1,1,19)\\nA = numpy.vander(x)\\nprint('cond(A) = ',numpy.linalg.cond(A))\\nQ, R = numpy.linalg.qr(A)\\nprint('cond(R^{-1} Q^T A) =', numpy.linalg.cond(numpy.linalg.solve(R, Q.T.dot(A))))\\nL = numpy.linalg.cholesky(A.T.dot(A))\\nprint('cond(L^{-T} L^{-1} A^T A) =', numpy.linalg.cond(numpy.linalg.solve(L.T, numpy.linalg.solve(L, A.T.dot(A)))))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Notebooks\\/Sistema_Binario-Evolucion_Temporal_y_observaciones_HyT.ipynb\\\".\\nThe first task is:\\nAhora graficamos $a$ en términos de $e$, tanto para la solución analítica como numérica\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfig,eje= plt.subplots(1,1,figsize=(5,5))\\neje.plot(e,at*R_ast, label=u'sol. numérica')\\nee = np.linspace(min(e),e0,10)\\na_an = a0_m*g(ee)\\/g(e0)\\neje.plot(ee,a_an,'o',label=u'sol. analítica')\\neje.set_title(u'Semieje mayor v\\/s excentricidad',fontsize=14)\\neje.set_xlabel(r'$e$',fontsize=15)\\neje.set_ylabel(r'$a$',fontsize=15)\\nplt.legend(loc='best')\\nplt.grid()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Aperture location sanity check by visual inspection\\nArbitrarily choose the first G FITS file\\n\",\"targets\":\"fits_file = fits_files[5]\\nprint(fits_file)\\nhdus = fits.open(os.path.join(fits_root, fits_file))\\nimage_data = hdus[data_index].data\\n\\nmedian = np.median(image_data)\\nshow_image(image_data, position_map, measurement_aperture, annotate=True, vmin=10, vmax=median*4)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"But since myHDL implements 2's complement, we need to test for negative numbers where the leading bit is the signed signal\\n\",\"targets\":\"TestNegNum=-26\\nprint(f\\\"\\\"\\\"Target: {TestNegNum}\\nAbsolote Bin: {bin(abs(TestNegNum), 8)}, \\nSigned Bin: {bin(TestNegNum, 8)}\\\"\\\"\\\")\\n\\nTestNegNumBV=intbv(TestNegNum)[8:]\\nTestNegNumBV, TestNegNumBV.signed()\\n\\nR=-R; I=-I\\nprint(f'Re: {R}, Im: {I}')\\n\\nRN=intbv(R, min=ReMin, max=ReMax); RN\\nRNBin=''.join([str(int(i)) for i in RN])\\nRN.signed(), bin(R, ReWordLen), RNBin, bin(R, ReWordLen)==RNBin\\n\\nIN=intbv(I, min=ImMin, max=ImMax); IN\\nINBin=''.join([str(int(i)) for i in IN])\\nIN.signed(), bin(I, ImWordLen), INBin, bin(I, ImWordLen)==INBin\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Produce learning curves for varying training set sizes and maximum depths\\nvs.ModelLearning(features, prices)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nBenefit of splitting the data set into Training and Testing sets\\nWithout testing our model, we would not know if the model is suffering from high bias or high variance, ie, we won't know if the model is very simple or very complex. By very simple, I mean, having fewer features than needed or having fewer nonlinear terms. By very complex or high variance, I mean having many features than needed or having many nonlinear terms in the model. Without having test data, we will go on making the model more and more complex just to decrease the training error. \\n\\nAnalyzing Model Performance\\nLearning Curves\\nThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded reigon of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R2<\\/sup>, the coefficient of determination.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"0.14\\/_downloads\\/plot_introduction.ipynb\\\".\\nThe first task is:\\nLook at the channels in raw:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nprint(raw.ch_names)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Datasets\\nnoisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()\\n\\ndatasets = {\\\"noisy_circles\\\": noisy_circles,\\n \\\"noisy_moons\\\": noisy_moons,\\n \\\"blobs\\\": blobs,\\n \\\"gaussian_quantiles\\\": gaussian_quantiles}\\n\\n### START CODE HERE ### (choose your dataset)\\ndataset = \\\"gaussian_quantiles\\\"\\n### END CODE HERE ###\\n\\nX, Y = datasets[dataset]\\nX, Y = X.T, Y.reshape(1, Y.shape[0])\\n\\n# make blobs binary\\nif dataset == \\\"blobs\\\":\\n Y = Y%2\\n\\n# Visualize the data\\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nInterpretation:\\n- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. \\n- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fits the data well without also incurring noticable overfitting.\\n- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. \\nOptional questions:\\nNote: Remember to submit the assignment but clicking the blue \\\"Submit Assignment\\\" button at the upper-right. \\nSome optional\\/ungraded questions that you can explore if you wish: \\n- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?\\n- Play with the learning_rate. What happens?\\n- What if we change the dataset? (See part 5 below!)\\n\\nYou've learnt to:\\n- Build a complete neural network with a hidden layer\\n- Make a good use of a non-linear unit\\n- Implemented forward propagation and backpropagation, and trained a neural network\\n- See the impact of varying the hidden layer size, including overfitting.\\nNice work! \\n5) Performance on other datasets\\nIf you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"And let's give our model a descriptive title.\\n\",\"targets\":\"m.title = \\\"MTC Example 1 (Simple MNL)\\\"\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Computer Number Systems\\nDecimal to Binary\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef DecimialtoBaseN(dec, base):\\n quotant=[dec]; remander=[]\\n while True:\\n q, r=np.divmod(quotant[-1], base)\\n quotant.append(q); remander.append(r)\\n if q==0: break\\n \\n TwosProd=[]\\n for i in range(len(remander)):\\n TwosProd.append(remander[i]*base**i)\\n return quotant, remander, TwosProd[::-1], np.sum(TwosProd)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Create the feature columns\\nNext, we define the feature columns\\nExercise 3\\nThere are different ways to set up the feature columns for our model. \\nIn the first TODO below, you are asked to create a function get_categorical which takes a feature name and its potential values and returns an indicator tf.feature_column based on a categorical with vocabulary list column. Look back at the documentation for tf.feature_column.indicator_column to ensure you call the arguments correctly.\\nIn the next TODO, you are asked to complete the code to create a function called get_cols. It has no argumnets but should return a list of all the tf.feature_columns you intend to use for your model. Hint: use the get_categorical function you created above to make your code easier to read.\\n\",\"targets\":\"def get_categorical(name, values):\\n return # TODO: Your code goes here\\n\\ndef get_cols():\\n # Define column types\\n return # TODO: Your code goes here\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Comparing with Day of Week\\n\",\"targets\":\"dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek\\n\\nplt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')\\nplt.colorbar();\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\".... Wait, only 9? Out of the total number of battles Robb Stark fought, he was successful as a attacker but not great on the defensive. Overall, winning 9 out of 24 battles is really not that impressive.\\nPerhaps 'The Young Wolf' wasn't as impressive as we thought...\\nTry answering some more questions:\\n\\nWhat was the average size of Robb Stark's armies against those defending against him?\\nHow did the Lanninster\\/Baratheons fare in the War of the Five Kings? \\nWhich king had the highest winning percentages? (Requires some light statistics...)\\nWho was the most effective commander (there are several to choose from)?\\n\\nTry some other methods as well in Pandas:\\n\\n.mean() - gives you the average of some value (you have to designate the key-value in some cases)\\n.median() - returns the median value of an object\\n.min() - gives you the lowest value in that array\\n.fillna(0.0).astype(int) - this is a way to get rid of all the float objects in your dataset. \\n.describe() - gives you an overview of the object's data, according to counts, unique values, and data types\\n\\nNow that you have a light understanding of how data analysis is done, let's create some visualizations!\\nCreating Data Visualizations in Python\\nRelying a lot on Matplotlib here, data visualizations allow us to better communicate and understand the information we're able to create through our analyses.\\nLet's try to do a few based on the questions we've already resolved so far. Let's create some bar graphs.\\nFirst, let's create a new object robb_off_viz that measures what's going on in our robb_off object, using two more methods:\\n* .groupby() - calculating the unique values in a particular key\\n* .len() - measuring them by their \\\"length\\\" or number of rows\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nrobb_off_viz = robb_off.groupby('attacker_outcome').apply(len)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# The function for training the model\\n\\nLOSS_SGD = []\\nw = torch.tensor(-15.0, requires_grad = True)\\nb = torch.tensor(-10.0, requires_grad = True)\\n\\ndef train_model_SGD(iter):\\n \\n # Loop\\n for epoch in range(iter):\\n \\n # SGD is an approximation of out true total loss\\/cost, in this line of code we calculate our true loss\\/cost and store it\\n Yhat = forward(X)\\n\\n # store the loss \\n LOSS_SGD.append(criterion(Yhat, Y).tolist())\\n \\n for x, y in zip(X, Y):\\n \\n # make a pridiction\\n yhat = forward(x)\\n \\n # calculate the loss \\n loss = criterion(yhat, y)\\n\\n # Section for plotting\\n get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())\\n \\n # backward pass: compute gradient of the loss with respect to all the learnable parameters\\n loss.backward()\\n \\n # update parameters slope and bias\\n w.data = w.data - lr * w.grad.data\\n b.data = b.data - lr * b.grad.data\\n\\n # zero the gradients before running the backward pass\\n w.grad.data.zero_()\\n b.grad.data.zero_()\\n \\n #plot surface and data space after each epoch \\n get_surface.plot_ps()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nDefine
train_model_SGD<\\/code> function for training the model.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Chapter 5 - Functions and Files.ipynb\\\".\\nThe first task is:\\nWe can also have a function return multiple values by combining them in a tuple (the type of ordered, immutable list we talked about in the previous chapter!). Python has a nice way of 'unpacking' such a tuple using assignment:\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef count(text):\\n words = text.split(\\\" \\\")\\n word_count = len(words)\\n character_count = len(text)\\n return word_count, character_count\\n\\nword_count, character_count = count(\\\"To be or not to be , that is the question .\\\")\\nprint(word_count)\\nprint(character_count)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Create a Cloud Storage bucket\\nThe following steps are required, regardless of your notebook environment.\\nSet the name of your Cloud Storage bucket below. It must be unique across all\\nCloud Storage buckets.\\nYou may also change the REGION variable, which is used for operations\\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\\navailable. You may\\nnot use a Multi-Regional Storage bucket for training with Vertex AI.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nBUCKET_URI = \\\"gs:\\/\\/[your-bucket-name]\\\" # @param {type:\\\"string\\\"}\\nREGION = \\\"[your-region]\\\" # @param {type:\\\"string\\\"}\\n\\nif BUCKET_URI == \\\"\\\" or BUCKET_URI is None or BUCKET_URI == \\\"gs:\\/\\/[your-bucket-name]\\\":\\n BUCKET_URI = \\\"gs:\\/\\/\\\" + PROJECT_ID + \\\"aip-\\\" + TIMESTAMP\\n\\nif REGION == \\\"[your-region]\\\":\\n REGION = \\\"us-central1\\\"\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"!pncaqsraw4pnceval.py -O --timeresolution=daily \\\\\\n --start-date 2013-05-01 --end-date 2013-07-01 \\\\\\n --wktpolygon \\\"POLYGON ((-181.25 0, 178.75 0, 178.75 90, -181.25 90, -181.25 0))\\\"\\n\\n%ls -l AQS_DATA_20130501-20130701.nc\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nCHECK POINT\\nWhat should the bounding box be as a WKT Polygon?\\nANSWERS Hidden\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\\n
\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"CPU\\n\\nNote: the warning below doesn't mean you are using GPU. In fact it's using CPU here since I'm tracking the Windows Task Manager's \\\"Performance\\\".\\nTo track whether there is really GPU in use, need to make sure tf.config.experimental.list_physical_devices('GPU') has at least 1 GPU available.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Stacked LSTMs\\ndevice_name = '\\/cpu:0'\\nwith tf.device(device_name):\\n model_name = 'stacking_lstm_model_tanh_cpu'\\n model = stack_models(X_train, device_name)\\n history = fit_diy_model(model, X_train, y_train, X_val, y_val, model_name)\\n evaluate_diy_model(model_name, history, X_val, df_val)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Data_Analytics_in_Action\\/numpy.ipynb\\\".\\nThe first task is:\\n不形成的数组,直接修改数组的shape属性\\nCan you write Python code for it?\\n\",\"targets\":\"\\na.shape=(4,3)\\na\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Calculate Stats for Multiple Fields\\nUp to this point, we have focused on stats of a single field. Now we get into the meat of things, calculating stats across multiple fields and grouping by crop type (specified by the subclass property).\\nDetermine List of Sample Features\\nThere are just too many categorized crop fields to calculate statistics on them all in a reasonable time using just one CPU. Therefore, we will create a list of sample features, features that equally represent the crop types.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# determine the subclasses in this set and counts\\nsubclasses_list = [field_geojson['properties']['SUBCLASS1']\\n for field_geojson in cat_crop_ground_truth]\\nsubclasses = dict([x, subclasses_list.count(x)]\\n for x in set(subclasses_list))\\nprint('subclasses and counts')\\nprint(json.dumps(subclasses, indent=4))\\n\\n# number of samples for each subclass\\nnum_samples = 5\\n\\n# filter the subclasses to those with adequate number of features\\nfilt_subclasses = [subclass\\n for (subclass, count) in subclasses.items()\\n if count > num_samples]\\nprint('filtered subclasses: {}'.format(filt_subclasses))\\n\\n# lets focus on only 3 subclasses for now, comment to use all subclasses\\nfilt_subclasses = filt_subclasses[:3]\\nprint('filtered subclasses: {}'.format(filt_subclasses))\\n\\n# create a list of sample features\\n# first filter to features within a subclass, then randomly pick a sample of those features\\n\\nnp.random.seed(0) # make random sampling repeatable\\n\\nsample_features = []\\nfor subclass in filt_subclasses:\\n subclass_features = [f for f in crop_ground_truth if get_subclass(f) == subclass]\\n sample_features.extend(np.random.choice(subclass_features, num_samples, replace=False))\\nprint('{} sample field features'.format(len(sample_features)))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"4. Results analysis\\nNow we can check the performance of the trained network by predicting the results of the test set and comparing them with the actual labels.\\nNote that the predict() function has been adapted to cope with the multi-class labels.\\n\",\"targets\":\"def predict(X, yOHE, parameters):\\n \\\"\\\"\\\"\\n This function is used to predict the results of a L-layer neural network.\\n It also checks them against the true labels and print the accuracy\\n Arguments:\\n X -- data set of examples you would like to label\\n yOHE -- the true labels, as multi-class vectors\\n parameters -- parameters of the trained model\\n \\n Returns:\\n p -- predictions (the label) for the given dataset X \\n \\\"\\\"\\\"\\n \\n m = X.shape[1]\\n nLabels = yOHE.shape[1]\\n n = len(parameters) \\/\\/ 2 # number of layers in the neural network\\n p = np.zeros((1, m)) # the predicted output, initialised to zero\\n y = np.zeros((1, m)) # the actual output\\n \\n # Forward propagation\\n probas, caches = L_model_forward(X, parameters)\\n\\n # probas is a matrix of shape [nLabels, m] (one-hot-encoded)\\n assert (probas.shape[1] == m)\\n \\n for i in range(0, m):\\n # convert probs to label predictions:\\n # just take the label with max prob\\n p[0,i] = np.argmax(probas[:,i])\\n\\n # convert expected results into label: takes the value with one\\n y[0,i] = np.argmax(yOHE[:,i])\\n \\n # print results\\n print(\\\"Accuracy: \\\" + str(np.sum((p == y)\\/m)))\\n \\n return p\\n\\nprint (\\\"On the training set:\\\")\\npredictions_train = predict(train_set_x, train_set_y, fit_params)\\nprint (\\\"On the test set:\\\")\\npredictions_test = predict(X_test.T, y_test.T, fit_params)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Below code will not work as f1 is not returning anything :). This is to show what can happen with one silly tab. Also it is one of the most common mistake.\\n\",\"targets\":\"def f1(a):\\n def f2(b):\\n return f2\\n def f3(c):\\n return f3\\n def f4(d):\\n return f4\\n def f5(e):\\n return f5\\ntry:\\n print (f1(1)(2)(3)(4)(5)) \\nexcept Exception as e:\\n print(e)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef cal_rss(feature_matrix, weights, output):\\n return rss(predict_output(feature_matrix, weights), output)\\n\\ncal_rss(test_feature_matrix, normalized_weights1e7, test_output)\\n\\ncal_rss(test_feature_matrix, normalized_weights1e8, test_output)\\n\\ncal_rss(test_feature_matrix, normalized_weights1e4, test_output)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Adding Average market price across all exchanges.\\n\",\"targets\":\"mkt_price = pd.read_csv(\\\"\\/Users\\/ibrahimgabr\\/Downloads\\/project-5\\/Data\\/blockchain.info\\/market_price_btc.csv\\\", header=None)\\nsubset_mkt_price = clean_blockchain_csv(mkt_price, ['date', \\\"mkt_price\\\"])\\ndf7 = pd.merge(df6, subset_mkt_price, on=\\\"date\\\", how=\\\"outer\\\")\\ndf7.head()\\n\\ndates_lst = df7['date']\\ndf7.head()\\n\\ndf7.drop([\\\"date\\\"],axis=1, inplace=True)\\nfeatures = \\\"+\\\".join(df7.columns[:-1])\\ny, X = dmatrices('mkt_price ~ ' + features, df7, return_type='dataframe')\\nvif = pd.DataFrame()\\nvif[\\\"VIF Factor\\\"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]\\nvif[\\\"features\\\"] = X.columns\\nvif.round(1) #looks like we are doing great!\\n\\ndf7.corr()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"#Get variant list form vcf file\\nopen_file = myvariant_parsing_utils.VariantParsing()\\nlist_file = open_file.get_variants_from_vcf(vcf_file)\\n\\n#Run process\\nmy_variants = annotate_batch.AnnotationMethods()\\nmyvariant_data = my_variants.my_variant_at_once(list_file)\\n\\n#Name Collection & DB\\ncollection_name = 'My_Variant_Info_Collection_Full'\\ndb_name = 'My_Variant_Database'\\n\\n#Export\\nexporting_function = mongo_DB_export.export\\nexporting_function(myvariant_data, collection_name, db_name)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nMETHOD 3: ignore annovar, get data solely from myvariant\\nEasier to run, doesn't require annovar\\nWill however be incomplete (some variants will have no information).\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"By default, PolynomialFeatures adds a constant term equal to 1. It corresponds to the intercept (or degree 0) of the model we are training.\\nLet's now define another linear regression and train it on the augmented features $\\\\phi(x) = \\\\left[1, x, x^2\\\\right]$.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nlinear_regression = LinearRegression()\\npolynomial_features = PolynomialFeatures(degree=2)\\npolynomial_X = polynomial_features.fit_transform(X[:, np.newaxis])\\nmy_regression = linear_regression.fit(polynomial_X, y)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%matplotlib inline\\n\\nimport matplotlib\\nimport matplotlib.pyplot as plt\\nimport numpy as np\\nimport tensorflow as tf\\n\\nfrom IPython.display import YouTubeVideo\\n\\nplt.style.use('bmh')\\nmatplotlib.rcParams['figure.figsize'] = (15, 4)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nChange Log\\nDate Created: 2017-03-24\\n\\nDate of Change Change Notes\\n-------------- ----------------------------------------------------------------\\n2017-03-24 Initial draft\\n\\nSetup\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# depth mean salinity\\ngrid.average(ds.SALT, ['Z']).plot();\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nGrid-aware (weighted) average\\nxgcm can also calcualate the weighted average along each axis and combinations of axes. \\nSee for example the vertical average of salinity:\\n$$ \\\\frac{\\\\int_{-H}^0 S dz}{\\\\int_{-H}^0 dz} $$\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Abtastung, Rekonstruktion der gesendeten Symbole\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nr = rt[2*group_delay:-2*group_delay:M]\\n\\nplt.stem(r[:20]); \\nplt.title(\\\"RX symbols after sampling\\\"); plt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Notebooks\\/.ipynb_checkpoints\\/AppendMicrosoftAIData-checkpoint.ipynb\\\".\\nThe first task is:\\nGenerate rank list of tags by share rate.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ntgsShrNoShrCount = {}\\nfor lst in rnkFlLst:\\n tgs = gidFtrs[lst[0]]\\n tmpDict = {'share': int(lst[1]), 'not_share': int(lst[2]), 'total' : int(lst[3])}\\n for tag in tgs:\\n oldDict ={}\\n oldDict = tgsShrNoShrCount.get(tag,{'share' : 0,'not_share' : 0,'total' : 0})\\n oldDict['share'] = oldDict.get('share',0) + tmpDict['share']\\n oldDict['not_share'] = oldDict.get('not_share',0) + tmpDict['not_share']\\n oldDict['total'] = oldDict.get('total',0) + tmpDict['total']\\n\\n tgsShrNoShrCount[tag] = oldDict\\n\\n## Append data into data frames and build visualizations\\ntgsShrCntDf = pd.DataFrame(tgsShrNoShrCount).transpose()\\ntgsShrCntDf['proportion'] = tgsShrCntDf['share'] * 100 \\/ tgsShrCntDf['total']\\ntgsShrCntDf.sort_values(by=['proportion','share'],ascending=False,inplace=True)\\ntgsShrCntDf = tgsShrCntDf[['share','not_share','total','proportion']]\\ntgsShrCntDf.to_csv(\\\"..\\/FinalResults\\/RankListTags.csv\\\")\\n\\nfullFl = HT.html(HT.body(HT.HTML(tgsShrCntDf.to_html(bold_rows = False))))\\n\\noutputFile = open(\\\"..\\/FinalResults\\/RankListTags.html\\\",\\\"w\\\")\\noutputFile.write(fullFl)\\noutputFile.close()\\n\\niFrameBlock = []\\nfig = tgsShrCntDf['proportion'].iplot(kind='line',filename=\\\"All_Tags\\\",title=\\\"Distribution of Tags\\\")\\niFrameBlock.append(fig.embed_code)\\n#plt.savefig(\\\"..\\/FinalResults\\/RankListTags.png\\\",bbox_inches='tight')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"#set the number of threads to use for your machine \\nnum_threads=20\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nConclusions\\n\\nNo spacers hit any protospacers on the Ecoli genomes in the CLdb!\\nThis indicates that there are no interegrated mobile genetic elements in the E.coli genomes that can be detected using the CRISPR spacers.\\nMaybe we will get some hits when we blast against NCBI's nt database.\\n\\nArray Blast vs NCBI's nt database\\n\\nNOTE: you will need the BLAST nt database & the blast+ toolkit\\nThe arrayBlast wrapper only works with the genomes in CLdb.\\nWe will need to do this part 'manually'\\nThe blast output just needs to be in .xml format.\\nYou will need the BLAST nt database\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"FastRoot.ipynb\\\".\\nThe first task is:\\nHmm, weren't we expecting 0x5f3759df instead of 0x5f400000?\\n * It turns out that magic constants that are close to each other have nearly identicle behavior. Here, 0x5f3759df is only 0.035% away from 0x5f400000! Close enough.\\nBehold! The Fast Arbitrary Power Method!\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef qexp(C):\\n # (1 - C) * f2l(1) + C * f2l(x)\\n return np.vectorize(lambda x: l2f((1 - C) * f2l(1) + C * f2l(x)))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Add landsat and cloud mask to a common Xarray\\n\",\"targets\":\"product_dataset = xr.merge([year_of_landsat_dataset, is_clear_mask])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Converting categorical feature to numeric\\nWe can now convert the EmbarkedFill feature by creating a new numeric Port feature.\\n\",\"targets\":\"for dataset in combine:\\n dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)\\n\\ntrain_df.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Compute MNE-dSPM inverse solution on single epochs\\nCompute dSPM inverse solution on single trial epochs restricted\\nto a brain label.\\n\",\"targets\":\"# Author: Alexandre Gramfort Discrete Fourier Series<\\/h1>\\n\\nConsider a function $f$ periodic over a domain $0\\\\leq x\\\\leq 2\\\\pi$, discretized by $N_x$ points. The longest wavelength wave that can be contained in the domain is $L_x$. A phyiscal understanding of Fourier series is the representation of a system as the sum of many waves fo wavelengths smaller or equal to $L_x$. In a discrete sense, the series of wave used to decompose the system is defined as:\\n$$\\na_n\\\\exp\\\\left(\\\\hat{\\\\jmath}\\\\frac{2\\\\pi n}{Lx}\\\\right)\\n$$\\nsuch that\\n
Discrete Fourier Transform (DFT)<\\/h1>\\n\\nIn scientific computing we are interested in applying Fourier series on vectors or matrices, containing a integer number of samples. The DFT is the fourier series for the number of samples. DFT functions available in python or any other language only care about the number of samples, therefore the wavenumber is \\n
Exercise 3a<\\/h1>\\n<\\/div>\\n\\nThe timeseries below was generated by a linear function of time, $y(t)= mt + b$. In addition to observational uncertainty $\\\\sigma$ (white noise), there is a fair bit of correlated (red) noise, which we will assume is well described\\nby the squared exponential covariance with a certain (unknown) amplitude $A$ and timescale $l$.\\nYour task is to estimate the values of $m$ and $b$, the slope and intercept of the line, respectively. In this part of the exercise, assume there is no correlated noise. Your model for the $n^\\\\mathrm{th}$ datapoint is thus\\n$$\\n\\\\begin{align}\\n y_n \\\\sim \\\\mathcal{N}(m t_n + b, \\\\sigma_n\\\\mathbf{I})\\n\\\\end{align}\\n$$\\nand the probability of the data given the model can be computed by calling your GP likelihood function:\\npython\\ndef lnprob(params):\\n m, b = params\\n model = m * t + b\\n return ln_gp_likelihood(t, y - model, sigma, A=0, l=1)\\nNote, importantly, that we are passing the residual vector, $y - (mt + b)$, to the GP, since above we coded up a zero-mean Gaussian process. We are therefore using the GP to model the residuals of the data after applying our physical model (the equation of the line).\\nTo estimate the values of $m$ and $b$ we could generate a fine grid in those two parameters and compute the likelihood at every point. But since we'll soon be fitting for four parameters (in the next part), we might as well upgrade our inference scheme and use the emcee package to do Markov Chain Monte Carlo (MCMC). If you haven't used emcee before, check out the first few tutorials on the documentation page. The basic setup for the problem is this:\\n```python\\nimport emcee\\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)\\ninitial = [4.0, 15.0]\\np0 = initial + 1e-3 * np.random.randn(nwalkers, ndim)\\nprint(\\\"Running burn-in...\\\")\\np0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do\\nsampler.reset()\\nprint(\\\"Running...\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nt, y, sigma = np.loadtxt(\\\"data\\/sample_data_line.txt\\\", unpack=True)\\nm_true, b_true, A_true, l_true = np.loadtxt(\\\"data\\/sample_data_line_truths.txt\\\", unpack=True)\\nplt.errorbar(t, y, yerr=sigma, fmt=\\\"k.\\\", label=\\\"observed\\\")\\nplt.plot(t, m_true * t + b_true, color=\\\"C0\\\", label=\\\"truth\\\")\\nplt.legend(fontsize=12)\\nplt.xlabel(\\\"time\\\")\\nplt.ylabel(\\\"data\\\");\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# The map projection capabilities come from the cartopy package. There are many possible projections\\nimport cartopy.crs as ccrs\\n\\ndef make_map(field):\\n '''input field should be a 2D xarray.DataArray on a lat\\/lon grid.\\n Make a filled contour plot of the field, and a line plot of the zonal mean\\n '''\\n fig = plt.figure(figsize=(14,6))\\n nrows = 10; ncols = 3\\n mapax = plt.subplot2grid((nrows,ncols), (0,0), colspan=ncols-1, rowspan=nrows-1, projection=ccrs.Robinson())\\n barax = plt.subplot2grid((nrows,ncols), (nrows-1,0), colspan=ncols-1)\\n plotax = plt.subplot2grid((nrows,ncols), (0,ncols-1), rowspan=nrows-1)\\n cx = mapax.contourf(field.lon, field.lat, field, transform=ccrs.PlateCarree())\\n mapax.set_global(); mapax.coastlines();\\n plt.colorbar(cx, cax=barax, orientation='horizontal')\\n plotax.plot(field.mean(dim='lon'), field.lat)\\n plotax.set_ylabel('Latitude')\\n plotax.grid()\\n return fig, (mapax, plotax, barax), cx\\n\\n# Plot a single time slice of surface air temperature just as example\\nfig, axes, cx = make_map(atm['cpl_control'].TREFHT.isel(time=0))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nSome CMIP climate sensitivity results to compare against\\n\\n\\nComparing against the multi-model mean of the ECS and TCR, our model is apparently slightly less sensitive than the CMIP5 mean.\\nLet's make some maps to compare spatial patterns of transient vs. equilibrium warming\\nHere is a helper function that takes a 2D lat\\/lon field and renders it as a nice contour map with accompanying zonal average line plot.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Project 2\\/dlnd_image_classification.ipynb\\\".\\nThe first task is:\\nFully-Connected Layer\\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef fully_conn(x_tensor, num_outputs):\\n \\\"\\\"\\\"\\n Apply a fully connected layer to x_tensor using weight and bias\\n : x_tensor: A 2-D tensor where the first dimension is batch size.\\n : num_outputs: The number of output that the new tensor should be.\\n : return: A 2-D tensor where the second dimension is num_outputs.\\n \\\"\\\"\\\"\\n weights = tf.Variable(tf.random_normal([x_tensor.get_shape().as_list()[1],num_outputs],stddev=0.1))\\n bias = tf.Variable(tf.zeros(num_outputs,dtype=tf.float32))\\n \\n fc_layer = tf.add(tf.matmul(x_tensor, weights), bias)\\n fc_layer = tf.nn.relu(fc_layer)\\n return fc_layer\\n\\n\\n\\\"\\\"\\\"\\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\\n\\\"\\\"\\\"\\ntests.test_fully_conn(fully_conn)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"
\\n
\\n
\\nThis activity is for advanced students only and extra credit will be allocated. Students will not be penalized for not completing this activity.\\n<\\/div>\\n\\nInstructions\\n\\n\\n\\nUsing the \\\"dt.dayofweek()\\\" Pandas method, find the days of the week with most and least occurences of the access point you identified in Exercise 2.1 above. In your answer, provide both the days and corresponding number of occurrences of the access point on those days.\\n\\nHint: You will need to use the \\\"dt.dayofweek()\\\" series method on a datetime Pandas series object, which has been filtered to only contain instances of the identified access point.\\n\\n\\n\\nDescribe and explain the trend you observe regarding access point occurrences during the week, and whether or not it is similar to the behavior you would have expected.\\n\\nHint: To view the trend, use the Pandas \\\"plot(kind='bar')\\\" on the series object containing the counts of access point occurrences during the week.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Your answer here.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"Additional keyword arguments give more control on centering and positioning, and you can pass a list of [color_negative, color_positive] to highlight lower and higher values.\\nHere's how you can change the above with the new align option, combined with setting vmin and vmax limits, the width of the figure, and underlying css props of cells, leaving space to display the text and the bars:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndf2.style.bar(align=0, vmin=-2.5, vmax=2.5, color=['#d65f5f', '#5fba7d'],\\n width=60, props=\\\"width: 120px; border-right: 1px solid black;\\\").format('{:.3f}', na_rep=\\\"\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"But a more less verbose and quicker approach would be:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ncondition = (a > -90) & (a < -40)\\ncondition\\n\\nresult[condition] = a[condition]**2\\nresult[~condition] = 1\\nprint(result)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"BGS\\n\",\"targets\":\"from desitarget.mock.mockmaker import BGSMaker\\n\\n%time demo_mockmaker(BGSMaker, seed=seed)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Create Linear Model\\n\",\"targets\":\"# Create a linear regression\\nols = linear_model.LinearRegression()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Falcon-9\\/Falcon-9 frames.ipynb\\\".\\nThe first task is:\\nThis notebook shows an analysis of the Falcon-9 upper stage S-band telemetry frames. It is based on r00t.cz's analysis.\\nThe frames are CCSDS Reed-Solomon frames with an interleaving depth of 5, a (255,239) code, and an (uncoded) frame size of 1195 bytes.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nx = np.fromfile('falcon9_frames_20210324_084608.u8', dtype = 'uint8')\\nx = x.reshape((-1, 1195))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Python\\/4 Automatic Theorem Proving\\/Knuth-Bendix-Algorithm-KBO.ipynb\\\".\\nThe first task is:\\nWe define the class OrderException to be able to deal with equations that can't be ordered into a rewrite rule.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nclass OrderException(Exception):\\n pass\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# agg\\nx = sqlContext.createDataFrame([(\\\"Alice\\\",\\\"Bob\\\",0.1),(\\\"Bob\\\",\\\"Carol\\\",0.2),(\\\"Carol\\\",\\\"Dave\\\",0.3)], ['from','to','amt'])\\ny = x.agg({\\\"amt\\\":\\\"avg\\\"})\\nx.show()\\ny.show()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\\n\\n<\\/a>\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# plot strings!\\nn_qubits = len(graph.nodes())\\ndef plot(inst, probs):\\n probs = probs.real\\n states = inst.states\\n fig = plt.figure()\\n ax = fig.add_subplot(111)\\n ax.set_xlabel(\\\"state\\\",fontsize=20)\\n ax.set_ylabel(\\\"Probability\\\",fontsize=20)\\n ax.set_xlim([0, 2**n_qubits])\\n rec = ax.bar(range(2**n_qubits), probs[:,0],)\\n num_states = [0, \\n int(\\\"\\\".join(str(x) for x in [0,1] * (n_qubits\\/\\/2)), 2),\\n int(\\\"\\\".join(str(x) for x in [1,0] * (n_qubits\\/\\/2)), 2),\\n 2**n_qubits - 1]\\n ax.set_xticks(num_states)\\n ax.set_xticklabels(map(lambda x: inst.states[x], num_states), rotation=90)\\n plt.grid(True)\\n plt.tight_layout()\\n plt.show()\\n\\nt = np.hstack((betas, gammas))\\nprobs = ring_cut_inst.probabilities(t)\\nplot(ring_cut_inst, probs)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nWe can see that the first 2 most frequently sampled strings are the alternating solutions to the ring graph (well damn, they are). Since we have to access the wave function, we can go one step further and view the probability distribution over the bit strings produced by our $p = 1$ circuit.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Plot photon current density<\\/h3>\\n
Solution<\\/summary>\\n We can fetch the html at that url with the following code:\\n\\n ```\\n # import the library we will use\\n import requests\\n\\n # specify the url where the data we want to fetch lives\\n url = 'http:\\/\\/www.gutenberg.org\\/files\\/1342\\/1342-h\\/1342-h.htm'\\n\\n # get the data at the requested url\\n response = requests.get(url)\\n\\n # get the HTML from the response\\n html = response.text\\n ```\\n<\\/details>\\n\\nParsing HTML data with BeautifulSoup\\nAfter fetching some HTML data, the next thing we'll want to do is to \\\"parse\\\" that HTML to extract the subset of the data that's of interest. \\nIn what follows, we'll use the BeautifulSoup library to parse HTML. To get started with BeautifulSoup, let's install it with the following command:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n!pip install beautifulsoup4\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"But what is so special about solutions of minimum length? For machine\\nlearning, driving the objective function to zero is symptomatic of overfitting\\nthe data. Usually, at the zero bound, the machine learning method has\\nessentially memorized the training data, which is bad for generalization. Thus,\\nwe can effectively stall this problem by defining a region for the solution\\nthat is away from the zero-bound.\\n$$\\n\\\\begin{aligned}\\n& \\\\underset{\\\\boldsymbol{\\\\beta}}{\\\\text{minimize}}\\n& & \\\\Vert y - \\\\mathbf{X}\\\\boldsymbol{\\\\beta}\\\\Vert_2^2 \\\\\\n& \\\\text{subject to:}\\n& & \\\\Vert\\\\boldsymbol{\\\\beta}\\\\Vert_2 < c\\n\\\\end{aligned}\\n$$\\nwhere $c$ is the tuning parameter. Using the same process as before,\\nwe can re-write this as the following,\\n$$\\n\\\\min_{\\\\boldsymbol{\\\\beta}\\\\in\\\\mathbb{R}^p}\\\\Vert\\ny-\\\\mathbf{X}\\\\boldsymbol{\\\\beta}\\\\Vert_2^2 +\\\\alpha\\\\Vert\\\\boldsymbol{\\\\beta}\\\\Vert_2^2\\n$$\\nwhere $\\\\alpha$ is the tuning parameter. These are the penalized or\\nLagrange forms of these problems derived from the constrained versions. The\\nobjective function is penalized by the $\\\\Vert\\\\boldsymbol{\\\\beta}\\\\Vert_2$ term.\\nFor $L_2$ penalization, this is called ridge regression. This is\\nimplemented in Scikit-learn as Ridge. The following code sets this up for\\nour example,\\n\",\"targets\":\"from sklearn.linear_model import Ridge\\nclf = Ridge(alpha=100.0,fit_intercept=False)\\nclf.fit(np.array(X).astype(float),np.array(y).astype(float))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Read in the flight and radar data<\\/b>\\n\",\"targets\":\"fl1 = awot.io.read_netcdf(fname=FltLevf[1:-1], platform='p-3')\\nr1 = awot.io.read_windsyn_tdr_netcdf(fname=P3Radf[1:-1], field_mapping=None)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"2. TF-slim(tf.contrib.slim)\\n\\nContrib 중 하나의 library로, 상위 수준의 개념(argument scoping, layer, variable)으로 모델을 짧고 쉽게 정의할 수 있게 만듦\\n많이 사용되는 regularizer를 사용하여 모델을 단순하게 함. VGG, AlexNet과 같이 많이 쓰이는 모델을 개발 해놓음\\n\\nwithout TF-Slim\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ninput = ...\\nwith tf.name_scope('conv1_1') as scope:\\n kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,\\n stddev=1e-1), name='weights')\\n conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')\\n biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),\\n trainable=True, name='biases')\\n bias = tf.nn.bias_add(conv, biases)\\n conv1 = tf.nn.relu(bias, name=scope)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"#If current prices are higher than 50 or 200 days moving average, that means prices are going up\\n\\n#200 days moving average\\nth_moving_avg = google.get_200day_moving_avg()\\n\\n#50 days moving average\\nfifty_moving_avg = google.get_50day_moving_avg()\\n\\nprint \\\"200 days moving average: $\\\", th_moving_avg\\nprint \\\"50 days moving average: $\\\", fifty_moving_avg\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nMoving averages. Get a peek of what prices have been like in the past.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"1. Exploratory Data Analysis <\\/a>\\nThe data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well?\\nWe'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set!\\nBut first, let's load it in and visualise it! Each row represents a household, and the features we have available to us are:\\n\\nswitch: whether a household switched to another well;\\narsenic: level of arsenic in drinking water;\\neduc: level of education of \\\"head of household\\\";\\ndist100: distance to nearest safe-drinking well;\\nassoc: whether the household participates in any community activities.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nwells = pd.read_csv(\\n \\\"http:\\/\\/stat.columbia.edu\\/~gelman\\/arm\\/examples\\/arsenic\\/wells.dat\\\", sep=\\\" \\\"\\n)\\n\\nwells.head()\\n\\nfig, ax = plt.subplots(2, 2, figsize=(12, 6))\\nfig.suptitle(\\\"Target variable plotted against various predictors\\\")\\nsns.scatterplot(data=wells, x=\\\"arsenic\\\", y=\\\"switch\\\", ax=ax[0][0])\\nsns.scatterplot(data=wells, x=\\\"dist\\\", y=\\\"switch\\\", ax=ax[0][1])\\nsns.barplot(\\n data=wells.groupby(\\\"assoc\\\")[\\\"switch\\\"].mean().reset_index(),\\n x=\\\"assoc\\\",\\n y=\\\"switch\\\",\\n ax=ax[1][0],\\n)\\nax[1][0].set_ylabel(\\\"Proportion switch\\\")\\nsns.barplot(\\n data=wells.groupby(\\\"educ\\\")[\\\"switch\\\"].mean().reset_index(),\\n x=\\\"educ\\\",\\n y=\\\"switch\\\",\\n ax=ax[1][1],\\n)\\nax[1][1].set_ylabel(\\\"Proportion switch\\\");\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Figure\\nplt.figure(figsize=(10,8))\\n\\n# Scatter plot of the data points\\nplt.scatter(X[Y==0,0], X[Y==0,3], color='red')\\nplt.scatter(X[Y==1,0], X[Y==1,3], color='blue')\\nplt.scatter(X[Y==2,0], X[Y==2,3], color='green')\\n\\n# Labels\\nplt.title('Fisher\\\\'s iris datset')\\nplt.xlabel('Sepal length')\\nplt.ylabel('Petal width')\\n\\n# Show\\nplt.show()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nScatter plot of data\\nFirst, lets produce a scatter plot of sepal length and petal width, with the colour of the points indicating ground truth class:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# 입력 크기는 영화 리뷰 데이터셋에 적용된 어휘 사전의 크기입니다(10,000개의 단어)\\nvocab_size = 10000\\n\\nmodel = keras.Sequential()\\nmodel.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))\\nmodel.add(keras.layers.GlobalAveragePooling1D())\\nmodel.add(keras.layers.Dense(16, activation=tf.nn.relu))\\nmodel.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))\\n\\nmodel.summary()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n모델 구성\\n신경망은 층(layer)을 쌓아서 만듭니다. 이 구조에서는 두 가지를 결정해야 합니다:\\n\\n모델에서 얼마나 많은 층을 사용할 것인가?\\n각 층에서 얼마나 많은 은닉 유닛(hidden unit)을 사용할 것인가?\\n\\n이 예제의 입력 데이터는 단어 인덱스의 배열입니다. 예측할 레이블은 0 또는 1입니다. 이 문제에 맞는 모델을 구성해 보죠:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"Code\\/notebooks\\/bootcamp_graphics_s17_UG.ipynb\\\".\\nThe first task is:\\nComment. All of these statements must be in the same cell for this to work. \\nComment. This is overkill -- it looks horrible -- but it makes the point that we control everything in the plot. We recommend you do very little of this until you're more comfortable with the basics. \\n\\nApproach 3: Create figure objects and apply methods\\nThis approach is probably the most mysterious, but it's the best. \\nThe idea is to use the matplotlib.pyplot function subplots(), which creates two objects:\\n* fig : figure object -- blank canvas for creating a figure\\n* ax : axis object -- everything in the figure: axes, labels, legend\\napply methods on these objects to set the various elements of the graph. \\nCreate objects. We'll see this line over and over:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfig, ax = plt.subplots() # create fig and ax objects\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"
Note<\\/h4>
\\nObviously it is not very relevant for the iris data set but still, it has to be defined.<\\/p>\\n
\\n
\\n<\\/a>\\nPlotting the sample data\\nTo get an intuitive idea of how our data looks like, let us visualize it in a simple scatter plot.\\n\",\"targets\":\"%pylab inline\\n\\nimport numpy as np\\nfrom matplotlib import pyplot as plt\\n\\nf, ax = plt.subplots(figsize=(7, 7))\\nax.scatter(x1_samples[:,0], x1_samples[:,1], marker='o', color='green', s=40, alpha=0.5, label='$\\\\omega_1$')\\nax.scatter(x2_samples[:,0], x2_samples[:,1], marker='s', color='blue', s=40, alpha=0.5, label='$\\\\omega_2$')\\nax.scatter(x3_samples[:,0], x3_samples[:,1], marker='^', color='red', s=40, alpha=0.5, label='$\\\\omega_2$')\\nplt.legend(loc='upper right') \\nplt.title('Training Dataset', size=20)\\nplt.ylabel('$x_2$', size=20)\\nplt.xlabel('$x_1$', size=20)\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"\\\"\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nw, v0, v1 = eig(Ms, left=True)\\nw = w.real\\nprint '%s\\\\n%s' % (w, v0)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"5. $k$-NN classifier\\nA simple extension of the 1-NN classifier is the $k$-NN classifier, which, for any input sample ${\\\\bf x}$, computes the $k$ closest neighbors in the training set, and takes the majority class in the subset. To avoid ties, in the binary classification case $k$ is usually taken as an odd number.\\nThe following method implements the $k$-NN classifiers.\\n\",\"targets\":\"def knn_classifier(X1,Y1,X2,k):\\n \\\"\\\"\\\" Compute the k-NN classification for the observations contained in\\n the rows of X2, for the training set given by the rows in X1 and the\\n components of S1. k is the number of neighbours.\\n \\\"\\\"\\\"\\n if X1.ndim == 1:\\n X1 = np.asmatrix(X1).T\\n if X2.ndim == 1:\\n X2 = np.asmatrix(X2).T\\n distances = spatial.distance.cdist(X1,X2,'euclidean')\\n neighbors = np.argsort(distances, axis=0, kind='quicksort', order=None)\\n closest = neighbors[range(k),:]\\n \\n y_values = np.zeros([X2.shape[0],1])\\n for idx in range(X2.shape[0]):\\n y_values[idx] = np.median(Y1[closest[:,idx]])\\n \\n return y_values\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Calculate average age of team\\n\",\"targets\":\"dflfc_openers[['Season', 'AgeAtOpener']].groupby('Season').agg(lambda x: round(x.mean(), 1))\\n\\ndflfc_openers[['Season', 'AgeAtOpener']].groupby('Season').agg(lambda x: round(x.mean(), 1)).plot(kind='bar', ylim=(24,28))\\n\\ndflfc_transfers_with_dob[dflfc_transfers_with_dob.Direction == 'In'][['Season', 'Player', 'Fee', 'AgeAtTransfer']]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"# Please type your code here:\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nChanging case\\nWe can also change the case of a string using the built in method name. Lets see how:\\nFor uppercase, use the upper() method. In the documentation (above link) we see it listed as: str.upper()\\nnew_dna.upper()\\n\\nFor lowercase, use the lower() method.\\nnew_dna.lower()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"I am working on the file \\\"3\\/dlnd_tv_script_generation.ipynb\\\".\\nThe first task is:\\nNeural Network Training\\nHyperparameters\\nTune the following parameters:\\n\\nSet num_epochs to the number of epochs.\\nSet batch_size to the batch size.\\nSet rnn_size to the size of the RNNs.\\nSet seq_length to the length of sequence.\\nSet learning_rate to the learning rate.\\nSet show_every_n_batches to the number of batches the neural network should print progress.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Number of Epochs\\nnum_epochs = 80\\n# Batch Size\\nbatch_size = 23\\n# RNN Size\\nrnn_size = 256\\n# Sequence Length\\nseq_length = 33\\n# Learning Rate\\nlearning_rate = 0.005\\n# Show stats for every n number of batches\\nshow_every_n_batches = 100\\n\\n\\\"\\\"\\\"\\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\\n\\\"\\\"\\\"\\nsave_dir = '.\\/save'\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Vectorized String Operations<\\/h1>\\n\",\"targets\":\"movies.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"Again, let's define a dictionary to collect these correction function.\\n\",\"targets\":\"dCorrectionFunc = {\\n \\\"DG\\\": dgDG,\\n \\\"SG\\\": dgSG,\\n \\\"LumpLo\\\": dgLumpLo,\\n \\\"LumpChLo\\\": dgLumpChLo,\\n \\\"Ga\\\": dgGa\\n}\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n"
"{\"inputs\":\"%%html\\n\\nOne-way trip in DC\\n
Note<\\/h4>
\\n